Weakness of Will and Delay Discounting


Table of Contents
Cover
Weakness of Will and Delay Discounting
Copyright
Dedication
Contents
Preface
1: Introduction
PART I: PHILOSOPHY
2: Weakness of the Will
2.1 Conceptual Background
2.2 Weakness of the Will: A Rough Sketch
3: Philosophical Accounts
3.1 Ancient Philosophers
3.1.1 Plato’s Socrates
3.1.2 Aristotle
3.2 Hare
3.3 Davidson
3.4 Bratman and Mele
3.5 Holton
PART II: SCIENCE
4: Agency in Descriptive Research
4.1 An Economic Framework of Human Agency
4.2 Weakness of Will as Unfree or Irrational Behaviour
4.3 Weakness of Will within the Framework
5: Discounting
5.1 Delay Discounting Theory
5.2 Preference Reversals
5.3 Weakness of Will
5.4 Preference Reversals in Discounting Models
5.4.1 Exponential and Hyperbolic Models
5.4.2 Comparing Exponential and Hyperbolic Models
5.5 Preference Reversals Versus Weakness of Will
5.6 Weakness of Will Concerning Immediate Rewards
5.7 Conclusion
PART III: SCIENCE MEETS PHILOSOPHY
6: Describing Weakness of Will
6.1 Hazards (Sozou’s Suggestion)
6.2 Uncertain Hazards (Dasgupta and Maskin’s Model)
6.3 Uncertainty Processing as a Mechanism of Weakness of Will
6.4 Weak-Willed Action as Biased Action
6.4.1 Biases and Biased Action
6.4.2 Weak-Willed, Biased Actions
6.5 Conclusion
7: Criticizing Weakness of Will
7.1 Rationality
7.2 Irrational Weakness of Will
7.3 Irrational Delay Discounting
7.3.1 Incoherent Preferences
7.3.2 Reasons to Promote One’s Well-Being
7.3.3 Exploitation
7.3.4 Uncertainty Bias
7.4 Conclusion
8: Practical Takeaways
8.1 What to Do about Biases
8.2 Adjusting the Environment
8.3 Addressing Weak-Willed Delay Discounting
8.3.1 Changing the Relative Amounts of Benefits
8.3.2 Adjusting the Delay
8.3.3 Reducing Uncertainty
8.3.4 Changing Individual Sensitivity towards Uncertainty
8.4 Conclusion
9: Conclusion
Appendices
A Models of Weak-Willed Discounting
B Hyperbolic Delay Discounting
Synchronic Preference Reversals Require Two Different Values for k
Synchronic Preference Reversals: Size of k
Diachronic Preference Reversals: Size of k
C Exponential Delay Discounting
Synchronic Preference Reversals
Diachronic Preference Reversals
D Sozou’s Model
E Dasgupta and Maskin’s Model
F Discounting Models and Marshmallow Cases
Glossary
Bibliography
Index



Weakness of Will and Delay Discounting
Nora Heinzelmann


Great Clarendon Street, Oxford, OX2 6DP, United Kingdom

Oxford University Press is a department of the University of Oxford. It furthers the University’s objective of excellence in research, scholarship, and education by publishing worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and in certain other countries.

© Nora Heinzelmann 2023

The moral rights of the author have been asserted.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior permission in writing of Oxford University Press, or as expressly permitted by law, by licence or under terms agreed with the appropriate reprographics rights organization. Enquiries concerning reproduction outside the scope of the above should be sent to the Rights Department, Oxford University Press, at the address above. You must not circulate this work in any other form and you must impose this same condition on any acquirer.

Published in the United States of America by Oxford University Press, 198 Madison Avenue, New York, NY 10016, United States of America

British Library Cataloguing in Publication Data
Data available

Library of Congress Control Number: 2023936728

ISBN 978–0–19–286595–3
DOI: 10.1093/oso/9780192865953.001.0001

Printed and bound by CPI Group (UK) Ltd, Croydon, CR0 4YY

Links to third party websites are provided by Oxford in good faith and for information only. Oxford disclaims any responsibility for the materials contained in any third party website referenced in this work.


For Marcus



Preface

Our world could be better than it currently is. There is a lot that we as individuals and collectives could and ought to do to make it better. Yet, all too often, we know what we could and should do, and still we do not do it. For example, we do not persist in our resolutions to eat healthier and work harder, and we act against our better judgement that we ought to increase vaccination rates or cut greenhouse gas emissions.

This problem has always bugged me, not just as a philosopher. Philosophy sometimes discusses it under the label of ‘akrasia’ or ‘weakness of will’. To target it, I think, it is not enough to describe it or to explain why it is problematic. We also need to take into account how we can and do act. In other words, we should adopt an interdisciplinary approach that links empirical with conceptual and normative investigations. My monograph is intended to contribute to this. It connects philosophical research with delay discounting theory, which is popular in behavioural science. Thereby, it seeks to provide not only a new perspective on a common issue our world faces but also insights into how to address it more successfully.

The monograph developed from parts of my doctoral dissertation and various lecture notes. I thank everyone who supported this work. In particular, Arif Ahmed, Hannah Altehenger, Michael Bratman, Tom Dougherty, Natalie Gold, Richard Holton, Christian Kietzmann, Al Mele, Patricia Rich, Don Ross, Pauline Sabrier, Alex Soutschek, as well as students in Cambridge, Erlangen, and Mainz have read parts or all of the manuscript, sometimes multiple times. Velia Fischer kindly helped prepare the index and glossary. For discussion, encouragement, or information I thank Simona Aimar, Monika Betzler, Partha Dasgupta, Gerhard Ernst, Julia Henke Haas, Rae Langton, Erasmus Mayr, Jessica Moss, Damien Storey, and audiences in Berlin, Bristol, Cambridge, Hamburg, Helsinki, Münster, Munich, Noto, Osnabrück, Oxford, St. Louis (MO), and Tübingen. Moreover, I am indebted to the behavioural scientists who taught and trained me, notably Benedetto de Martino’s and Philippe Tobler’s groups. Last but not least I thank Peter Momtchiloff, his colleagues, and the anonymous readers of Oxford University Press.


1 Introduction

Someone goes on a diet in order to eat healthier. Among other things, they adopt a ban on all overly fatty and sugary foods. One day, they are looking at the menu for that night’s dinner. Considering the courses and their nutritional values, they decide to skip dessert later. However, when dessert is served, they fail to stick to their earlier decision and indulge in the food that the diet strictly forbids.

Hardly anyone would deny that, as we have described the dieter, they are weak-willed. Like this one, typical cases of weakness of will1 concern bodily desires for, e.g., food, drink, or sex. They have been discussed for millennia by scholars from Aristotle (Nicomachean Ethics 1147a31–1147b6) to Davidson ([1970] 1980c), Mele (2012, p. 37), Ainslie (2001, p. 87), and Mischel (2014). Throughout this book, the dieter will serve as our paradigm example of weakness of will. We shall examine how different accounts describe and analyse it.

1 I shall use ‘weakness of will’ and ‘weakness of the will’ interchangeably.

This book has two goals. First, it aims to facilitate cross-disciplinary understanding. The empirical literature on delay discounting is hardly accessible to a reader not familiar with econometric theory. Conversely, researchers in the behavioural sciences may find philosophical accounts invoking discounting models difficult to understand without inside knowledge of the debates and historical background. Targeting this lacuna, the present monograph renders relevant conceptions and findings from both disciplines intelligible to outsiders. Some passages may thus not be of interest to readers with expertise in either of these fields. In particular, philosophers may wish to skip Part I, which covers basics from the philosophy of weakness of the will. Conversely, readers from the behavioural and economic sciences could pass over Part II, which introduces accounts of value-based choice and discounting models. In Part III, those who know the discounting models by Sozou, Dasgupta, and Maskin may wish to skip Sections 6.1–6.2, readers acquainted with research into biases can ignore Section 6.4.1, and anyone familiar with the literature on rationality might want to pass over Section 7.1.

The second aim of this book is to provide new insights by linking philosophical theory about weakness of will with delay discounting theory. More specifically, Part III proposes a novel understanding of weak-willed delay discounting as determined by uncertainty or risk processing. This suggests a new way of criticizing weak-willed actions as biased ones. This, in turn, has practical implications for individuals and policymakers: we may address the cognitive bias by adjusting the size, delay, uncertainty, or risk of the rewards in question, or our sensitivity to them.

The remainder of the Introduction is an overview of the book. It has three parts: the first focuses on the philosophy of weakness of the will. The second is concerned with the science of delay discounting and relevant background assumptions it relies on. The third combines both perspectives to draw conclusions relevant to research, policymaking, and individuals seeking to understand and tackle weakness of will.

Part I (‘Philosophy’) focuses on the philosophy of weakness of will. Chapter 2 (‘Weakness of the Will’) introduces basic conceptions and questions from the literature. It starts with a detailed explanation of core concepts (‘agent’, ‘action’, ‘judgement’, ‘intention’) as they feature in philosophical accounts of weakness of the will. The chapter then roughly characterizes weakness of will as a failure by the weak-willed person’s own standards and shows how it accounts for three core features of weakness of will: it is puzzling, it involves a conflict, and it is typically regarded as a defect.

Chapter 3 (‘Philosophical Accounts’) provides a spotlight overview highlighting historically influential as well as contemporary accounts. It contrasts these approaches with respect to their terminology (e.g. ‘akrasia’, ‘incontinence’, ‘moral weakness’), conception of the phenomenon (state of character, weakness of a faculty of the mind, failure of foresight or of practical reasoning, etc.), and presumed normative failure (of knowledge, morality, or rationality). Accounts that rely on discounting theory (Mele, Bratman) are discussed in greater detail. The chapter breaks down into the following sections:

1. Ancient philosophers (Plato and Aristotle): This section introduces the longstanding philosophical debate about whether weakness of will (‘akrasia’) is possible. It illustrates this with the opposing views of Plato’s Socrates and Aristotle. Socrates regards weakness of will as a temporary loss of knowledge due to defective foresight. For Aristotle, the weak-willed person makes a mistake in practical reasoning, drawing and acting on the wrong conclusion.

2. Hare: The analytic philosopher Hare regards weakness of will (‘moral weakness’) as a moral failure. Using Medea’s killing of her own children as a prime example, Hare likens moral weakness to (temporary) insanity. His view thus bears similarities to Socrates’ and Aristotle’s positions.

3. Davidson: Building on Aristotle and Aquinas, Davidson describes weakness of the will (‘incontinence’) as irrational but not as immoral. His most influential paper on the topic characterizes it as a failure of practical reasoning. In his later work, Davidson regards weakness of will as a failure of intention, and eventually argues that it is due to a divided mind in conflict with itself.


4. Mele and Bratman: Both contemporary philosophers criticize and refine Davidson’s proposal but continue to regard weak-willed behaviour as action against the agent’s better judgement. Also, both draw on research using discounting models, in particular the work of psychiatrist Ainslie, to describe the psychological mechanism leading to weak-willed behaviour.

5. Holton: Turning against the Davidsonian tradition, Holton argues that weakness of will is a failure to persist in a previously formed intention rather than a failure to intend or act in accordance with one’s best judgement. By distinguishing weakness of will from akrasia, Holton’s approach raises questions about the unity of the concept of weakness of will.

In conclusion, the chapter provides the reader with a broad overview of different philosophical approaches to weakness of the will and showcases prominent examples invoking delay discounting models.

Part II (‘Science’) focuses on the science of weakness of will and more specifically on scientific models of delay discounting. This part consists of two chapters, 4 and 5.

Chapter 4 (‘Agency in Descriptive Research’) explains the relevant conceptual background that economic and empirical research is based on, viz. a specific account of agency. According to this framework of human agency, the agent has preferences about their available options, which in turn align with the value or utility assigned to each option and determine how the agent decides between their options if given a choice. Delay discounting theory, as initially developed, is based on these assumptions. Thus, they in turn underlie and constrain any account of weakness of will invoking delay discounting theory. From this perspective, it may be incoherent to characterize weakness of will as, say, a failure to make decisions in accordance with one’s preferences, because that conflicts with the conceptual assumption that decisions align with preferences.

Empirical research invokes delay discounting theory to account for weak-willed behaviour, among other things. By describing this behaviour and its underlying mechanism, researchers explain and predict it. That is, the behavioural sciences provide a descriptive account of agency, i.e. an account of how humans act. Thus, they do not rely on normative assumptions about how humans should act. However, delay discounting theory and the framework of agency underlying it were initially developed as normative suggestions: they specify how we should or ideally would act. Descriptive accounts invoking delay discounting theory thus presuppose normative assumptions. Therefore, they are subject to both descriptive and normative desiderata. For example, they may be expected to both predict an agent’s preferences and assume that they are transitive.

To avoid these challenges, one may be tempted to restrict delay discounting theory or the framework of agency underlying it to the normative domain only. The approach would then be regarded as one of ideal agency and not supposed to describe actual human beings. As a consequence, it would not be of use to empirical research. However, to draw on the large empirical literature, the present monograph follows the sciences in their descriptive understanding of delay discounting theory. Because its underlying conceptual and normative constraints are substantive from a philosophical perspective, Chapter 4 concludes with an outline of the possible ways in which weakness of will can be coherently described given these constraints. On this view, weakness of will is best understood as a specific kind of preference reversal. Discounting models are, in turn, apt to account for these reversals.

Chapter 5 (‘Discounting’) explains delay discounting theories in greater detail. Roughly, ‘discounting’ refers to a change in value with some feature like probability or temporal delay. For example, the longer we have to wait for a pleasant event, the more we discount its value. Delay discounting theories from economics and psychology model how values change with temporal delay, and how the discounted value of one option relative to another may in turn affect the agent’s preference and choice. For example, as time and delay change, an agent may reverse their preference for a long-term benefit of dieting over a short-term benefit of having dessert. Economists call such a shift a ‘preference reversal’. Discounting models can describe and even predict preference reversals with mathematical precision and are thus versatile and powerful. Therefore, they have become influential models for weakness of will and are widely employed in the empirical sciences.

Chapter 5 explains discounting theories to an academic readership not familiar with the econometric or empirical literature. It presents different accounts, notably exponential and hyperbolic models, and explains how both of them can describe weakness of will; a toy illustration follows at the end of this overview. It also clarifies common misunderstandings. For instance, laypeople and philosophers sometimes assume that exponential models are normative or describe ideal or rational agents, whilst hyperbolic models describe actual behaviour that is irrational. This is because exponential models, this view continues, do not allow for preference reversals but hyperbolic models do. I explain why this view is overly simplistic or even incorrect. Exponential discounting models can allow for preference reversals. I prove my claims mathematically in the Appendix.

Finally, the chapter discusses limitations of discounting models thus understood. For one thing, not all cases of weakness of will involve preference reversals, and even those that do cannot always be accounted for by discounting theory. For instance, discounting models cannot describe weak-willed behaviour in the so-called marshmallow experiments. I conclude that although there is substantial overlap between discounting models and accounts of weakness of will, the two conceptions do not have the same scope.

In Part III (‘Science Meets Philosophy’), we combine the philosophical and the scientific perspectives again to draw conclusions about how to best describe, criticize, and deal with weak-willed delay discounting.
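To make the contrast between the two model families concrete, here is a minimal sketch in Python (my illustration, not the book’s; the reward amounts, the delay gap, and the discount rate k are made up). It computes the discounted value of a smaller-sooner reward (the dessert) and a larger-later reward (the health benefit) under an exponential model, V = A * exp(-kD), and a hyperbolic model, V = A / (1 + kD), and checks whether the preferred option flips as the decision point moves closer. With a single rate k for both rewards, only the hyperbolic model reverses in this sketch; as the chapter argues, exponential models can allow reversals too under other assumptions (e.g. different rates for different rewards), which the Appendix treats formally.

    import math

    def exponential(amount, delay, k):
        # Exponential discounting: V = A * exp(-k * D)
        return amount * math.exp(-k * delay)

    def hyperbolic(amount, delay, k):
        # Hyperbolic discounting: V = A / (1 + k * D)
        return amount / (1 + k * delay)

    # Illustrative rewards: the larger one arrives 10 time units
    # after the smaller one; all numbers are arbitrary.
    small, large, gap, k = 5.0, 10.0, 10.0, 0.25

    for model in (exponential, hyperbolic):
        print(model.__name__)
        for delay_to_small in (20.0, 0.0):  # deciding far ahead vs. at dinnertime
            v_small = model(small, delay_to_small, k)
            v_large = model(large, delay_to_small + gap, k)
            winner = "larger-later" if v_large > v_small else "smaller-sooner"
            print(f"  delay {delay_to_small:4.1f}: V(small)={v_small:5.2f}, "
                  f"V(large)={v_large:5.2f} -> prefers {winner}")

Under these made-up numbers, the hyperbolic agent prefers the larger-later reward when both rewards are distant (1.18 vs. 0.83) but switches to the smaller-sooner one at dinnertime (2.86 vs. 5.00), a diachronic preference reversal, whereas the single-rate exponential agent ranks the options the same way at both decision points.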


Chapter 6 (‘Describing Weakness of Will’) suggests a new way of describing weak-willed delay discounting. Scientists and economists have long acknowledged that the standard discounting models presented in this book so far face serious problems. Accordingly, researchers have suggested more sophisticated accounts. But these have not yet gained much attention from philosophers who investigate weakness of the will. This chapter draws on some of the more recent scientific suggestions. It explains their rationale in order to develop a novel philosophical account building on these suggestions. On this view, weak-willed behaviour is, roughly, due to how the agent responds to the uncertainty about whether or when an anticipated good materializes. This suggestion is intuitively appealing: the temporal delay of a future benefit, such as better health thanks to dieting, always involves a risk of not obtaining this benefit. For instance, the dieting effort could be undermined by an unknown disease, or the dieter might simply die.

In a similar vein, many authors have pointed out similarities between probability and delay discounting. Probability discounting is the reduction of value due to probability. For instance, we tend to value the option of receiving a monetary reward less when we receive it only with some probability rather than with certainty. The lower such a probability is, the longer we have to wait for the reward: we have to throw a fair die more often when six is the winning number than when any even number wins (a worked version of this example follows below). Empirically, a connection between probability and delay discounting has been established. For example, risk-seeking individuals are more patient, and vice versa. Economists have recently developed discounting models that describe this suggestion mathematically. The chapter explains them to readers not familiar with the econometric literature.

Chapter 7 (‘Criticizing Weakness of Will’) examines the grounds on which weak-willed delay discounting may be criticized. Weakness of will has always been regarded as a defect. Aristotle classified it alongside vice and bestiality, and Hare called it a moral weakness. For contemporary philosophers, it is a prime example of practical irrationality. Yet what the irrationality in question consists in has remained a topic of debate. For example, some (e.g. Davidson) have argued that the weak-willed person irrationally forms a belief against her best evidence; others have regarded it as a failure of willpower (cf. Holton). This chapter provides a new answer in light of the account developed in the previous chapter. From this perspective, weak-willed delay discounting is to be understood as fallaciously yielding to a certain kind of cognitive bias. Our bias here is the psychological mechanism that leads us to regard a distant benefit as smaller than an immediate one. This proposal echoes the Socratic view on which the weak-willed agent fails to correctly measure distant future goods against near ones. I draw on an analogy to perceptual biases like the Ponzo illusion. Just as we may allow uneasiness to deter us from stepping on a solid glass bridge over an abyss, the weak-willed agent lets misrepresentation of prudential or moral goods lead them to act against their own standard.
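As promised, the die example can be made precise with a standard fact about waiting times (my addition, not an equation from the book): if each throw succeeds with probability p, the number of throws T until the first success is geometrically distributed, and its expectation is 1/p. In LaTeX notation:

    E[T] = \sum_{n=1}^{\infty} n (1-p)^{n-1} p = \frac{1}{p},
    \qquad E[T_{\text{six}}] = \frac{1}{1/6} = 6,
    \qquad E[T_{\text{even}}] = \frac{1}{1/2} = 2.

So the less probable win (a six) takes six throws on average while the more probable one (any even number) takes only two; this is the sense in which a lower probability means a longer expected wait, and it is what links probability discounting to delay discounting.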


How is weak-willed delay discounting irrational, then? I answer as follows. The cognitive bias itself may not be irrational, but our evaluations or actions based on this bias may be. This is because they make us incoherent or lead us to respond inadequately to our reasons. Just as we may judge that the two lines in a Ponzo illusion are of different lengths although measuring both with a ruler yields identical results, we may continue to discount a future good too much although we reasonably believe that it is larger than an immediate pleasure.

Chapter 8 (‘Practical Takeaways’) explores strategies for individuals and policymakers to address weak-willed action and flawed discounting. Treating the phenomena like cognitive biases may mitigate their stigma of failure and transgression. Generally, we blame people much less for falling prey to illusions or biases than for, say, obesity or chronic procrastination. Changing perspective in this way does not provide excuses or even justify weak-willed behaviour. Instead, it helps to focus on and to address the underlying problem. There are, roughly, three strategies we employ to address biased behaviour, and they hold promise for tackling weakness of will as well; a sketch of the first two follows below. The first is to change the incentives in common decision problems. For example, increasing the costs of alcohol, sugar, or tobacco with a tax, and thus making the immediate reward less rewarding, tends to lower consumption and thus to promote choice of later, larger rewards. The second strategy is to change the delay of the rewards or the uncertainty or risk associated with it. For example, nudges may shift the point in time at which we make decisions about delayed rewards, so that we precommit to a healthier option early on rather than when we are under the immediate temptation of an unhealthier alternative. Third, by decreasing uncertainties and risks on the societal level, institutions can decisively change how sensitive to risks and uncertainties individuals or entire nations are. For instance, providing social safety measures increases economic security and in turn allows individuals to commit to long-term investments like education or retirement savings.

The closing Chapter 9 of the monograph takes stock of our findings and identifies open questions that may provide avenues for future research.
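Within the hyperbolic toy model sketched after the Chapter 5 overview, the first two strategies can be read directly off the value function (again my illustration with made-up numbers, not the book’s): a tax shrinks the immediate reward’s effective amount, and precommitment moves the decision to a point where both rewards are still distant.

    def hyperbolic(amount, delay, k=0.25):
        # Hyperbolic discounting: V = A / (1 + k * D)
        return amount / (1 + k * delay)

    small, large, gap = 5.0, 10.0, 10.0

    # At the moment of temptation, the smaller-sooner reward wins:
    assert hyperbolic(small, 0) > hyperbolic(large, gap)            # 5.00 > 2.86

    # Strategy 1 (change the incentives): a tax that halves the
    # immediate reward's effective amount flips the choice.
    assert hyperbolic(small * 0.5, 0) < hyperbolic(large, gap)      # 2.50 < 2.86

    # Strategy 2 (precommit): deciding 20 units in advance, both
    # rewards are delayed and the hyperbolic curves have crossed.
    lead = 20.0
    assert hyperbolic(small, lead) < hyperbolic(large, lead + gap)  # 0.83 < 1.18

The third strategy, reducing background uncertainty, would correspond in this toy model to lowering k itself, i.e. making valuation less sensitive to delay; how individual sensitivity can change is the topic of Section 8.3.4.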


PART I: PHILOSOPHY


2 Weakness of the Will

This chapter is an introduction to the philosophy of weakness of the will. The first section explains core terms like ‘agent’, ‘action’, ‘intention’, and ‘judgement’. The second section details how weakness of the will is a kind of failure by the person’s own standards.

2.1 Conceptual Background

‘Weakness of the will’ is both a technical term in philosophical research and a word of ordinary language. This is true for many philosophical terms. Indeed, philosophers aspire to provide definitions and theories that capture the ordinary meaning and usage of words. Still, without knowledge of the technical terms it is easy to misunderstand and misinterpret philosophical texts. Therefore, we take a closer look at relevant terms in this section, after some brief methodological remarks about how philosophical research proceeds in determining their meaning.

Philosophers have used a variety of methods to determine the meaning of words.1 For example, so-called ‘conceptual analysis’ seeks to identify necessary and sufficient conditions for when a concept applies (Block and Stalnaker 1999; Margolis and Laurence 2019). In other words, these conditions identify all cases and only those cases that fall under the concept in question. Conceptual analysis is sometimes dubbed an ‘armchair’ method after the location where it may be performed. That is, it is an a priori activity that refines definitions of concepts, e.g. in reaction to counterexamples or thought experiments. Conceptual analysis has been regarded as a primary if not the essential research method in philosophy (Jackson 1998; Williamson 2007). From antiquity to the present day, philosophers have employed it to analyse the concept of weakness of the will as well. For example, in this vein Davidson ([1970] 1980c) has specified what is required in order for anyone to be weak-willed and what applies to everyone who is weak-willed.2

However, it has turned out that it is at best not yet and at worst not at all possible to specify necessary and sufficient conditions for many everyday, scientific, and philosophical concepts such as ‘person’, ‘healthy’, or ‘knowledge’ (Churchland 2007; Gettier 1963; Lakoff 1987). Therefore, some philosophers have invoked other methods. For example, authors have variously suggested that conceptual analysis should be complemented with scientific knowledge (‘Canberra plan’, Chalmers 2012; Chalmers and Jackson 2001; Jackson 1998; Papineau 2021, sect. 2) or experimental investigation into lay usage (‘experimental philosophy’, Knobe and Nichols 2013). Some have proposed to revise concepts in light of normative considerations (‘conceptual engineering’, Cappelen 2018; Haslanger 2000, 2012; Plunkett and Cappelen 2020). One can take such approaches to ‘weakness of will’ as well. For instance, Mele (1987, 2012)3 has invoked empirical findings to provide an account of weakness of the will, and several researchers have used experimental methods (Beebe 2013; May and Holton 2012; Mele 2010; Sousa and Mauro 2015).

In this book, we are particularly interested in whether weakness of will can be characterized by delay discounting theories. As it will turn out,⁴ this approach fails to provide us with necessary and sufficient conditions for weakness of the will, but it can account for many core cases of it, and even suggests a new perspective on how to understand them.⁵ This approach towards weakness of will thus presupposes to some extent what has been called a naturalist approach to concepts, i.e., a philosophical methodology that allows for connections to the empirical (‘natural’) sciences (Rysiew 2021; Wrenn 2022).

Nevertheless, here we begin with an ordinary understanding of the terms, avoiding methodological commitments as far as possible. In everyday language, we refer to weakness of (the) will,⁶ weak-willed people, weak-willed actions, weak-willed decisions or choices, and weak-willed characters. Let us initially focus on weak-willed people and characters. These two hang together: when we speak of weak-willed characters we believe that the people we are talking about are weak-willed. Philosophers often use the label ‘agents’ instead of ‘people’. A philosopher’s agents are neither representatives who act on behalf of others, nor business managers, nor spies, nor chemical substances. ‘Agent’ simply designates someone—or something—that acts. Typically, agents are (human) persons. But businesses, primates, or robots could be agents as well.

When we say of someone that they are weak-willed, we can mean at least two things (cf. Spitzley 2009, p. 76):⁷ on the one hand, we may imply that weakness of will is a property like eye colour or handedness. That is, it is a property that is robustly the same over time. A person who is weak-willed remains weak-willed, just as someone with brown eyes keeps this eye colour. Aristotle’s view is close to this category, as he claims that weakness of will is a state of character.⁸

1 Cf. Chapter 3 for examples.
2 Cf. Section 3.3.

3 Cf. Section 3.4.
4 Cf. Section 5.4.
5 Cf. Section 6.3.
6 We shall not distinguish between these two.
7 These are two extremes along a continuum; there may be, for instance, properties that change slowly over time.
8 Cf. Section 3.1.2.


On the other hand, though, we may imply that an agent’s weakness of will is a property like mood, skill, or health. That is, it is a property that is relative, say, to a certain point in time or context. For instance, an agent might be happy in one moment but anxious the next, a skilled amateur runner but a terrible professional one, or have excellent mental but poor dental health. Similarly, someone may be weak-willed in one respect, situation, or point in time but not in another. For instance, a dieter may be terrible at sticking with their eating rules but a diligent and hard-working student who never procrastinates. Modern philosophical accounts of weakness of will usually take this second perspective. Often, they focus on weakness of will relative to time. In this vein, a philosopher may aim to provide an account by completing a claim like the following one:

An agent is weak-willed at t if and only if . . .

If t is understood as a specific point in time, then this definition specifies synchronic cases of weakness of the will. Literally, ‘synchronic’ means ‘together in time’, i.e., simultaneous or instantaneous (‘syn’, ‘συν-’: ‘together’; ‘chronos’, ‘χρόνος’: ‘time’). In philosophy, it concerns instances or points in time. Thus, in a synchronic case of weakness of will, the agent is weak-willed at one particular instance in time. We can take a snapshot of them, as it were, and in that snapshot they are weak-willed. The next or the previous moment, they may not be.

In contrast, diachronic weakness of will is something that the agent displays over a period of time. Literally, ‘diachronic’ means ‘through’ or ‘over time’ (‘dia’, ‘δια-’: ‘through, across’). In philosophy, it concerns periods of time. Thus, merely by looking at an agent at one instance in time t1 we cannot tell whether they are diachronically weak-willed or not. We also have to consider at least one other point in time t2. Then the agent may be weak-willed over the period of time t1 to t2. An account of diachronic weakness of will may thus spell out the claim above as follows:

An agent is weak-willed over a period of time t (between t1 and t2) if and only if . . .

The dieter is an example of diachronic weakness of will because they decide to skip dessert earlier (at t1) but then indulge at dinnertime (at t2); they are weak-willed over a period of time. Whether they are synchronically weak-willed depends on how we spell out the details of the case. We may assume that, at dinnertime (t2), the dieter still endorses their earlier decision so that, in some sense, they judge or intend to not have dessert. At the same time, though, they indulge. Thus, they seem to be weak-willed at one particular instance in time (at t2) and in a snapshot taken at that moment. They are, in short, synchronically weak-willed. In contrast, though, assume that by dinnertime the dieter has completely given up their earlier decision to skip dessert. They may now think that the decision was mistaken. In this case, there is no individual temporal instance when the dieter seems weak-willed. Then they are not synchronically weak-willed.

The distinction between synchronic and diachronic cases of weakness of the will is not a strict one. For instance, imagine a view on which an agent is weak-willed at t if and only if they perform a weak-willed action at t. As it stands, this is an account of synchronic weakness of will. But performing actions takes time and does not happen instantaneously. Therefore, we might also be tempted to say that the account is one of diachronic weakness of will.

Either way, as it stands, the schema above does not allow for an agent to be weak-willed at a point in time or over a period of time t in one respect but not in another. However, we can imagine such cases. For example, imagine that the dieter is studying for an exam at exactly 8 pm, resisting the temptation to procrastinate and turning down friends who invite them out to party. At the same time, the dieter drinks large amounts of sugary soda that they know they should not have. In one respect, we may thus say, this person is weak-willed; they violate their diet. At the same time (at 8 pm), they are also not weak-willed because they are not swayed by temptations to stop studying. To account for cases like these, we might refine the schema further and define weakness of will as relative to some respect r:

An agent is weak-willed at a point in (or over a period of) time t with respect to r if and only if . . .

From this perspective, the dieter is synchronically weak-willed at 8 pm with respect to their diet. But they are not weak-willed with respect to their study plans. Therefore, it seems, the agent is in some sense strong-willed: they do what they know they ought to do, against all odds. Overall, then, the dieter is weak-willed and also strong-willed at 8 pm. This verdict, however, might strike us as contradictory and thus as absurd. How could anyone have a property and also not have it, at the same time? But, again, this is a property the agent has only in some very restricted sense: they are weak-willed with respect to their diet, and not weak-willed with respect to their studies; just as someone could have excellent mental but poor dental health, or be a skilled or a terrible runner, depending on context.

A better way to address the issue that also avoids the flavour of self-contradiction might be the following. Instead of considering whether an agent is weak-willed (simpliciter, at a certain time, or in a certain respect), we might wish to consider whether they are weak-willed in acting a certain way, or whether their action is weak-willed. For example, the dieter is not weak-willed in studying and their studying is not weak-willed. Yet they are weak-willed in drinking soda and their soda drinking is weak-willed. This approach has indeed been popular with philosophers. It takes the form:

An agent is weak-willed in 𝜙ing if and only if . . .

The Greek letter ‘𝜙’ (‘phi’) stands for an action. For instance, ‘𝜙ing’ could designate drinking or studying. What actions are is a difficult and intensely discussed question in philosophy (Wilson, Shpall, and Piñeros Glasscock 2016). Here, we understand ‘action’ in a very broad way: it designates observable behaviour like drinking or studying and non-observable actions like thinking or forming an intention. But actions also include behaviour that does not require activity from the agent. For instance, waiting motionless or allowing someone to kiss you are both actions. Likewise, some omissions may be actions: you seem to act when you refrain from helping a needy victim calling you in distress or when you deliberately let a deadline pass (Bernstein 2015; Clarke 2014). We normally perform zillions of actions at the same time. For instance, we can simultaneously write a text, listen to music, resist going to a social event, etc.

However, not all things we do are actions, and not all bodily movements are actions. For example, twitches, heartbeats, or being kissed are typically not actions. Philosophers usually say that cases like these are not actions because they are not intentional. You can be kissed against your will, and you typically do not twitch intentionally. More specifically, an action is something done intentionally under some description (Anscombe 1957; Davidson 1980a). For instance, someone might intentionally pump water. They might thereby poison the well but do so unintentionally. Their action is thus intentional under the description ‘pumping water’ but not under the description ‘poisoning the well’.

An intentional action does not require an intention to act so (Bratman [1987] 1999b, p. 112). Intentions are mental states, like beliefs or desires.⁹ For instance, someone might have the intention to cycle home and therefore intentionally cycle home. They thereby intentionally pedal and steer, yet they need not have intentions to pedal or steer. Pedalling and steering can happen completely absent-mindedly and without intention. Still, they are intentional actions and thus unlike a twitch, which is merely an unintentional bodily movement but not an action. Intentions and intentional actions are usually taken to have some object or content: we intend to do something, we act with an intention to achieve something else, we intentionally 𝜙, etc. Philosophers call the first of these categories ‘intentions for the future’, such as when you intend to have dinner tonight (Anscombe 1957). The second are intentions with which we act, such as when we read a sentence with the intention to learn what its author said. The third category are intentional actions; we are already familiar with them.

9 Intentions are widely researched in philosophy. For instance, one debate concerns the question of whether they are reducible to (other) mental states (a view that has been attributed to e.g. the early Davidson 1963), or whether they are mental states in their own right (e.g. Bratman [1987] 1999b). We need not enter this discussion. For our purpose, it is sufficient to have a basic understanding of what an intention is.


Weak-willed actions are intentional actions. The dieter intentionally eats dessert; they do not undergo cake-eating twitches. Whether the agent has the intention to eat dessert, though, is a different question. Perhaps they have an intention quite to the contrary: they intend to stick to their diet and to forego dessert—but fail to follow through with their intention in their behaviour. Precisely this failure to persist in an intention is weakness of will on Holton’s (1999, 2009) account. For example, the dieter is weak-willed when they over-readily give up their intention to decline dessert in the face of countervailing temptation.10

On another account, weak-willed action is action against one’s better judgement. For instance, Davidson ([1970] 1980c) developed an influential characterization along these lines.11 Accounts like this require a clear conception of judgements because we need to understand how it is psychologically possible to judge that you should not do what you are doing. Judgements are the object of much research in philosophical theory. Again, we shall adopt a broad and rough understanding. On this view, a judgement is a mental state or attitude. This mental state is about, and the attitude is held towards, a proposition, which is the content of the judgement in question. Typically, the proposition is affirmed or regarded as true (cf. Schwitzgebel 2011). For instance, when the dieter judges that they ought not have dessert, then they take the claim “I ought not have dessert” to be true. Judgements are therefore a kind of what philosophers call ‘propositional attitudes’. The proposition that is the content of the judgement is often abbreviated as ‘p’. For the dieter, ‘p’ is the proposition “I ought not have dessert”, and the dieter judges that p.

Judgements are highly similar to beliefs but differ from them in some respects. For one thing, agents do not always consciously endorse beliefs, although they usually do so for judgements. For example, we usually believe that we have hands. But we hardly ever judge that we have hands. We take them for granted. So believing that p does not entail judging that p. However, the converse may be true: perhaps judging that p entails, in some sense, believing that p. Judgements are then a proper subset of beliefs.

As I said, weak-willed action is often understood as action against one’s better judgement. But what is a better judgement? Typically, it is a judgement about what the agent (subjectively) ought to do, what they have most reason to do, or what is best or better to do. In other words, the proposition p that is the content of the agent’s judgement takes one of the following forms:

• I ought to 𝜙;
• I have most reason to 𝜙;
• It would be best to 𝜙;
• It would be better to 𝜙.

10 Cf. Section 3.5 for details.
11 Cf. Section 3.3.


All of these claims are normative or evaluative claims. That is, they concern moral, rational, or prudential requirements rather than, say, expectations or suppositions (as in “the train ought to have arrived”). Philosophers predominantly assume that, when an agent makes a judgement of this kind, they genuinely acknowledge that it has authority over their actions. However, there are dissenting voices.12 Here, we follow the majority opinion and assume that agents’ better judgements are genuinely normative. That is, when making such judgements, agents do not merely use the words ‘ought’ or ‘best’ in an inverted-comma sense. To see why this is so, imagine the dieter tells a friend: “You know, I ought not eat dessert, at least if my doctor is right. But my body weight has been in the healthy range for months now and my blood sugar levels are perfect. I haven’t seen the doctor for ages and if she knew about my achievements, she’d certainly allow me to indulge every now and then.” Presumably, in this case, “I ought not eat dessert” is not a genuine normative judgement. Instead, it is a report of the doctor’s order, which the dieter regards as outdated. When the agent has dessert, they are not weak-willed because they do not act against their better judgement. They merely act against doctor’s orders that seem to no longer apply.

Like the dieter in this case, agents frequently change their judgements over time. This usually happens because circumstances change. I judge that it is 2.26 pm, and soon I shall judge that it is 2.27 pm. Sometimes, agents change their minds because their tastes, preferences, wishes, or desires change. For instance, a little child might judge that asparagus is disgusting, but the teenager they grow into might judge that asparagus is delicious. An interesting question relevant to us is whether changes of judgements can be weak-willed. Let us focus on cases where an agent makes a normative or evaluative judgement of the sort mentioned above, and then revises this judgement without good reason. Call this specific change a ‘judgement-shift’ (Holton 1999, pp. 80–1). Changing one’s better judgement because circumstances or preferences change is thus not a judgement-shift.

It seems possible that judgement-shifts occur frequently. For instance, Schelling (1980) reports that he wanted to become an arctic explorer as a boy and, when he went to bed, judged that he ought to sleep without a blanket in order to get used to the cold. But when he awoke freezing in the middle of the night, he judged that he ought to sleep with the blanket. As this happened over and over again, young Schelling changed his judgement about the blanket rather frequently. But did he do so without good reason? It seems that this would depend on whether Schelling correctly anticipated how cold he would get at night. Perhaps, if he correctly judged that he would freeze but that it was still better to not take the blanket, then it seems that he might later on lack a reason to change his mind. But if he underestimated the cold and only learned of its severity during the night, he might well have good reason to shift his judgement. Presumably, the longer Schelling continued to go to bed without a blanket, the less we might be inclined to say that he still underestimated the cold—after all, having felt it so many nights in a row, he eventually must have known what to expect. From some point onwards, then, Schelling was changing his mind without good reason, and undergoing judgement-shifts.

But are such judgement-shifts weak-willed? If so, then they are at best diachronic cases of weakness of will. But even the question of whether they are diachronic cases of weakness of will is a difficult one. On the one hand, we might be tempted to say that judgement-shifts are not weak-willed (Holton 1999, ch. 5). In many cases, it seems perfectly legitimate to change one’s mind without good reason. For example, imagine you and your friends are interviewed by a journalist who asks each of you in turn to name a female philosopher. Initially you settle on Elizabeth Anscombe but when your turn comes you say ‘Ruth Barcan Marcus’.13 We can shift our judgements in this way just like that, without good reason, and yet it is perfectly fine. But even in cases where such shifts are problematic, it is not clear that the issue is weakness of will. Perhaps agents like Schelling are merely fickle or unstable in their judgements. This might be problematic for other reasons, such as pragmatic issues. For example, being fickle can make it harder to coordinate with others.

On the other hand, though, weakness of will might just be a special kind of judgement-shift. If this is true, then not just any judgement-shift is weak-willed but only some particular judgement-shifts are. This is so according to some accounts of weakness of will. For instance, on a Socratic view, the seemingly weak-willed agent oscillates between, say, the dessert and the diet. For Socrates, weak-willed judgement-shifts differ from innocuous ones in that they happen due to a temptation that is close at hand and overpowers the agent. We shall take a closer look at this account below.14 Moreover, an approach that relies on delay discounting theory seems to be committed to a similar view, as we shall see.15 For now, we take a more general view on what weakness of will actually is.

12 For example, Hare’s; cf. Section 3.2.

13 I thank my students for this example.
14 In Section 3.1.1.
15 Cf. Chapter 5.

2.2 Weakness of the Will: A Rough Sketch

The philosophical literature on weakness of will is extensive and fragmented, a fact that manifests itself in part in the abundance of labels that are used, sometimes interchangeably and sometimes in an attempt to distinguish one phenomenon from another, albeit related, one: ‘akrasia’ (Aristotle), ‘incontinence’ (Davidson), and ‘moral weakness’ (Hare) are just a few examples. This section characterizes weakness of will as intentionally failing by one’s own standards. This rough sketch is intended to capture the core idea shared by different philosophical accounts of weakness of will. We shall also examine whether our rough sketch accounts for three characteristic features of weakness of the will and conclude with a survey of how the different accounts diverge in three respects.16

16 In Section 2.2.

Let us begin with a rough characterization: weakness of will is intentionally failing by one’s own standard or standards. Such failing involves, at the very least, setting or endorsing a standard and violating it, by failing to adhere to it or breaking it. The dieter has adopted a standard which prohibits overly fatty and sugary foods. In the particular situation when they look at the dinner menu, they acknowledge that their own standard requires them to forego dessert at dinner. When they later have dessert anyway, they fail by their own standard because they violate their dietary requirement.

Although this is a rough and incomplete sketch of weakness of will in need of further development, it does capture three core features of the phenomenon: it is puzzling, it involves a conflict, and it is a defect. Although they do hang together, let us consider them one by one.

First, it is a commonplace that weakness of the will is a puzzling phenomenon. For one thing, it has turned out difficult to find a definition for it that is not inconceivable or even inconsistent. Consider the dieter: they are weak-willed, it seems, in that they eat dessert although at the same time they think that abstaining would be better. But clearly, they eat dessert for some reason or desire: dessert is delicious. This is true not only for the dieter but for anyone: when we act, we seem to see some good in it. It is difficult to imagine someone who, say, bought a saucer of mud that they had absolutely no use or wish for—not even the trivial one to set a counterexample to what we just said (Anscombe 1957, § 37). When an agent intentionally does or desires something, they do so in light of a supposed good. This claim is sometimes called the ‘guise of the good’ doctrine and can be found in a large number of authors (e.g. Anscombe 1957; Aquinas, Summa Theologiae Ia–IIae q. 1 a. 6 co.; Aristotle, Nicomachean Ethics 1094b; Buss 1999; De Sousa 1974; Plato, Cratylus, Sophist; Raz 2010; Velleman 1992). Although the doctrine does not rule out weakness of will, it raises the difficult question of how we can act in a weak-willed way under the guise of the good. Weak-willed action seems to be an action against an agent’s imagined good. The dieter eats dessert although they believe that it is bad for their health. How is this consistent with them acting for some good?

The issue becomes more pressing once we spell it out in greater detail, which can be done in a variety of ways. Here, we consider two.
18 weakness of will and delay discounting on the supposed good that the weak-willed agent acts against. To the dieter, for instance, we can ascribe the judgement that it would be good to decline dessert. And, plausibly, if we genuinely judge that something is good, then we are motivated to behave accordingly. Philosophers call this claim ‘motivational judgement internalism’ (Rosati 2016). Our motivation to act is internal to our judgement, as it were. For example, if the dieter judges that it is good to skip dessert, then they are at least inclined to do so. However, if internalism thus understood is true, then it is puzzling why an agent would seemingly be inclined to do something other than what they judge to be good. If the dieter judges that it is good to refrain from eating dessert, why do they not simply do so? As Davidson ([1970] 1980c, p. 23) has put it, it seems that, first, if the agent wants to decline dessert more than they want to eat it and believe themselves free to do so, then they will decline it. But it seems that if they eat dessert anyway, then they want dessert more than they want to decline it. This is paradoxical.1⁷ One might try to resolve the paradox by denying internalism and instead adopting an externalist claim: when an agent judges that something is good, they are not motivated to behave accordingly. But this raises a new challenge (Stroud 2003; Stroud and Svirsky 2021): if externalism is true, then why should weakness of the will be problematic? Weakness of will seems to be a defect.1⁸ But from an externalist perspective, it is entirely innocuous for the dieter to eat dessert. According to an externalist account, it is not the case that a judgement about what to do is, ceteris paribus, linked to motivation (Stroud and Tappolet 2003, p. 7). That the dieter is failing is thus not accounted for by an externalist perspective. A second way in which the puzzling nature of weakness of will can be specified is by focusing on the supposed good that the agent pursues in their weak-willed action. For instance, the dieter eats dessert because it is so delicious. Presumably, then, the agent acts in pursuit of some desire—the craving for sweets, say. Crucially, though, this desire seems to be the strongest one they have at that moment because it is the desire that they act on. In this sense, it is stronger than any rival desire to, say, refrain from eating. Otherwise, the agent would not act on that desire but on a rival one. Acting on one’s strongest desire and not pursuing weaker ones is common. We often have competing desires and resolve these conflicts by following the strongest desire. For example, if we would like to eat pizza but also desire pasta, we commonly just choose what we prefer or desire most. But precisely in this respect, cases of weak-willed action are different. For it seems that, in a case of weakness of will, we fail to do what we really want to do and what our actual desire is. For example, the dieter fails to pursue their heart’s desire of becoming healthier. Unlike in a case of choosing between pizza and pasta, 1⁷ We turn to Davidson’s solution of the paradox later in Section 3.3; see also Section 3.1. 1⁸ Cf. Chapter 7 for details.


Unlike in a case of choosing between pizza and pasta, it seems that the dieter pursues a desire that is not the strongest one. The strongest desire in the dieter’s case seems to be the desire to be healthier. Otherwise, the failure to decline dessert would not be a failure at all. So it appears that when we act in a weak-willed way, we are precisely not acting on our strongest desire. Taken together, it thus seems that the weak-willed agent acts on a desire that is both their strongest and not their strongest desire. It is their strongest desire in that they act on it but it is not their strongest desire in that there is a rival desire that is stronger, e.g. the desire to eat healthily. This has been called the ‘paradox of (synchronic) self-control’ (Kennett and Smith 1996; Mele 1987; Sripada 2014). Why ‘synchronic’? It is a synchronic paradox because it arises at one given point in time t, not over a period of time. At one and the same point in time, the agent seems to act on and against their strongest desire, which must be impossible. In describing the agent in this way, we contradict ourselves.

The two paradoxes just introduced differ in that the latter concerns two jointly inconsistent claims about the strongest desire of the weak-willed agent, and the former concerns two jointly inconsistent claims about whether or not the weak-willed agent does what they judge to be better. The paradoxes are alike in that they both specify what might be puzzling about weakness of the will.

Understanding weakness of the will as a failure by one’s own standards makes clear why it is a puzzling phenomenon. Endorsing a standard and simultaneously failing to adhere to it is difficult to reconcile, if not straightforwardly contradictory. The weak-willed agent exhibits features that are prima facie or de facto incompatible. They seem to both act on and fail to act on their better judgement or strongest desire, and in doing so they endorse as well as violate a standard. How this is possible, or even conceivable, is utterly puzzling.

Before we turn to a second characteristic feature of weakness of the will, we shall briefly digress on self-control, which is sometimes contrasted with weakness of will (Aristotle, Nicomachean Ethics 1145a15–20;1⁹ Holton 2009, p. 128). ‘Self-control’ is widely used in a range of disciplines and there is no universally accepted definition of it. However, very roughly, it can be understood as a disposition, ability, or capacity to direct, determine, or regulate oneself. Like weakness of will, self-control may be more or less stable over time: at one extreme, it may rarely if ever change, like one’s eye colour. At the other extreme, it may vary with context or circumstances. Self-control over one’s own actions has been regarded as necessary for or even identical with agency (Buehler 2022; Mele 1992; Shepherd 2021). Typically, this concerns actions understood as observable to a third party, like those manifested by bodily movements.

1⁹ Cf. Section 3.1.2.


But self-control may also concern mental or bodily states and processes like thought, emotion, heartbeat, or attention (James 1890; Peacocke 2021; Watzl 2022). For example, staying focused when doing mental arithmetic, suppressing one’s anger, or ignoring distractions may require or constitute instances of self-control. On a broad understanding like this, self-control need not contrast with weakness of will. On the contrary, a weak-willed action can be self-controlled; for example, a dieter yielding to their cravings may prepare and dine on an elaborate feast.

However, there is a narrower, cross-disciplinary understanding of self-control, which can contrast with weakness of will. On this view, conflict is essential or even necessary for self-control. For example, Sklar and Fujita (2020, p. 66) claim that self-control “arises when people must regulate two incompatible motivations applicable in a given situation” (cf. Bandura 1991; Carver and Scheier 1990). In a similar vein, Sripada (2021, p. 800) claims that “self-control consists of skilled sequences of cognitive control directed against extended streams of response pulses” like cravings. Thus understood, self-control may counter or prevent weakness of will. For example, it may enable the agent to resist their craving for food and to act on their motivation to exercise rather than on their motivation to eat.

This narrower understanding of self-control can be narrowed even further. For instance, Sripada (2021) uses the label ‘self-control’ only for synchronic cases while Duckworth, Gendler, and Gross (2016) refer with it to diachronic ones as well.2⁰ An agent not buying cookies in order to avoid a struggle to resist eating them at home employs a diachronic strategy: they strategically set up their environment in advance to facilitate healthy eating later on. For authors like Sripada, this agent is not employing self-control at all; for authors like Duckworth et al., they do.

Either way, an account of self-control as an ability, disposition, or capacity to direct, determine, or regulate oneself when facing a conflict involves criteria of conflict. This is so regardless of whether self-control is required only to fight the conflict or also to avoid it. After all, unless it is compulsory, any action conflicts with some alternative: if we do one thing, then we do not do another. But self-control concerns conflict in a stronger sense. For instance, Sklar and Fujita’s (2020) view requires criteria for when motivations are incompatible, and Sripada’s (2021) requires conditions for when cognitive control is directed against response pulses. This feature is characteristic of weakness of the will as well, and we shall now turn to it in greater detail.

A second characteristic feature of weakness of the will is that it involves a conflict. The weak-willed agent herself tends to experience this conflict as an inner struggle with herself. If weakness of the will is understood as a failure by one’s own standards, the conflict consists in the clash of the element that sets the standards and the element that violates them.

2⁰ Cf. Section 2.1.


Because both elements are present for or even within the same person, they at the same time bring about and suffer from the conflict.

Some philosophers have spelled the conflict out in greater detail. For instance, weak-willed agents have been described as divided selves or minds with opposing parts. On a very minimalist version of this view, the opposing parts are two incoherent mental states (Egan 2008, p. 61). That is, the agent does not have a unified body of mental states but a fragmented one. Ancient philosophy usually regarded the human soul or mind as divided into parts that could come into conflict with one another. For instance, on an Aristotelian view, the human mind contains an appetitive part that opposes a deliberative part (De Anima iii 9–11). Similar ideas persist in modern philosophy. In his earlier work on the topic, Davidson ([1970] 1980c, pp. 33, 36–7) favourably discusses a tripartite division of the mind into reason, passion, and the will. In this picture, the will needs to side with either reason or passion, and is weak when siding with the latter. This provides a quite literal interpretation of ‘weakness of the will’. According to his later view, a weak-willed mind is divided into parts, each of which shows greater consistency or rationality than the whole (Davidson [1982] 2004, p. 181). One mental event in one part can then cause another mental event in that same part while by-passing a stronger countervailing reason from a separate part.21 Lastly, on contemporary accounts building on dual-process models from the empirical literature (Stanovich and West 2000), the mind is divided into two systems (or classes of systems). One of them (‘system 1’) is fast, automatic, and effortless, the other (‘system 2’) slow, controlled, and effortful. Accordingly, weakness of will can be understood as a failure of system 2 to appropriately regulate system 1 (Haas 2018; Levy 2011; Sripada 2014). For instance, in the dieter, system 2 fails to inhibit a desire generated by system 1 to eat dessert.

Another way of describing the conflict is to understand the agent as a series of time slices that oppose each other.22 The intrapersonal phenomenon of weakness of the will is then analogous to an interpersonal conflict, such as a social choice problem (Ainslie 2001). On this view, the element that sets a standard and a second element that violates this standard do not clash directly. Rather, the agent displays inconsistency over time, perhaps without being aware of it.

Yet another way in which the conflict can be further specified is that it can be likened to paradoxes like the liar, preface, lottery, or sorites paradoxes.

21 Section 3.3 discusses Davidson’s work on weakness of will in detail.
22 This approach is not committed to the view that weakness of will requires recurrent choices although it does require at least two time slices. For example, an agent might have to choose one single time in their life between having Eton mess for dessert or not. Still, we could describe this as a situation in which two time slices have opposing views on that same choice. For example, one time slice initially orders dinner without dessert but a later time slice reverses that decision. Ultimately, whether cases like these involve one or two choices depends on our definition of ‘choice’, which I shall not discuss here. Furthermore, note that I am not advancing any of the views I survey in this introductory part of the monograph. I thank an anonymous reviewer for pressing me to clarify these points.


These paradoxes are either self-contradictory claims or a set of jointly inconsistent claims. Liar paradoxes typically take the first form. They arise when we say ‘I am lying’: either we are lying, but then the claim is true and we are not lying after all; or we are not lying, and then the claim is false and, effectively, we are lying after all23 (Beall, Glanzberg, and Ripley 2020). The preface paradox involves several claims. Here, an author believes each and every sentence of their book but, being fallible, also writes in the preface that some sentences of the book are false (Makinson 1965). Similarly, the lottery paradox arises when we endorse all of the following claims for n lottery tickets, which are jointly inconsistent (cf. Kyburg 1961):

(1) This (first) ticket will not win.
(. . .) This (. . .) ticket will not win.
(n) This (nth) ticket will not win.
(n + 1) Some ticket will win.

The sorites paradox is generated by vague terms such as ‘heap’:

(1) One grain of sand does not make a heap.
(2) If one grain does not make a heap, then two grains do not make a heap.
(3) If two grains do not make a heap, then three grains do not make a heap.
(. . .) . . .

From (1) and (2), it follows that two grains do not make a heap; from (1), (2), and (3), it follows that three grains do not make a heap, etc. Yet if we continue in this way, we eventually reach a number of grains n (one million, perhaps) that certainly does make a heap. Still, the final conditional—if n − 1 grains do not make a heap, then n grains do not make a heap—commits us to concluding that even n grains do not make a heap.

In these paradoxes, there seems to be a conflict because the claims are not consistent with each other. Similarly, in a case of weakness of will, there seem to be conflicting claims although each of them, taken in isolation, appears plausible. For instance, imagine that the dieter judges that, in general, they ought to, say, refrain from eating chocolate. Yet in every instance where they have the opportunity to eat or refrain from eating chocolate, they fail to abstain.

23 We are arguably not lying because we are not aiming to deceive (Chisholm and Feehan 1977; Lackey 2013; Mahon 2016; Williams 2002; but see Carson 2006, 2010; Fallis 2009; Rutschmann and Wiegmann 2017; Stokke 2013). It is therefore better to use statements like “I am saying something untrue” to generate the paradox. I have used ‘lying’ because it illustrates the name of the paradox more clearly.


In this way, the dieter habitually makes exceptions to their dieting rule, not noticing that, this way, the exceptions become the rule. It is not trivial, though, to pin down the set of inconsistent claims that the dieter endorses. They might be something like the following:

• On this (first) occasion, it is permissible to eat chocolate.
• On this (second) occasion, it is permissible to eat chocolate.
• ...
• It is not permissible to eat chocolate.
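The structural analogy to the sorites can be made concrete with a small simulation. The following sketch uses invented, merely illustrative numbers—neither the calorie figures nor the thresholds come from the text:

    # Sorites-style structure of the dieter's judgements (illustrative numbers only).
    # Each single chocolate adds a difference the dieter deems negligible, yet the
    # differences add up to an amount they certainly care about.

    NEGLIGIBLE = 50        # calories the dieter does not care about on any one occasion
    SUBSTANTIAL = 10_000   # a cumulative surplus the dieter certainly cares about

    per_chocolate = 30     # caloric surplus of one chocolate
    total = 0

    for occasion in range(1, 1001):
        # Premiss for this occasion: one more chocolate makes no relevant difference.
        assert per_chocolate <= NEGLIGIBLE
        total += per_chocolate
        if total >= SUBSTANTIAL:
            print(f"After {occasion} occasions the surplus is {total} calories:")
            print("each step seemed permissible, but the outcome violates the rule.")
            break

Every single step satisfies the per-occasion premiss, yet the steps accumulate into an outcome that, by the dieter’s own lights, violates their rule.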

Put even more concisely: “I don’t care at all about the difference to my weight one chocolate will make” (Edgington 1997, p. 296). For, if we keep adding up chocolates, the difference to body weight will eventually be substantial and plausibly the dieter will care about it. Although weakness of will itself may not be identical to a paradox similar to the ones mentioned above, it could be described in a similar form. For instance, recall the paradoxes we discussed above when examining what is puzzling about weakness of the will. Whatever way we spell it out, then, it seems that there is a conflict in weakness of will. It arises because of a clash between at least one normative element and one or more elements that violate it.

A third characteristic feature of weakness of the will is that it is typically regarded as a defect. The very term ‘weakness of will’ suggests as much. Weakness of the will seems wrong, irrational, blameworthy, objectionable, or criticism-deserving. Given our rough characterization, we can say a little more about why weakness of will is defective. On this view, the agent violates their own standard. But violating or failing to meet a standard amounts to a defect. For instance, the dieter violates their dieting rules and, on the occasion in question, they violate the more specific prohibition against eating dessert.

Philosophers differ over what kind of defect weakness of will is and what, precisely, it consists in. For example, some authors believe that weakness of the will is immoral. Aquinas suggests that it is a sin (Summa IIa–IIae q. 156). Hare calls it ‘moral weakness’.2⁴ In contrast, Davidson argues that weakness of will is not immoral but merely irrational.2⁵ Similarly, Socrates regards it as a failure of knowledge.2⁶ Most contemporary authors would agree that weakness of will is a prime example of practical irrationality (McIntyre 2006). Because the weak-willed agent typically endorses the norm they violate, some authors have likened weakness of will to hypocrisy or double standards.

2⁴ Cf. Section 3.2.

2⁵ Cf. Section 3.3.

2⁶ Cf. Section 3.1.1.


For example, it has been argued that weakness of will is related to, explained by, or even reducible to self-deception (cf. Beier 2010; Rorty 1970, 1972; Schälicke 2004; Wehofsits 2020; Wolf [1985] 1999; and Elster [1979] 2013, pp. 173–4, for a critique). For example, Wolf ([1985] 1999, p. 240) argues that the weak-willed agent does not in fact fully accept the standard that they would like to accept, and therefore deceives themselves about it. Recall2⁷ that, because it is not possible to deceive oneself about one’s conscious judgement at one and the same time, accounts like this characterize weakness of will as a diachronic phenomenon: the agent shifts over time between the motive that they judge they have, and the one on which they act.

Although weakness of will is typically regarded as a defect, there might be exceptions. Prominent amongst them are cases of so-called inverse akrasia. In inverse akrasia, someone’s acting in a weak-willed way is in some important sense superior to the action required by the standard they violate (Arpaly and Schroeder 1999). For instance, Twain’s Huckleberry Finn believes that he ought to turn in Jim, the runaway slave, yet he cannot bring himself to do it (Bennett 1974). Huck is weak-willed (or akratic): he endorses the norm that seems to require him to report the runaway slave. But he fails to act accordingly. From our perspective, Huck’s failure is better than complying with the dubious norm of a slave-holding society. In his weak-willed behaviour, Huck is thus acting better than he would if he were not weak-willed. Considering a similar case, Aristotle remarks that foolishness combined with akrasia may appear virtuous (Nicomachean Ethics 1146a27–31). That is, erring about the norm and then violating the supposed norm may lead an agent to act in a way that effectively fulfils the true norm.

One might wonder whether inverse akrasia is truly akrasia or weakness of will, a question we shall consider below.2⁸ Assuming that it is, the question is then whether inverse akrasia is a defect or not.2⁹ The agent is surely praiseworthy or acts well in some sense. Still, there seems to remain a defect as well: Huck fails to do what he thinks he ought to do, and Aristotle’s agent is both foolish and akratic. It is unclear whether the inversely akratic agent is merely lucky or whether they are somehow guided by their true values (Arpaly and Schroeder 1999). Here, we leave this question open. We can say that inverse akrasia is a special case of weakness of the will where the agent’s defect is trumped by an additional feature that is valuable or virtuous. It thus remains true that weakness of will is commonly regarded as a defect.

To draw an interim conclusion, we have roughly characterized weakness of will as a failure by one’s own standards, and this account explains why weakness of the will is puzzling, involves a conflict, and is commonly regarded as a defect. However, this sketch is insufficient: not all phenomena it describes are actually weakness of the will.

2⁷ Cf. Section 2.1.
2⁸ In Section 2.2.
2⁹ An even more specific question is whether this defect is one of rationality, i.e., whether the inversely akratic agent is irrational. We discuss it in Section 7.2.


At this point, we need to add further details and criteria for a full account of weakness of will. Yet it is hardly possible to do so without making further assumptions that are contested in the literature. We shall, therefore, now consider three points on which philosophical accounts of weakness of will diverge.

First, and perhaps most substantially, authors diverge over the way in which the weak-willed agent endorses and violates a standard. For instance, violating and endorsing might be properties or mental processes or states of the agent, or, for diachronic approaches,3⁰ features that change over time. A related but different question concerns the ontology of these elements, that is, the question of what they are. Are they neural or psychological systems, patterns of behaviour, or something more abstract? Let us consider a few examples for the sake of comparison. For Plato’s Socrates,31 the seemingly weak-willed agent, who normally knows what they ought to do, changes their mind in the face of temptation. On this view, then, endorsing and violating the standard happens over time. On Davidson’s account,32 the weak-willed agent endorses a standard when they believe that, say, declining dessert is better than having it, all things considered, yet they violate the standard because they fail to infer that declining dessert is better all-out—instead, they conclude that indulging is better. Here, the agent both endorses and violates a norm in their judgements. On Holton’s view,33 the weak-willed agent endorses a standard when resolving to act in a certain way—say, to decline dessert—and violates it when over-readily abandoning the resolution in the face of temptation. From this perspective, the normative element is an intention, and the failure consists in abandoning the intention under certain circumstances.

Another point of divergence concerns two related questions: first, whether and how much the weak-willed agent needs to be aware of the standard or their failure to abide by it; and, second, whether the standard must be, in some sense, objective or genuinely normative. Prima facie, both issues might seem entirely epistemic, that is, issues of knowledge. Imagine your dieting friend repeatedly tells you about their personal ban on sugar but every time you meet them, they consume sugary snacks. Whether or not this friend is weak-willed might turn on what we know about the case: if it turns out that they never violate their sugar ban intentionally but mistakenly believe that the food they eat is sugar-free, then it seems that they are not weak-willed. However, if it becomes clear that they are fully aware that they break their rule although they think they could do otherwise, they might well be weak-willed.

But the problems are not merely epistemic. If we assume that your friend does not know that cola contains sugar, could they be weak-willed when they drink it? Or are they weak-willed only when they know it? And if they mistakenly believe that, say, butter contains sugar, could they be weak-willed when they eat butter?

3⁰ Section 2.1.

31 Cf. Section 3.1.1.

32 Cf. Section 3.3.

33 Cf. Section 3.5.


Relatedly, if we know for sure that your friend’s dietary rule loses its grip on them once they encounter temptation, do we call that ‘weakness of will’ or not?

Let us first consider whether the standard violated by the weak-willed agent is objective or not. This question leads us back to the discussion about inverse akrasia. For, in inverse akrasia, the weak-willed action is by some objective standard superior to the course of action required by the norm the agent violates. It seems, then, there are two standards of assessment in these cases: the one violated by the agent, and the one not violated. For instance, Huck Finn violates the law demanding that runaway slaves be turned in but he also abides by a norm that requires us to protect innocent fugitives from unjust treatment. From our modern perspective, only the latter is an objectively valid standard. As Huck abides by it, we might not regard him as weak-willed after all. But this verdict is controversial. Davidson ([1970] 1980c) claims that weakness of will does not depend on objective standards at all: all that matters is the standard the weak-willed agent takes themselves to have. That is, an agent like Huck can be weak-willed when he believes that he ought to turn in the slave and fails to do so; whether this belief is true or not does not matter.

Let us now consider whether and to what extent the weak-willed agent needs to be aware of the standard they violate, irrespective of whether it is objectively true or justified. Philosophers like Socrates3⁴ and Hare3⁵ claim that the weak-willed agent would have to fully endorse the standard and at the same time also recklessly violate it—something that is impossible, they argue. What seems to be weakness of will and what we call ‘weakness of will’ is something rather different. On the Socratic view, it is a change of mind; on Hare’s view it is either madness or a related pathological phenomenon, or an agent paying mere lip-service to a norm they do not actually endorse. In other words, when an agent behaves in a way we would call ‘weak-willed’ they either do not endorse the standard or they are unaware of it, perhaps temporarily so. On a dissenting view, though, which is also the standard contemporary one,3⁶ the weak-willed agent is aware of the standard. For instance, they judge that they ought to act other than they actually do. Typically, this also allows the agent to acknowledge their failure to abide by it. For example, on Davidson’s analysis,3⁷ the agent judges that they ought to act other than in the weak-willed way, all things considered. This view faces the challenge of explaining how it is psychologically possible to be aware of and at the same time counteract a standard, and we shall examine in greater detail the solutions that have been proposed.3⁸

A related challenge arises when philosophers who endorse this account rely on empirical evidence about seemingly weak-willed behaviour. Empirical research tends to examine observable actions.

3⁴ Cf. Section 3.1.1.
3⁵ Cf. Section 3.2.
3⁶ Cf. Sections 3.3–3.5.
3⁷ Section 3.3.
3⁸ In Chapter 3.


For instance, in classic discounting paradigms, a participant is asked to choose between a smaller but earlier and a greater but more delayed reward. When they initially choose the greater reward but then switch back to the smaller one, this is interpreted as weak-willed behaviour. However, if we assume, as the view under discussion does, that in acting against a standard the weak-willed agent must be aware of it, then it is unclear whether the behavioural evidence actually concerns weakness of will. For, the behavioural experiments do not typically ensure that the agent is aware of any standard at all.3⁹ We shall return to this caveat later.⁴⁰

The third question on which authors diverge is to what degree, if any, the weak-willed agent has to be able to abide by their own standard. This question will also resurface later⁴1 when we discuss to what extent an economic theory of agency can account for weakness of the will. On some accounts, the weak-willed agent is completely unable to act other than they do. On these accounts, weakness of will is a kind of compulsion. For example, on Hare’s account,⁴2 the weak-willed person temporarily loses control, like Medea in Greek mythology when she kills her own children in a fit of madness. A complaint about this view has been that it does not accurately account for many cases of weak-willed action. A much-quoted passage from Austin (1956, p. 24 n. 13) describes such an example:

I am very partial to ice cream, and a bombe is served divided into segments corresponding one to one with the persons at High Table: I am tempted to help myself to two segments and do so, thus succumbing to temptation and even conceivably (but why necessarily?) going against my principles. But do I lose control of myself? Do I raven, do I snatch the morsels from the dish and wolf them down, impervious to the consternation of my colleagues? Not a bit of it. We often succumb to temptation with calm and even with finesse.

Philosophers concurring with Austin thus maintain that the weak-willed agent is not compelled to act as they do (e.g. Audi 1979; Jackson 1984, p. 2; Gorman 2022).

However, even if we distinguish compulsion from weakness of will, the issue remains how the distinction should be drawn. To begin, one might think that compulsive or compelled actions are not intentional (Davidson [1973] 1980b), unlike weak-willed ones.⁴3 But this is not true. Consider a patient with obsessive-compulsive disorder (OCD) who continues to wash their hands although they know they should refrain from doing so because it damages their skin.

3⁹ Neuroscience can overcome this issue to some extent because it is able to detect signals in the brain even when no overt behaviour is observable, or may differentiate two types of behaviour that are indistinguishable by observation by revealing different neurocorrelates.
⁴⁰ In Chapter 4.
⁴1 In Chapter 4.
⁴2 Cf. Section 3.2.
⁴3 Cf. Section 2.1.


When this patient washes their hands, they do so intentionally: they direct and control their movements, they are aware and conscious of them, they knowingly execute them. Both weak-willed and compulsive actions are thus intentional.

Perhaps another distinction between the compelled and the weak-willed person might be the following. Even though both act intentionally, the OCD patient must act as they do; they cannot do otherwise. This is not true for the weak-willed agent, who could do otherwise. One might therefore think that the ability to do otherwise is a condition for weakness of will (cf. Watson 1977). When you are compelled to wash your hands, you are unable to refrain from doing so. But if you are weak-willed in washing your hands, perhaps because you are tempted by the warmth of the clean water, you could have resisted. Hence, it seems, in order to be weak-willed—but not for compulsion—you must be able to act other than in the weak-willed way.

But this is not true. Imagine that you are alone on a subway train with one other person. The person faints and collapses to the floor. Imagine that you judge that you must get up and help this person. But you are so tired and comfortable in your seat. You fail to get up. You judge that what you are doing is wrong but you do not abide by this judgement. It seems that you might very well be weak-willed. Does it make a difference whether you have also been magically glued to the seat or not? If you cannot get up but do not know this, then it seems you are not excused from what you do. After all, you do not remain seated because you are glued to your seat but for some other reason, and that reason may well be weakness of will. Hence, it seems, the ability to do otherwise is not a requirement for weakness of will. You can be weak-willed even if you lack this ability (cf. Davidson [1973] 1980b; Frankfurt 1969).

Neither is an ability to want, will, choose, or decide otherwise a requirement for weakness of the will. Consider our dieter again. They believe they ought to abstain from dessert. Now imagine that, unbeknownst to the agent, an evil scientist has implanted a chip in their brain that causes them to will, want, choose, or decide to eat dessert if the dieter does not do so on their own. If the dieter is not tempted by the dessert at all, the chip will cause them to want, will, choose, or decide in favour of it. So the dieter lacks the abilities to want, will, choose, or decide otherwise. In any case, they want, will, choose, or decide to eat dessert. But now imagine that the chip is not needed after all: of their own accord, the dieter is tempted, desires, wants, wills, decides, or chooses to have dessert, and they act accordingly. Are they weak-willed? It seems that they may well be. But if this is correct, then abilities to want, will, choose, or decide otherwise are not a requirement for weakness of the will.

So what does distinguish weakness of will from compulsion?


We cannot give a definite answer to this question here, and it remains a matter of ongoing debate in philosophical research (Buss 1997; Gorman 2022; Mele 1987, 2012; Strabbing 2016; Tenenbaum 1999). One approach suggests that weak-willed agents have some other ability or capacity that compulsive agents lack, such as a capacity to have desires that align with one’s overall normative beliefs (Smith 2003) or a capacity that is relevant for being attributionally responsible for an action (Strabbing 2016). On this view, a weak-willed agent could but does not exercise this capacity while a compulsive agent could not even exercise it. Another approach claims that weakness of will and compulsion differ in how actions are caused or motivated. For example, weak-willed agents may act in accordance with a desire, goal, value, or reason they in some sense or independently endorse while compulsive agents merely act to satisfy a compulsive desire (Frankfurt 1971; Gorman 2022). According to some hybrid views, compulsion requires nearly or completely irresistible urges or desires while weakness of will requires an ability or capacity for resistance (Kennett and Smith 1994; Watson 1977).

Whatever distinction, if any, philosophers draw between weakness of will and compulsion may have implications for the debate about a related phenomenon: addiction. Some think that addiction is a special case of weakness of will, perhaps a pathological one. For instance, Jeffrey (1983, appendix) treats smoking as an example of weak-willed behaviour. Others argue that addiction is not weakness of will but a kind of compulsion (Holton 1999; Watson 1977). How we distinguish compulsion and weakness of will may help us to classify addiction as one of the two, or as something altogether different. For instance, imagine we take the view that compulsive action differs from weak-willed action in that the former but not the latter is exclusively performed in order to satisfy a craving or ease an unpleasant urge (cf. Gorman 2022). Furthermore, some addicts may engage in addictive behaviour because it serves their sense of identity (Flanagan 2013; Pickard 2021). They may be well aware of the devastating effects of their addiction but tend to underestimate them (Levy 2006; Pickard 2021). This might in turn be due to the fact that they have relatively steep discount rates, i.e. they tend to value delayed benefits much less than non-addicts (Bickel, Athamneh, et al. 2019; Bickel, Koffarnus, et al. 2014; Kirby, Petry, and Bickel 1999; Noda et al. 2020).⁴⁴

From this perspective on compulsion, weakness of will, and addiction, it follows that at least some addictive actions may not be compulsive. These are actions performed in the pursuit of a further goal besides addressing compulsive desires or urges. For example, an addict may take heroin in order to experience pleasure and to nurture their identity as a member of the community of addicts. Because compulsive action, on the view we have taken, is exclusively performed to satisfy cravings or ease unpleasant urges, actions by addicts like these are not compulsive. Yet they may be weak-willed.

⁴⁴ Cf. Section 5.3 for the suggestion that irrational agents have steeper discount rates.
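Before summing up, it may help to give the notion of a ‘steeper discount rate’ a concrete shape. In the hyperbolic models examined in Chapter 5 and Appendix B, a reward of amount A delayed by D is valued at V = A/(1 + kD), where a larger k means steeper discounting. The following minimal sketch—all parameter values are invented and merely illustrative—shows how a steep k produces the switch that discounting paradigms record: the larger, later reward is preferred from a temporal distance, but the smaller, sooner one is preferred once it is close.

    # Hyperbolic discounting, V = A / (1 + k * D): a minimal sketch of a
    # preference reversal (all numbers invented for illustration).

    def value(amount, delay, k):
        """Present value of a reward of size `amount` received after `delay` days."""
        return amount / (1 + k * delay)

    small, large = 50, 100        # smaller-sooner vs larger-later reward
    gap = 5                       # the large reward arrives 5 days after the small one
    k = 0.5                       # a steep discount rate

    for days_to_small in (20, 10, 5, 2, 0):
        v_small = value(small, days_to_small, k)
        v_large = value(large, days_to_small + gap, k)
        choice = "larger-later" if v_large > v_small else "smaller-sooner"
        print(f"{days_to_small:2d} days out: V_small={v_small:5.1f}, "
              f"V_large={v_large:5.1f} -> prefers {choice}")

With these numbers the agent prefers the larger, later reward twenty days out but switches to the smaller, sooner one as it becomes imminent; with a much flatter rate (say k = 0.01), no reversal occurs at these amounts and delays.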


In sum, we have characterized weakness of will as a failure to abide by one’s own standard. It is defective, puzzling, and involves a conflict. This rough sketch has been variously spelled out in greater detail. In what follows, we shall take a closer look at a range of different accounts that have been proposed in Western philosophy since ancient times.


3 Philosophical Accounts

3.1 Ancient Philosophers

In this section, we plunge headfirst into one of the oldest ongoing debates in Western philosophy. It features the three well-known ancient philosophers Socrates (c.469–c.399 bc), Plato (c.429–c.347 bc), and Aristotle (c.384–c.322 bc). All Athenian citizens, they formed three generations of what one would nowadays call an ‘academic family’: Socrates was Plato’s teacher, who in turn was Aristotle’s. Like all families, they sometimes disagreed among themselves:

[Socrates:] nobody, either knowing or thinking that something else is better than whatever he is doing, and that it is possible, thereupon does it, if he can do what is better [ . . . ].
Plato, Protagoras 358B7–C1; transl. by Lamb (1967)

Socrates used to altogether oppose the account [of akrasia], believing that there is no akrasia [ . . . ] Now this argument contradicts what appears manifestly.
Aristotle, Nicomachean Ethics 1145b25–28; transl. by Irwin (1999)

Socrates and Aristotle disagree about whether or not akrasia exists.1 Socrates seems to think akrasia is impossible. Aristotle thinks that this contradicts the facts. What is akrasia? ‘Akrasia’ (‘ἀκρασία’) literally translates as ‘without’ (‘a-’) ‘power’ or ‘strength’ (‘kratos’). Akrasia is thus a lack of control or, more specifically, a lack of self-governance. From the quoted passages above, we can understand akrasia as doing something whilst knowing or thinking that something else is better and possible. Accordingly, philosophers have distinguished two accounts of akrasia: akrasia as action against your better knowledge, and akrasia as action against your better judgement or belief. As we have seen above, Plato’s Socrates sometimes uses ‘believing’ (‘οἴομαι’, Protagoras 358C7, 358D1, 358B7, 358C) and sometimes ‘knowing’ (‘γιγνώσκω’ or ‘οἶδα’, Protagoras 352D, 357C7, E1, 358B7).

1 Aristotle seems to have coined the term ‘akrasia’, and Plato does not use it—the word does appear twice in the dialogue Definitions but this work is generally not considered authentic. We follow common practice and use ‘akrasia’ as a convenient shorthand for what Plato variously describes as not doing what one knows to be best (Protagoras 352D6–7) or more specifically as being overcome by pleasure (Protagoras 352E6–353A1, 355A8–B1, Laws 633E4).



Here, we focus on akrasia as acting against one’s better knowledge. Note, though, that if someone acts against their better knowledge, then they also act against their better belief. You only know what you believe. According to some interpreters including Aristotle (Nicomachean Ethics 1145b23–7; cf. Taylor 2009, pp. 171, 200–4; Vlastos 1969, pp. 178–88; Walsh 1960, p. 30; Santas 1964; Irwin 1977, p. 105; Gallop 1964, p. 117), Socrates merely denies that we act against our better belief and, as a consequence, thereby also indirectly denies that we act against our better knowledge. Either way, the resulting claim is that it is impossible to do something whilst knowing that something else is better and possible. Let us take a closer look at Socrates’ arguments before turning to Aristotle’s alternative account of akrasia.

3.1.1 Plato’s Socrates

We do not have any writings of Socrates himself but know him from the representations of his disciples. In Plato’s dialogue Protagoras, Socrates is, among other things, engaged in a discussion about knowledge (ἐπιστήμη). He defends the claim that knowledge cannot be overcome: a person knowing good and bad will not be driven to act other than how knowledge bids them (Protagoras 352C). But how does this claim fit with people’s acting against their better knowledge? Are they not swayed by pleasure although they do know good from bad? A dieter may very well know that having dessert is bad for them but they might still be tempted to indulge.

In response, Socrates first points out that when our better knowledge seems overcome by pleasure, we seem to be doing something bad (Protagoras 353D–E). Such actions are not bad because of the pleasure we get to enjoy at the moment when we perform them—the dessert is surely delicious. Rather, they are bad because they help bring about ailments or poverty later on. For instance, the dieter may develop diabetes. The same is true when we avoid painful actions such as strenuous exercise that contribute to greater benefits such as good health in the future.

In Socrates’ view, however, all decisions and actions are determined entirely by pleasure and pain, which are identical to good and bad, respectively (Protagoras 354C). We pursue pleasure and avoid pain. We seek the good and keep away from the bad. Today philosophers call such a view ‘hedonism’, from the Greek word for ‘pleasure’ or ‘joy’, ‘hedone’ (‘ἡδονή’; Moore 2019). This word is precisely the one that Socrates uses in the passage from Plato’s Protagoras that we are currently concerned with. In its descriptive form, hedonism states that only pleasure and pain motivate us. In its normative form, it claims that only pleasure and pain are good or bad. Socrates endorses both claims.


This is important to bear in mind: what Socrates goes on to say is all based on the assumption that hedonism is true. If we give up hedonism, we might also have to give up the conclusions he draws.

Let us return to the puzzle of how someone can do other than how knowledge bids them. Somewhat curiously, hedonism makes this puzzle even more puzzling (Protagoras 355C–D). After all, it seems that we have to say the following about actions like the dieter’s. The dieter is overcome by pleasure (gained from dessert-eating) and therefore acts against what they know to be pleasurable (good health). Because dessert-eating is less pleasurable than good health, it is bad. So the dieter does the bad thing, knowing it is bad, because they are overcome by pleasure—a good thing. They do bad because they are overcome by good. This is ridiculous.

But crucially, the good or pleasure that the dieter enjoys is smaller, lesser, or weaker than the one they forego. After all, dessert-eating is simply not worth the diabetes. Thus akrasia amounts to choosing a smaller or lesser good in return for greater or more bad (Protagoras 355E). The person is overcome by a pleasure that is not truly worth the pain later on.

In situations like this, the agent ought to choose the greater, more, or stronger pleasure and not the smaller, lesser, or weaker one (Protagoras 356A–C; cf. Warren 2014, ch. 5). This is difficult because the former may be remote or delayed when the latter is closer and immediate. Things that are closer appear larger, amounts greater, and sounds louder, even though they are not. Hence, we need to measure pleasures against each other in size, amount, and strength without being deceived by their appearance. They may appear larger, greater, or stronger than they are because they are closer to us in time or space. Akrasia arises when the way in which pleasures appear to us leads us to incorrectly measure their size, amounts, or strengths (Protagoras 356D). Socrates maintains that we need to make the right choices between pleasures and pains, and this in turn requires knowledge: knowledge of the correct measures of pleasure relative to each other. Akratic action thus happens due to a lack of knowledge. Socrates infers that akrasia amounts to a defect of knowledge, a form of ignorance:

Due to claims like this, Socrates’ position is sometimes equated with a straightforward denial of akrasia, as we saw above in Aristotle. For, if akratic action is action against your better knowledge, and if knowledge can never be overcome, then akratic action never occurs.


Due to claims like this, Socrates’ position is sometimes equated with a straightforward denial of akrasia, as we saw above in Aristotle. For, if akratic action is action against your better knowledge, and if knowledge can never be overcome, then akratic action never occurs.

And indeed Socrates’ aim is to reconcile his claim that knowledge is never overcome with what seems to be a counterexample, namely, akratic action. As we have seen, though, Socrates achieves reconciliation not by simply denying the existence of akrasia. He explains that akratic action occurs because the person is overcome by the appearance of the nearer or immediate pleasure. Their art of measurement fails them. They do not correctly weigh pleasures and pains relative to each other, considering the fact that distance may affect how they appear to us. The weak-willed person thus knows what they should do, then becomes overpowered by appearance and becomes ignorant. Probably, we may add, the agent later on reverts to their initial position, once the appearance has faded. The dieter resolves to not have dessert after dinner, is then overcome by temptation, and afterwards regrets what they did.

So it seems that Socrates is telling us that weak-willed agents waver in their opinions. However, one might object, this is simply an account of how agents change their mind over time. But changing one’s mind over time is not weakness of will.2 Changing your mind over time fails to exhibit the three characteristic features of weakness of the will, i.e. being puzzling, defective, and involving a conflict. You might change your mind for very good reasons, perhaps to adapt to circumstances that have changed. Not every change of mind is weak-willed.

Therefore, according to a popular interpretation (Penner 1990, 1996, 1997), the crucial feature of the weak-willed case is that here, we seem “to take things topsy-turvy and to have to change our minds” (Protagoras 356D, transl. W. Lamb). That is, we momentarily take the greater good to be smaller, and the smaller good to be greater. In this, we are mistaken. Socrates thus describes what we have earlier3 called a ‘diachronic’ case of weakness of the will. Here, the agent is weak-willed only over time but not at any particular point in time. Accordingly, strong knowledge is not strong in a given moment but over time (Penner 1997, p. 121).

In other writings, notably the Republic, the Phaedrus, and the Timaeus, Plato’s Socrates makes ontological claims about the human mind and its parts which may add further details and levels of explanation to his account of akrasia. Specifically, Socrates takes the soul (ψυχή) to be divided into three parts that we may call ‘reason’ (‘λογιστικόν’), ‘spirit’ (‘θυμοειδές’, ‘θυμός’), and ‘appetite’ (‘ἐπιθυμετικόν’). These parts, or their elements, can oppose one another. For example, if someone craves a drink, the soul is pulled by a force from its appetitive part (Republic 439b–e). However, it may be held back by a force from the reasoning part. We can imagine that something like this happens in an akratic agent like the dieter: their appetitive part pulls them towards eating dessert, their reasoning part holds them back.

Note that this ontological account is orthogonal to the diachronic account sketched earlier: neither implies the other.

2 As we discussed in Section 2.1.

3 In Section 2.1.


If an agent is akratic because they oscillate between two mutually exclusive options, this does not require that there is a conflict between different parts of their soul. Conversely, a divided mind whose parts conflict need not oscillate between different options over time. In the literature, divided-mind accounts have been popular with various authors throughout history (like Aristotle, Aquinas, the later Davidson, and contemporary Chandra Sripada). We shall refer to the diachronic account as the ‘Socratic’ one.

Let us briefly revisit the three features of weakness of will outlined earlier.⁴ On Socrates’ diachronic account, the akratic agent plausibly experiences a conflict: they are divided between two options and pulled in two opposite directions. They cannot have it both ways. Their behaviour can certainly appear puzzling to observers. After all, they announce they shall not have dessert, then have it anyway, and later on regret and deplore what they did. Lastly, they are also defective and may be appropriately criticized for their wavering. For instance, the dieter may in the long term fail to improve their health, and in the short term cause irritation and lose money when they repeatedly go back and forth on their eating plans.

We can sum up Socrates’ account as follows: when someone seems akratic, they are actually overcome by the appearance of a smaller good (pleasure) and fail in their knowledge of measurement to choose the greater good (pleasure). Restated as a definition in modern words, an agent is weak-willed if and only if they are overpowered by the appearance of a smaller good or pleasure and fail to choose the greater good or pleasure.

3.1.2 Aristotle

We started this section with a persistent disagreement in Western philosophy that manifests itself early on between Socrates and Aristotle. Having looked into Socrates’ position first, we shall now turn to Aristotle’s. Unfortunately, there is no consensus on what, precisely, Aristotle’s account of akrasia is. Parts of the relevant text are corrupted and interpretations diverge. Therefore, we shall focus on those aspects that are relatively uncontroversial and leave the disputed issues open.

In his chapter of the Nicomachean Ethics⁵ that discusses akrasia, Aristotle first describes it as a state of character (ἕξις, hexis). Character states are dispositions or capacities to behave in a certain way but also include desires, feelings, and decisions. Alongside akrasia, Aristotle mentions five other states that we may call ‘enkrateia’, ‘vice’, ‘virtue’, ‘godliness’, and ‘beastliness’ (1145a15–35). They can be grouped into three pairs of contrasting states of three ranks. The supreme state is godliness, a kind of superhumanity that we may at best find in some heroes.

⁴ Section 2.2.
⁵ The work was supposedly named after, dedicated to, or compiled by Aristotle’s son Nicomachos.


It contrasts with beastliness, which is the state of animals that are neither virtuous nor vicious. Virtuous people combine particular virtues such as courage with practical wisdom: they do the right thing for the right reason and in the right way. The virtuous contrast with the vicious who do the wrong thing for the wrong reason. The enkratic person does the right thing, like the virtuous one, yet they lack the appropriate motivation. They have inadequate desires but also know what the right thing to do is. They then overcome these desires and act correctly. The enkratic contrasts with the akratic who also decides to do the right thing but then fails to act accordingly.

Aristotle then specifies the issue of akrasia further as a mistake in practical reasoning. To understand what this mistake exactly consists in, we shall briefly digress on Aristotle’s theory of reasoning. This serves three purposes. First, without understanding how Aristotle conceived of reasoning it may be challenging to understand how he conceived of an akratic’s flawed reasoning. Second, looking at details of Aristotle’s account will make it clear how knowledge may feature in practical reasoning, which in turn allows us to understand toward the end of this section how the dispute between Socrates and Aristotle is eventually resolved. Third, many authors, including Davidson, have used Aristotle’s framework, and grasping it may help us comprehend these other accounts.⁶

For Aristotle as well as for most contemporary philosophers, reasoning may concern the theoretical or the practical domain. We now call these two kinds of reasoning ‘theoretical’ and ‘practical’, respectively. Practical reasoning is concerned with actions, intentions, and other practical subject matters; theoretical reasoning concerns everything else. The distinction may not be exact and practical reasoning might be grounded in theoretical reasoning, or vice versa. This issue need not concern us here.

Aristotle believed that all reasoning takes the form of syllogisms. A syllogism could be an abstract object like a group of propositions or it could be a mental activity, a transition from certain mental states to others. Both are related, and it seems we can variously read Aristotle’s ‘syllogism’ in one way or the other. Either way, in a syllogism a statement about certain things follows with necessity from different statements (Prior Analytics 24b18–20). More precisely, from a first, general or universal (‘major’) claim and a second, specific or particular (‘minor’) claim, a third one follows. For instance, “Socrates is mortal” follows from the major statement “all humans are mortal” and the minor statement “Socrates is human”. This is an example of theoretical reasoning. We can trace the historical development of deductive arguments in modern logic all the way back to Aristotle’s theory about syllogisms.

⁶ Cf. Section 3.3.


However, a syllogism is not identical with a deductive argument in our modern sense, whose conclusion follows with logical necessity from its premises. This will become evident when we consider another example of a syllogism, this time in the practical domain (De Motu Animalium 701a17–20):

(i) I ought to create a good.
(ii) A house is good.
(iii) Straightway I make a house.

Here, the reasoning is not conclusive: besides houses, there are other goods, so one might reason to make something else. Another crucial difference between the theoretical and the practical examples is that the result of the former is a statement or claim but that of the latter is an action. From practical reasoning an action follows, provided it is not prevented by external constraints. Furthermore, as Aristotle points out, the resulting action and corresponding reasoning may differ from one instance to the next: on one occasion one might reason that everyone ought to walk and start walking; on a different occasion, one might believe that everyone ought to remain at rest, and rest as well.

Now that we have a better understanding of how practical reasoning works in Aristotle’s view, let us take a look at how it goes wrong in the akratic agent. Consider the example of a dieter who tries to avoid sugary foods but then has dessert after all. From Aristotle’s perspective, the dieter might reason in either of two ways (Nicomachean Ethics 1147a31–7):⁷

(a) One should not taste anything that is unhealthy.
(b) This is unhealthy.
(c) I do not taste this.

(𝛼) Everything sweet is pleasant.
(𝛽) This is sweet.
(𝛾) I eat this.

Each of these syllogisms, on its own, would lead the dieter to a certain action, namely, to taste the food in question, or to refrain from tasting it. But only one of them can prevail because it is physically impossible to perform both actions. So what is the agent to do? Aristotle implies that the right thing to do would be to refrain from eating the food.

⁷ How to interpret the passage from 1147a24 onwards has been highly controversial. For one thing, Aristotle neither states a particular claim like (b) nor the major premise (a) but merely says “assume that someone has the general belief hindering him from tasting” (1147a32–3). It is thus debatable what (a) and (b), if they are present at all, look like. Here, I describe the interpretation of those (Bostock 2000; Cooper 1975; Hardie 1968; Robinson 1969; Santas 1969; Urmson 1988; Walsh 1960; Wiggins 1979; Woods 1990) who take Aristotle to suggest that an emotion prevents the akratic agent from using the minor claim (b) in reasoning and action. Others (Charles 1984; Dahl 1984; Kenny 1966) suggest that the akratic agent does not have the minor claim (b) at all, and a major claim like (a∗) “one should not taste anything sweet”. On this view, the akratic failure consists in a failure to reason from (a∗) rather than from (𝛼).


Ideally, then, the (enkratic) agent would reason from (a) and (b) to (c) and abstain. The problem of the akratic is that they fail to do this. There are several possibilities of what precisely goes wrong. They might not believe (b). Alternatively, they might believe (b) in some sense but fail to attend to it or merely voice (b) “like a drunkard” (cf. Nicomachean Ethics 1146b33–1147a7). That is, just as a heavily intoxicated person may recite a poem without really knowing what they are doing, an akratic person may say “This is unhealthy” without fully knowing what they are saying. Aristotle thus distinguishes between having knowledge but not using it, and having and using knowledge (1146b33–1147a7). Either way, the akratic dieter fails to arrive at (c); instead, they reason from (𝛼) and (𝛽) to (𝛾) and consume the dessert.

But why does the akratic person fail to endorse and act on (b)? They fail because a desire hinders them from translating their knowledge into action (Nicomachean Ethics 1147a35). In the example above, appetite for the sweet food leads the agent on and moves them to taste the food they deemed unhealthy. Aristotle divides desires (ὀρέξεις, orexeis) into three types, following Plato’s three-fold division of the soul (1135a5–13): rational desires or wishes (βουλήσεις, bouleseis) about what is believed to be good, arational appetites or urges (ἐπιθυμίαι, epithumiai) for something believed to be pleasant, and arational spirits or tempers (θυμοί, thumoi) like pride or anger for what appears good. The akrates lacks control over the two latter kinds of desires and accordingly acts in violation of their own, better decision (προαίρεσις, prohairesis). Aristotle distinguishes incontinence from similar phenomena like intemperance or softness (VII.6–7), and divides it into two sub-categories labelled ‘impetuosity’ (‘προπέτεια’, propeteia) and ‘weakness’ (‘ἀσθένεια’, astheneia; 1150b20). In a case of weakness, the agent engages in but then abandons the result of deliberation due to a desire. In a case of impetuosity, the agent does not even deliberate because of their desire. An impetuous agent is, presumably, akratic in that they could make the right decision based on prior deliberation, without any need of deliberating again (1147a31–5, 1117a20; Irwin 1999, p. 265). In both species of akrasia, the agent fails to act as their knowledge would require because desire gets in their way.

Let’s return to the family dispute with which we started. Socrates claims that it is impossible to do something and at the same time think that something else is better and possible. Aristotle says that this contradicts the plain facts. We have seen that what Aristotle calls the plain facts (akrasia) is a temporary change of mind for Socrates. A strict synchronic version of akrasia is indeed impossible, according to Socrates.⁸ We have also seen that, in Aristotle’s own view, akrasia is a failure of correct practical reasoning or a failure to use the minor premise. On Aristotle’s view, though, knowledge is universal. That is, an agent may know a universal claim as expressed by a major statement in a practical syllogism like (a).

⁸ This issue resurfaces for an economic account of human agency, cf. Chapter 4.


In contrast, particular claims like (b) and (𝛽) are based on perception and therefore cannot be known. On a side note, this is why animals cannot be akratic or enkratic, vicious or virtuous: they act on perception and not on knowledge. As the akratic person fails to use and act on a minor premise, they do not strictly speaking act against their better knowledge. As a consequence, Aristotle ends up agreeing with Socrates in his diagnosis that akrasia involves a certain kind of ignorance (Nicomachean Ethics 1147b13; Bostock 2000, p. 130; Irwin 1999, p. 261; Robinson 1969, p. 146). On both views, the akratic person temporarily lacks full knowledge. In the end, Aristotle’s disagreement with Socrates is thus less profound than it seemed at the outset.

3.2 Hare

Although Richard Hare’s account of weakness of will is historically a relatively recent approach, it aligns closely with the ancient ones that we have so far considered. Hare (1919–2002) was a British philosopher who developed a theory called ‘prescriptivism’ (Hare 1952). On this view, a statement is prescriptive if it entails an imperative. For example, the statement “S ought to 𝜙” plausibly entails the imperative “𝜙!”, addressed at S. In contrast, a statement like “it is raining” does not entail an imperative; it is descriptive. The distinction between prescriptive and descriptive is not sharp, though. Some statements may at the same time be both prescriptive and descriptive, such as “physicians tend to their patients’ needs”.

Hare maintains that sincerely uttering a prescriptive statement like “I ought to 𝜙” and thus assenting to the corresponding imperative entails the prescribed action, provided it is not externally constrained: “It is a tautology to say that we cannot sincerely assent to a command addressed to ourselves, and at the same time not perform it, if now is the occasion for performing it and it is in our (physical and psychological) power to do so” (Hare 1963, p. 79). This claim seems to echo the Aristotelian view that the ‘conclusion’ of a practical syllogism is an action.⁹ It also entails motivational judgement internalism, the claim that if an agent judges they ought to do something, they are at least somewhat motivated to act accordingly.¹⁰ In Hare’s words, “one cannot make a moral judgement sincerely [ . . . ] without being motivated in some way towards actions in accordance with it” (Hare [1996] 1999a, p. 96; cf. Hare 1999b, pp. 96–7). In this respect, Hare agrees with Socrates in Plato’s Protagoras.¹¹ Hare himself acknowledges the similarity of his view with Aristotle’s and Socrates’; in fact, he believes that these philosophers were among the historical precursors of prescriptivism (Hare 1992, 1998).

⁹ Cf. Section 3.1.2.

¹⁰ Cf. Section 2.2.

¹¹ Cf. Section 3.1.1.


However, Hare thought that, although a statement like “S ought to 𝜙” may be used in a prescriptive way, it does not always have to be. Sometimes, moral and normative language may be employed non-prescriptively, in an “off-colour” way (Hare 1963, p. 68). For this, Hare provides a range of examples, including cases of what he describes as conventional, “inverted-commas”, and ironic uses (Hare 1952, p. 120, cf. pp. 124–5, 160–72). These are all cases of non-prescriptive or not fully prescriptive use. Let us consider them in greater detail.

To begin, when a speaker employs the conventional use of normative or moral language, they may refer to sociological facts (Hare 1952, 1963, 1992). That is, when someone says “I ought to 𝜙” they might not mean that they ought to 𝜙 but merely that social standards demand of them to 𝜙 in the given situation. For example, in this sense the claim “I ought to send my condolences” may express “people expect me to send condolences”, with the agent paying lip-service to a convention that they do not concur with.

Similarly, inverted-commas use of moral or normative language is involved when “alluding to the value-judgements of other people” rather than making those judgements oneself (Hare 1952, p. 124). When used in this way, the statement “S ought to 𝜙” again expresses a normative perspective that the agent does not share. For example, “for a good Gothic revival, you ought to see that building” may express that the building in question has been praised by architects as a good Gothic revival. Whether the speaker personally believes that the building is worth visiting remains an open question.

Moreover, this inverted-commas use may verge into an ironic use when it conveys critique or dissent. The speaker then expresses the value-judgements of other people and at the same time distances themselves from them. In this vein, someone who despises modern art may say to a tourist enquiring about contemporary exhibits: “you ought to go to the Museum of Modern Art” (Hare 1952, p. 125).

In yet other cases, the speaker might mean that they have a feeling of obligation to do something (Hare 1952, pp. 167–9). These feelings are not “full acceptance of prescriptions” (Hare 1992, p. 1306). Instead, when expressing these feelings, an agent states a psychological fact about themselves. The sense or feeling of obligation may be entrenched by education and so strong that the agent themselves treats it as a matter of fact (Hare 1952, § 11.2). When they employ moral or normative language to express it, the agent may use it, “as it were, unconsciously in inverted commas” (p. 167). This supposedly unconscious inverted-commas use may also be involved in cases of self-deception (Hare 1963, p. 83). Hare suggests that a self-deceived agent has “escaped his own notice using [normative language] in an off-colour way” (p. 83). This seems to indicate that self-deception may result from a process of insincere or hypocritical use of normative language in which the deception directed at third parties eventually becomes internalized.


Insincerity is a further example of descriptive use (Hare 1963, § 5.9). Here, the agent does not mean at all what they say; a fortiori a seemingly prescriptive claim like “S ought to 𝜙” is not meant sincerely and prescriptively either.

Relatedly, agents may judge dispositionally or in general that they ought to 𝜙 but believe that the particular case at hand is an exception (Hare 1963, p. 53, 1992, p. 1305). For example, an agent may believe that they ought to stop at red traffic lights but on a special occasion not act on this belief because they are urgently rushing a patient to the hospital. Presumably, making exceptions can be more or less hypocritical. For instance, if a dieter repeatedly makes exceptions from their ban on sugary foods, their statement “I ought to abstain” may not seem prescriptive. Hare also assimilates diachronic weakness of will¹² to exceptions; here, an agent judged in the past that they ought to do something but later on either forgets this judgement or forgets that the time to act on it has come (Hare 1963, p. 83). In all of these cases, the person does not make a prescriptive judgement about the present case.

Rather innocuous are situations where the agent lacks full understanding of what they are saying, as in a drunk person reciting a poem¹³ (Hare 1992, pp. 1305–6). Here, the agent may utter “I ought to 𝜙” but does not believe what they say.

In all of these cases, the agent is not using moral or normative language in a prescriptive way. If they claim that “they ought to 𝜙”, they express a merely descriptive claim, without taking it to entail an imperative. Thus a statement like this does not require the agent to act accordingly.

In addition to descriptive and prescriptive use, Hare also seems to allow for borderline or hybrid cases (Hare 1992, 1998). For example, he claims that an agent is sometimes unsure or uncertain about what they ought to do (Hare 1963, p. 83, 1992, p. 1306). Then they may use words like ‘ought’ in the prescriptive way but, because they lack “complete moral conviction”, do not “commit themselves to action” (Hare 1963, p. 83). In other words, Hare seems to think that internalism is true only for cases where the agent uses normative language in a fully or completely prescriptive way. In the following, we thus bracket the borderline or hybrid case for the sake of simplicity.

This is, in a nutshell, Hare’s theory of prescriptivism (cf. Hare 1998; Price 2019; Seanor, Fotion, and Hare 1988). Why is weakness of will relevant to it? Imagine that someone sincerely utters a prescriptive statement. On Hare’s view, this person must act in accordance with that statement if they are able to. That is, prescriptivism does not allow for actions against a sincere prescriptive judgement. But thereby it does not seem to account for weak-willed actions (Spitzley 1992; Stroud and Svirsky 2021, § 1).

¹² Cf. Section 2.1.

¹³ This case is also mentioned by Aristotle (Section 3.1.2).


Hare thus faces the challenge of explaining how his view can be reconciled with what Aristotle said to be manifest: that people sometimes sincerely judge that they ought to and are able to 𝜙, but fail to 𝜙 anyway. Hare calls this phenomenon ‘moral weakness’ or ‘weakness of will’. Throughout his career, Hare repeatedly responds to this challenge. He suggests broadly three ways in which moral weakness could be explained away. We shall take them in turn.

To begin, in moral weakness, the agent may use moral language merely in a descriptive or less than fully prescriptive way, as in one of the examples given above (Hare 1952, p. 169). In this case, the agent does not act against a genuinely prescriptive judgement. Therefore, there is no moral weakness.

Second, the agent may be expressing a prescriptive statement but be physically or psychologically unable to act in accordance with it (Hare 1963, ch. 5; cf. Spitzley 1992). Hare maintains that in “typical cases” of moral weakness (Hare 1963, p. 80), the agent lacks the psychological ability to resist a temptation. He gives two examples to illustrate this. His first example is that of Medea. In one version of the ancient Greek myth, Medea falls in love with Jason and helps him although she deems it to be wrong and unreasonable. Hare quotes Ovid’s description of Medea: “her struggling reason could not quell desire. ‘This madness how can I resist?’, she cried; ‘[ . . . ] I see and praise the better: do the worse’ ” (Hare 1963, p. 78; Ovid, Metamorphoses book VII, lines 11–25).¹⁴ His second example is a passage from St. Paul in the Bible: “if what I do is against my will, it means that I agree with the law and hold it to be admirable. But as things are, it is no longer I who perform the action, but sin that lodges in me” (Hare 1963, pp. 78–9; Romans VII: 16–17).

Both examples, Hare maintains, illustrate that weak-willed agents lack the psychological power to obey the imperatives entailed by their moral judgements. This seems plausible: Medea speaks of a “madness” she cannot resist, St. Paul of “sin” performing the action. Both Medea and St. Paul, Hare suggests, are unable to resist temptation, where resisting temptation is a psychological ability (Hare 1963, 1992). It is not a “physical inability” (Hare 1992, p. 1306) like a bodily impairment because, in such a case, the agent would not be under a normative obligation or requirement in the first place. A physical inability “causes an imperative to be withdrawn altogether” (Hare 1963, p. 80). In other words, it is not the case that a physically unable person ought to 𝜙. In contrast, a psychologically unable person ought to 𝜙 but cannot. In this vein, Hare claims that moral weakness is a case of “ought but cannot” (Hare 1963, p. 68). For physical inability, he thus seems to agree with the widely accepted dictum that ‘ought’ implies ‘can’ (Griffin 2010; Haji 2002, ch. 4; arguably

¹⁴ Hare does not provide details of this translation and I could not find them either.


Kant, 3:524, 4:399, 5:30, 6:47–62, 6:380, 6:401–9; King 2017; Southwood 2016; Streumer 2007; Vranas 2007; Wedgwood 2013b; Zimmerman 1996, ch. 3). But for psychological inability, he does not. For Hare, this inability seems to be on a par with inability due to lack of knowledge, as in “I ought to see him but I cannot because I do not know where he is” (Hare 1963, p. 52).¹⁵

Hare acknowledges that there may be no strict divide between physical and psychological ability. For instance, he regards “compulsive neuroses” as examples of psychological inability that “come close” to physical ones (Hare 1963, p. 82). He also claims that there is “no determinable point at which S stops being able to resist” a temptation (Hare 1992, p. 1306). Although moral weakness may thus sometimes amount to compulsion for Hare, it need not; for one thing, he does not regard “compulsion in the pathological sense” as inability (Hare 1992, p. 1306).¹⁶

A third way in which Hare seeks to explain away moral weakness is by appeal to two different levels of moral thinking¹⁷ (Hare 1981, 1992, 1998; Price 2019). On an intuitive level, moral thinking is concerned with prima facie moral principles. These principles are taken as given and can conflict. On another level that Hare calls “the critical level”, moral principles cannot conflict. On both levels, moral principles are prescriptive. Only on the intuitive level, though, may the agent pronounce a moral principle as true even when they are not fully “set on obeying” it (Hare 1992, pp. 1306–7). This is how they may end up acting against it. For example, if an agent finds that their obligations of friendship require them to break a promise, they may decide that they shall break it, even though they continue to endorse the moral principle that promises must be kept (Hare 1981, chs. 2–3). Disobeying prima facie moral principles is not problematic because these principles allow agents to act against them. In other words, it is consistent to endorse them whilst at the same time breaking them in the situation at hand (Hare 1981, pp. 59–61).

¹⁵ Hare seems to waver between someone making an ‘off-colour’ judgement because they (think they) cannot act on it, and someone making a genuinely prescriptive judgement that they cannot act on. For example, on the one hand, Hare claims that, if a speaker makes an exception for themselves, they “substitute the prescription for something weaker” (Hare 1963, p. 53), thereby, as it were, lifting a corner of a net and allowing themselves to escape. He applies the same analysis and metaphor to cases of psychological inability, where the prescription is “downgraded” and “no longer carries prescriptive force” (Hare 1963, p. 80). On the other hand, Hare claims that Medea and St. Paul are subject to “imperatives that are entailed by the moral judgements which they are making” (Hare 1963, p. 79); he also states that a weak-willed agent “thinks that he ought but cannot [ . . . ] because he cannot resist the temptation not to” (Hare 1992, p. 1306). Here, we focus on the latter case because the former seems to amount to another version of the suggestion that, in moral weakness, the agent does not use moral language prescriptively.
¹⁶ Cf. Section 2.2 for a discussion of compulsion.
¹⁷ Hare’s student Singer (2002) suggests that this proposal may align with dual-process models, cf. Section 2.2.


This is what may happen in moral weakness and moral conflict: two prima facie principles conflict and we cannot obey them both.¹⁸ We need to resolve this conflict on the critical level, and our action reveals¹⁹ what we eventually end up prescribing to ourselves (Hare 1992, p. 1307). The difference between moral conflicts and moral weakness lies in the nature of the principles that conflict: in moral conflict, these are two moral prescriptions but in moral weakness, at least one of them may be a non-moral prescription (Hare 1981, pp. 59–61). In this case, the agent allows for a moral principle to be overridden by a non-moral prescription; the agent takes a “moral holiday” (Hare 1981, p. 60). For example, imagine that an agent experiences a conflict between the prescription that they ought to enjoy as much dessert as possible and a prohibition to take someone else’s share (Hare 1981, p. 57, interpreting Austin (1956)’s ice cream example).²⁰ They may allow the self-serving prescription to override their moral principle without ceasing to endorse the latter. But even when the conflict is thus resolved, Hare maintains, the principle that we did not act on remains on our mind and may cause us to experience compunction (Hare 1981, p. 29; cf. Williams 1965). This is why we may say “I ought to have done otherwise”.

In sum, Hare’s theory of prescriptivism entails a strong form of internalism. From this perspective, the agent acts in accordance with their moral judgement, as long as they are physically and psychologically able to. Like the Socratic position, this view faces the challenge of explaining weak-willed action. In weakness of will, the agent seems to make a sincere normative or moral judgement, yet acts against it. Hare responds that in these cases the agent either does not make a truly prescriptive judgement, or they do not truly make it, or they are (psychologically) unable to act accordingly. This is similar to the Socratic solution, on which the akratic agent changes their mind and moral judgement in the face of temptation. Moreover, Hare shares Aristotle’s view that some weak-willed agents are unable to resist a temptation or desire to act in a way that violates a moral principle.

Not surprisingly, then, Hare regards weakness of will as a serious defect, even as immoral, as suggested by his label ‘moral weakness’. In his view, a weak-willed agent is blameworthy (Hare 1963, 1992). Hare also regards “weakness of will as just one example of conflicts between prescriptions” (Hare 1981, p. 60). This is because he thinks that both weakness of will and conflicts arise because prima facie principles appear to require two incompatible things, and in both cases a moral principle may be overridden (Hare 1981, pp. 53–60).

¹⁸ This point is at the heart of Davidson ([1970] 1980c)’s critique of Aquinas and Aristotle, cf. Section 3.3.
¹⁹ This claim seems similar to the suggestion that we reveal preferences in action, cf. Section 4.1.
²⁰ Cf. Section 2.2.


Lastly, Hare states that weakness of will “clearly is a puzzle” (Hare 1992, p. 1304). He seems to think that the puzzle consists in explaining why we sometimes do not act in accordance with our judgements. This calls for an explanation precisely because, in Hare’s view, there is an internalist connection between judgement and action. Hare’s account of moral weakness as a failure to act on one’s moral judgement thus seems to fit the characterization of weakness of will as a failure by the agent’s own lights,²¹ and it seems to have its three characteristic features: moral weakness is a defect, it involves a conflict, and it is puzzling.

3.3 Davidson

The US-American philosopher Donald Davidson (1917–2003) has been highly influential in philosophy generally and in action theory in particular. In this section, we focus on his landmark essay on weakness of the will but also take a look at other related publications. Although ‘weakness of the will’ appears in the title of Davidson’s seminal essay, he frequently uses the term ‘incontinence’. Presumably, this is due to Ross’s translation of the Greek ‘akrasia’.²² Davidson’s ‘incontinence’ is thus a technical term for ‘weakness of the will’, and we shall follow his usage.

Early on in his seminal contribution, Davidson ([1970] 1980c, p. 22) defines weak-willed action as follows:²³

“In doing x an agent acts incontinently if and only if:
(a) the agent does x intentionally;
(b) the agent believes there is an alternative action y open to him; and
(c) the agent judges that, all things considered, it would be better to do y than to do x.”

According to this definition, a dieter acts incontinently in eating dessert iff²⁴ they eat it intentionally, believe that there is an alternative open to them, such as abstaining, and judge that, all things considered, it would be better to abstain rather than to indulge.

Three aspects of Davidson’s definition are noteworthy. First, like Hare and unlike Aristotle and Plato, Davidson focuses on a single weak-willed action rather than a person or state of character. Second, he takes weak-willed action to be an action against the agent’s belief rather than his knowledge, which allows for

²¹ Cf. Section 2.2.
²² I thank Al Mele for this pointer.
²³ I have added line breaks and the labels ‘(a)’, ‘(b)’, and ‘(c)’ for the sake of convenience.
²⁴ A philosopher’s ‘iff ’ is shorthand for ‘if and only if.’


incontinence in cases where the agent’s belief is false. Third, condition (c) invokes comparative judgements of the form “x is better than y”. As Mele (2012) has pointed out, this may be problematic. Imagine that the dieter judges that skipping dessert is better than having it, and that having dessert plus going for a run afterwards is better than skipping it. On Davidson’s definition, the dieter is weak-willed when they have dessert because they believe that skipping dessert is better and open to them, even when they go for a run after dinner. But we might not wish to say that. Another counterexample to Davidson’s definition might be that of a satisficer who believes that y is better than x but both y and x are good enough. This agent might not be weak-willed when they do x rather than y, even though they would be on Davidson’s definition. Either way, these are technical difficulties that we shall set aside here.

With the definition on the table, Davidson (p. 22) then points out that incontinence, if it exists, challenges the following claim: “in so far as a person acts intentionally he acts in the light of what he imagines (judges) to be the better.”

This is a statement of the ‘guise of the good’ doctrine that we encountered above:²⁵ we act intentionally in the light of some imagined good. Davidson (p. 23) proposes to break down the doctrine into the following two principles P1 and P2:

“P1. If an agent wants to do x more than he wants to do y and he believes himself free to do either x or y, then he will intentionally do x if he does either x or y intentionally. [ . . . ]
P2. If an agent judges that it would be better to do x than to do y, then he wants to do x more than he wants to do y.”

The first connects intentional action and wanting (or a relative preference), the second connects wanting to a (relative) value-judgement. Together, they entail: “if an agent judges that it would be better for him to do x than to do y, and he believes himself to be free to do either x or y, then he will intentionally do x if he does either x or y intentionally.”
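The entailment can be made explicit in a schematic rendering (my shorthand, not Davidson’s own notation). Writing B(x, y) for “the agent judges it would be better to do x than to do y”, W(x, y) for “he wants to do x more than y”, F for “he believes himself free to do either”, and I(x) for “he will intentionally do x if he does either x or y intentionally”, the conclusion follows simply by chaining the two conditionals:

```latex
% Schematic rendering of Davidson's P1 and P2 (my notation, not his)
\begin{gather*}
\text{P2:}\quad B(x,y) \rightarrow W(x,y)\\
\text{P1:}\quad \bigl(W(x,y) \wedge F\bigr) \rightarrow I(x)\\
\text{hence:}\quad \bigl(B(x,y) \wedge F\bigr) \rightarrow I(x)
\end{gather*}
```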

This claim is familiar, too: it is a version of motivational judgement internalism,²⁶ which we have already encountered in Socrates²⁷ and Hare.²⁸ Applied to the dieter, this claim states that because the agent judges that it would be better for them to

²⁵ In Section 2.2.
²⁶ Cf. Section 2.2.
²⁷ Section 3.1.1.
²⁸ Section 3.2.


abstain rather than to indulge and believe themselves free to do either, they will intentionally abstain if they do anything at all intentionally.

Note that internalism is not identical to the ‘guise of the good’ doctrine. Internalism claims that, roughly, if we judge that something is good, then we are motivated to act accordingly. The ‘guise of the good’ doctrine above reverses the logical order of internalism: if we act intentionally, then we act in the light of some imagined good. By breaking down the ‘guise of the good’ doctrine into the principles P1 and P2, Davidson has effectively switched to a discussion of internalism. In what follows, Davidson focuses on internalism and its conflict with incontinent actions.

Like Aristotle and Aquinas, Davidson invokes syllogisms to locate the root of the problem. He begins with what he takes to be the schema of Aquinas, who broadly follows Aristotle (cf. Aquinas, Summa Theologiae Ia–IIae q. 77 a. 2). We may apply it to the dieter’s case as follows (p. 33):

(M1) One should not taste anything that is unhealthy.     (M2) Pleasure is to be pursued.
(m1) This is unhealthy.                                    (m2) This is pleasant.
(C1) One should not taste this.                            (C2) This is to be pursued.

The problem of the incontinent dieter is that they make the inference on the right-hand side (the “side of lust”, as Davidson calls it) rather than the inference on the left-hand side (the “side of reason”).

However, Davidson objects to this analysis. For, he argues, it ascribes to the agent a near-contradiction like “One should not taste this and this is to be pursued.” But, he claims, the weak-willed agent does not fall prey to a logical self-contradiction. The analysis is inadequate.

Furthermore, Davidson maintains that this issue is not merely problematic for Aquinas’ and Aristotle’s account of weak-willed action but also for their theory of practical reasoning more generally. For, it cannot even account for reasoning in cases where an agent has conflicting (prima facie) beliefs. In these cases, the agent has two beliefs, one in favour and one against acting a certain way. For example, someone might think “I ought to do it because I promised it to A” and “I ought to not do it because I promised it to B”, and someone else might think “x is better than y because it is courageous” and “y is better than x because it is honest”. An agent may have such a pair of beliefs without thereby contradicting themselves (Davidson [1970] 1980c, p. 34).

Therefore, Davidson proposes that practical reasoning needs to take a different form: the agent must take all considerations together and reach an overall conclusion (p. 36). This is similar in cases of probabilistic reasoning. In these, an agent makes inferences from statements of probability like:


(i) If the barometer falls, it almost certainly will rain.
(ii) Red skies at night, it almost certainly won’t rain.

Imagine that the agent also observes that:

(iii) The barometer is falling.
(iv) The sky is red tonight.

What are they to infer from (i)–(iv)? From (i) and (iii) they may infer that it almost certainly will rain. At the same time they may conclude from (ii) and (iv) that it almost certainly won’t rain. However, this will lead them to the claim that it almost certainly will rain and almost certainly won’t rain, which is near-contradictory. But this is not how we do and should reason in these cases. Instead, we reach conclusions like “probably, it won’t rain”. In the practical case, we reach conclusions like “one should not taste this”. How is that?

In response, Davidson (p. 37; drawing on Hempel 1965) first points out that practical and probabilistic claims need to be understood as relative or conditional in their entirety. For instance, (i) should be understood as “probably, [if the barometer falls, it will rain]”, not as “if the barometer falls, it probably will rain”, as this might conflict with a claim like “red skies, it probably won’t rain”. In other words, on Davidson’s analysis the conditional is within the scope of “probably”. The two claims “probably, [if the barometer falls, it will rain]” and “probably, [if the sky is red, it won’t rain]” do not lead to a conflict when combined with (iii) and (iv).

The same, Davidson argues, is true for practical reasoning: all normative beliefs are relative to some consideration. For example, “One should not taste anything that is unhealthy” should be read as “prima facie, [if something is unhealthy, then one should not taste it]”. Then, on this view, it is possible for the agent to draw a conclusion without making conflicting statements. In the probabilistic case, the agent may conclude “probably, [it will rain, given (i)–(iv)]” or “probably, [it won’t rain, given (i)–(iv)]”. In other words, they take into account all the available evidence when making their inference. The same is true in the practical case: the dieter may infer “prima facie, [not tasting is better, given (M1)–(m2)]” or “prima facie, [tasting this is better, given (M1)–(m2)]”. Such prima facie judgements are still conditional on and relative to the available evidence.
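Davidson’s scope point can be illustrated with a toy numerical example (the joint probabilities below are my own invented numbers, not Davidson’s or Hempel’s): each conditional is highly probable taken by itself, yet detaching both conclusions would yield the near-contradiction above, while conditioning on the total evidence yields a single coherent verdict.

```python
# A toy illustration of reasoning from total evidence. The joint
# probabilities over (rain, barometer falls, red sky) are invented.

joint = {
    # (rain, falls, red): probability
    (True, True, True): 0.02,
    (True, True, False): 0.28,
    (True, False, True): 0.01,
    (True, False, False): 0.09,
    (False, True, True): 0.08,
    (False, True, False): 0.02,
    (False, False, True): 0.39,
    (False, False, False): 0.11,
}

def p(rain=None, falls=None, red=None):
    """Probability of the outcomes matching the given constraints."""
    return sum(pr for (r, f, s), pr in joint.items()
               if (rain is None or r == rain)
               and (falls is None or f == falls)
               and (red is None or s == red))

# Each conditional is highly probable taken by itself ...
print(p(rain=True, falls=True) / p(falls=True))   # P(rain | barometer falls) = 0.75
print(p(rain=False, red=True) / p(red=True))      # P(no rain | red sky) = 0.94
# ... but relative to the *total* evidence there is one coherent conclusion:
print(p(rain=True, falls=True, red=True) / p(falls=True, red=True))  # = 0.20
```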


However, a judgement that prima facie, eating dessert is better (or worse), given the available evidence, is insufficient for action. For action, we need an unconditional all-out judgement (p. 40). That is, the dieter needs to arrive at a judgement to the effect that they taste dessert or do not taste dessert. To infer an all-out judgement like “I do not taste this” from a prima-facie judgement like “all things considered, not tasting is better than tasting”, the agent needs to apply what Davidson calls the “principle of continence”. (In the probabilistic case, the analogous principle is called the “requirement of total evidence for inductive reasoning”.) It requires the agent to “perform the action judged best on the basis of all available relevant reasons” (p. 41).

An incontinent agent like the dieter makes precisely this mistake: they violate the principle of continence. Even though they infer from their available evidence that, all things considered, it is better to not taste dessert, they fail to judge, all-out, that they abstain. The dieter does not state contradictory claims or commit a logical blunder. However, they do recognize that they are acting other than they should, given their overall reasons and evidence. In short, on Davidson’s analysis, the incontinent agent makes a mistake in practical reasoning: they fail to apply the principle of continence (correctly), and this failure makes them irrational (though they act for a reason). As a consequence, the incontinent person fails to form the relevant all-out judgement required for successful action.

In light of these conclusions, Davidson ([1970] 1980c, p. 40) revises his definition of incontinence as follows: “an action, x, [is] incontinent provided simply that the agent has a better reason for doing something else: he does x for a reason r, but he has a reason r′ that includes r and more, on the basis of which he judges some alternative y to be better than x.”

On this definition, the dieter is weak-willed when they eat dessert for the reason that they enjoy it, yet also have a reason to decline the enjoyable but unhealthy food, on the basis of which they judge that they should forego dessert.

Let us return to the issue of reconciling motivational internalism and incontinence, with which Davidson started his paper. This problem can now be solved. On the one hand, internalism is salvaged: if an agent judges (all-out) that doing x is better than doing y, then they will do x intentionally rather than y. On the other hand, weakness of will may exist. An incontinent agent judges that, all things considered (prima facie but not all-out), it would be better to do y than to do x. Incontinence and internalism are not mutually exclusive because the former concerns a prima facie judgement and the latter concerns an all-out judgement.

Against this suggestion, it has been objected that there are cases of weak-willed action where the incontinent person acts against an all-out judgement, not merely against a conditional or prima facie judgement (Bratman 1979). We shall return to this issue below.²⁹

The remainder of this section presents some later developments in Davidson’s account of weakness of the will which are mostly a consequence of his shifting views on intentions.³⁰ Let us first consider three claims of his earlier work (notably Davidson 1963).

²⁹ In Section 3.4.
³⁰ Cf. Section 2.1 for details on intentions.


First, Davidson claimed in his earlier writing that to know a primary reason why someone acted is to know the intention with which the action was done (Davidson 1963, p. 689). Therefore, he is sometimes believed to propose a reductionist claim, namely that an agent’s primary reason for 𝜙ing is the intention with which the agent 𝜙s. On this view, if the dieter eats dessert with the intention to enjoy it then their primary reason for eating dessert is that they enjoy it. This proposal has been controversial, yet we shall not discuss it here.

Second, Davidson claimed that “a primary reason consists of a belief and an attitude” (Davidson 1963, p. 688). The attitude here is a pro-attitude or conative state like a desire or a wish. So it seems that Davidson equates a primary reason with a pair of a belief and a pro-attitude. This claim is reminiscent of what has been called the ‘Humean theory of motivation’ (mistakenly attributed to the Enlightenment philosopher David Hume). Roughly, the Humean theory of motivation states that motivation requires, in addition to a belief or judgement, a conative state or pro-attitude (Cohon 2008; Rosati 2016). From this claim and the earlier one, that intentions-with-which are primary reasons, we can infer that the intention with which an agent acts is, on this view, a pair of one of their beliefs and a pro-attitude. For example, the intention with which the dieter eats dessert—to enjoy themselves—is a desire to enjoy themselves paired with the belief that eating dessert is an enjoyment.

Third and lastly, Davidson (1963, p. 686) claimed that “the primary reason for action is its cause.” That is, what causes the dieter to eat dessert is also their primary reason for doing so.

All three claims taken together, then, entail that an agent’s pro-attitude and belief cause their action. This view has been dubbed ‘causalism’, as it is a causal analysis of intention in action (von Wright 1971). However, causalism has been controversial for a number of reasons. Davidson ([1978] 1980d, p. 89) himself raised the following issue: it seems that agents sometimes have intentions without ever doing anything at all. Davidson calls this “pure intending”. Because pure intending is neither preceded by conscious deliberation nor followed by overt consequences, it is a state or an event independent of both the reasons for the intended action and this action itself. This led Davidson to the view that at least some forms of intending are irreducible.

Moreover and relevant to the topic of weakness of the will, Davidson ([1978] 1980d, p. 99) also claimed that, at least for pure intending, “the intention simply is an all-out judgement”. Recall that the all-out judgement results from a practical syllogism, i.e. a piece of practical reasoning. Furthermore, recall that on Davidson’s analysis, the incontinent person fails to form the correct all-out judgement. If this all-out judgement is an intention, then it follows that on Davidson’s revised view, the incontinent person fails to form the adequate intention given their evidence. That is, the dieter reasons correctly that, all things considered, they better abstain from having dessert. But they still intend to have it anyway.


The resulting view is that weakness of will is a failure to form an intention in accordance with one’s practical reasoning. We may thus re-state Davidson ([1970] 1980c, p. 40)’s earlier definition: an agent is weak-willed in intending to do x for a reason r iff they have a reason r′ that includes r and more, on the basis of which they judge some alternative y to be better than x and, we may add, open to them. This account persists in contemporary debates about rationality³¹ as the so-called ‘enkratic requirement’ that we intend to do what we believe we ought to do (cf. e.g. the special issue on Enkrasia in Organon F, Fink 2013).

In more recent work, Davidson ([1982] 2004) proposes a divided-mind ontology, thereby continuing a tradition we have, so far, encountered in Plato and Aristotle. Here, Davidson proposes that the mind has several semi-autonomous substructures. That is, the mind can be broken down into several parts or substructures. Each of these substructures is semi-autonomous in that it can be described in the way we would normally describe an individual agent. For example, within a subdivision, mental events may cause other mental events consciously or unconsciously. The subdivision itself can be consistent and even rational. However, different parts of the mind may endorse mutually inconsistent beliefs or judgements. Davidson maintains that the parts can overlap such that one mental event in one part may cause a mental event in another part without being a justificatory reason for it (p. 181).

Davidson’s divided-mind ontology is far less specific than that proposed by some authors we have encountered earlier. For one thing, Davidson does not specify the nature or number of subdivisions, and he does not explain how weak-willed or irrational operations of the divided mind differ from rational or not weak-willed ones. Perhaps for these reasons, it has not been as influential as the proposal he presents in Davidson ([1970] 1980c). Henceforth, we shall focus on that earlier account in this book. As we shall see in the next sections, contemporary writers like Bratman, Mele, and Holton have done the same.

3.4 Bratman and Mele

Michael Bratman (born 1945) and Al Mele (born 1951) are US-American philosophers. Mele is known for his work on free will, agency, and self-control. Bratman has developed an influential account of intentions. We focus on their research into weakness of the will, which builds on Davidson’s account.³² The first part of this section details these suggestions. Bratman and Mele also discuss delay discounting theory in their research. This discussion is the topic of the second half of the section.

³¹ Cf. Chapter 7.
³² Laid out in Section 3.3.


To begin, recall that Davidson ([1970] 1980c)’s theory of what he called ‘incontinence’ faces the problem that it does not allow for an agent to judge, all-out, that they ought to do one thing but then act differently; yet this seems to be a prime example of weak-willed action (Bratman 1979; Mele 2012; Stocker 1979, p. 738; Stroud 2014, § 3.1; Tenenbaum 1999). For example, it is conceivable that a dieter judges, all-out, that they must decline dessert, yet still fails to do it. What, then, is going on in such cases?

Here is Bratman (1979)’s response. First, he notes that a judgement about what is best all-out is not necessary for action. After all, in situations where an agent has conflicting prima facie beliefs—situations that Davidson regarded as counterexamples to Aristotelian accounts of practical reasoning—they may rationally act in accordance with one of these beliefs, without having to reach an overall conclusion about what would be best, all-out. On Bratman’s view, an agent can act on a prima facie judgement, even when they have a second, all-out judgement that they violate in their action. This would be a case of weakness of will. Weak-willed actions thus happen in accordance with one of the agent’s prima facie beliefs yet against their all-out judgement.

On Bratman’s view, then, an agent is weak-willed in 𝜙ing on their judgement or commitment to 𝜙 for reason r iff they judge that r is actually overridden by their other evaluative commitments in favour of some alternative, 𝜓ing (Bratman 1979, pp. 167–8). For example, the dieter is weak-willed in consuming the sugary dessert because they find it tasty although they judge that this reason is overridden by their commitment to eating healthily.

Bratman’s and Davidson’s proposals are similar in that, on both accounts, the weak-willed agent fails to reason correctly. However, there are also two crucial differences. First, for Davidson but not for Bratman, it is impossible for the agent to act against their all-out judgement. Second, unlike Bratman, Davidson requires his agent to act on a corresponding all-out judgement. This all-out judgement is a judgement in his earlier work and an intention in his later writing.³³ For Bratman the weak-willed person acts in accordance with a mental state, but this state is of “a distinct psychological category” that is different from all-out, prima facie, or other evaluative judgements (Bratman 1979, p. 163).

Let us now turn to Mele’s account of weak-willed action. As he defines it, an action A is strictly akratic (Mele 1987, p. 7)³⁴

³³ Cf. Section 3.3.
³⁴ In his earlier work (e.g. Mele 1987), Mele uses the terms ‘akrasia’ and ‘incontinence’; in his later writings (such as Mele 2012), he uses ‘weakness of will’. Mele’s most recent account of what he now calls “core akratic action” differs from his earlier one only in “some fine-tuning” (Mele 2012, p. 14; cf. Mele 2022). Therefore, we follow Mele in treating his ‘akrasia’, ‘incontinence’, and ‘weakness of (the) will’ as interchangeable.


“if and only if it is performed intentionally and freely and, at the time at which it is performed, its agent consciously holds a judgment to the effect that there is good and sufficient reason for his not performing an A at that time.”

On the face of it, this definition does not differ much from Davidson’s or Bratman’s, either. But unlike Davidson, Mele allows for weak-willed action when the agent settles on 𝜙ing, perhaps by intending or deciding to 𝜙, yet fails to 𝜙 (Mele [1995] 2003, pp. 71–4). Recall that, for Davidson and Bratman, the weak-willed person fails to act in accordance with some evaluative judgement, perhaps because they fail to form the corresponding intention or all-out judgement. In contrast, Mele’s weak-willed agent may also fail to act in accordance with a decision or intention. This raises the question of how that should be psychologically possible, i.e. how an agent can act differently than they decided or intended to do.

To answer this question, Mele turns to empirical evidence from the behavioural sciences and to discounting theory in particular. We shall now take a closer look at how Mele interprets and builds on delay discounting theory in his philosophical work. The behavioural evidence will be our topic in part II of this book. Bratman discusses delay discounting theory as well, and we shall turn to his comments further below.

Let us first consider Mele’s answer to the question of how a weak-willed action can be performed against the agent’s decision or intention. His reply builds on work by the psychiatrist George Ainslie (1975, 1982) in all but the last of the following claims. Mele (1987, p. 92; see also p. 85 and Mele 2012, pp. 98–9) argues that weak-willed actions can be “adequately explained in terms of (1) the perceived proximity of the rewards [ . . . ]; (2) the agent’s level of motivation to perform the continent alternative and his earlier level of motivation to perform the akratic alternative; (3) the agent’s failure to make an effective attempt at self-control; and (4) the agent’s attentional condition”. Let us unpack these four aspects.

(1) is a claim about rewards. ‘Reward’ is a term extensively used in behavioural and cognitive science. Roughly, rewards are enjoyable, pleasant, or desired objects, events, or experiences (Schultz 2015). In everyday English, ‘reward’ typically denotes something given for an effort or achievement; this is narrower than the scientific usage. For example, for a dieter a dessert could be a reward although it need not be a reward for anything. (1) states that the perceived proximity of a reward partially explains weak-willed action. That is, how close a reward like the enjoyment of sweet desserts appears to the dieter partially explains their behaviour. This is an idea that we


have encountered in Plato’s Socrates³⁵ and that is at the heart of delay discounting theory.³⁶

(2) states that the weak-willed action is partially explained by the agent’s levels of motivation to perform one of several optional actions. Salient options in the case at hand are what Mele calls “the continent alternative” and “the acratic alternative” (1987, p. 92). The former would be, in our dieter example, the option of abstaining from dessert, the latter would be having it. What Mele calls an agent’s “level of motivation” is the agent’s relative preference between several options (1987, pp. 84–6). If an agent’s desire to A now has greater motivational strength than their desire to B now, then, other things being equal, the agent will A rather than B at that time (Mele 1987, pp. 11–15, ch. 3). For example, the dieter seems to be more motivated to eat dessert than to abstain, i.e. they prefer eating dessert to abstaining.

(3) concerns precommitment devices and self-control techniques that may help the agent to overcome weakness of will. As an example for a successful self-control strategy, Mele mentions the means taken by the mythical Odysseus during his voyage past the Sirens (cf. Elster [1979] 2013). He has himself tied to his ship’s mast so that he can listen to the Sirens’ seductive songs without being able to approach them and consequently fall into their deadly trap. Mele suggests in (3) that failing to employ similar techniques partially explains why weak-willed action may occur. In our mundane lives, precommitments may concern the siren calls of foods, recreational drugs, or activities of procrastination that we try to contain by, say, not buying and taking home goods we do not wish to consume, shutting off our phones or wifi to fight distraction, etc.³⁷

Lastly, (4) builds on empirical evidence by developmental psychologist Mischel and colleagues on delay of gratification in children (Mischel 1973; Mischel and Ebbesen 1970; Mischel, Ebbesen, and Raskoff Zeiss 1972; Mischel and Moore 1973): the closer in time a delayed reward, the more children attend to its arousing features (Mele 1987, p. 90). Directing one’s attention away from or towards a temptation can therefore partially explain whether and why a weak-willed action happens.

Taken together, then, Mele takes (1)–(4) to explain weak-willed behaviour even in cases where the agent acts against an intention or decision: the tempting reward is too close, their level of motivation to resist is now smaller than their earlier level of motivation to indulge, they fail to employ a self-control strategy like Odysseus, and they fail to direct their attention away from the temptation.

One may wonder what these four possibilities come down to in detail. For example, on the assumption that the tempting reward is too close, how does it affect the agent? Does the perceived proximity lead the agent to change their better

³⁵ Cf. Section 3.1.1.
³⁶ Cf. Section 5.1.
³⁷ Cf. Chapter 8.


judgement or does it lead them to give up their intention? We return to these questions when we know more about the delay discounting model.³⁸

In the remainder of this section, we examine how Bratman uses discounting theory in his philosophical work.³⁹ Like Mele, he relies primarily on Ainslie’s research (Ainslie 1992) that he discusses in some detail in Bratman 1999a, ch. 3.⁴⁰

Bratman believes that an agent, weak-willed or not, chooses and intends to pursue future as well as present options. This is a key idea of his “planning theory of intentions” according to which an agent intends to do something if they plan to do it (Bratman [1987] 1999b). On this view, intentions are a unique kind of mental state distinct from beliefs or desires. This contrasts with Davidsonian causalism on which intentions reduce to pairs of beliefs and desires.⁴¹

Recall that, on Bratman’s view, an agent is weak-willed in 𝜙ing on their judgement or commitment to 𝜙 for reason r iff they judge that r is actually overridden by their other evaluative commitments in favour of some alternative action (Bratman 1979, pp. 167–8). In particular, the weak-willed agent may fail to choose a series of future actions or a policy like the rule to abstain from sugary foods (Bratman 1999a, pp. 51–2). In other words, they fail to form a certain intention about future conduct. Instead, they yield to a temporary change of desires.⁴² For example, the dieter initially desires declining dessert more than having it, then these desires switch: they now desire eating dessert more than abstaining.

How does this change of desires come about? Bratman follows Ainslie in answering that the time remaining until the agent can consume or enjoy one of two rewards affects their desires: when the delay is greater, the agent appears to desire the first reward more than the second one; when the delay is smaller, they seem to desire the second reward more than the first. For example, earlier during the day the dieter desires declining dessert more than having it but when dinnertime comes around, they desire indulging more than abstaining. Note that it is the temporal delay itself that crucially affects the desires, not the uncertainty it involves or the mouth-watering features of the dessert, say.

On a delay discounting model, this switch in the agent’s desires over time is in turn due to the discount rate at which the reward is devalued. Bratman (1999a, p. 38) maintains that, if we assume that both rewards are discounted in a specific

³⁸ Cf. Section 5.3.
³⁹ For his closely related account of rationality, see Bratman (2018), especially Ch. 7. We discuss rationality in Chapter 7 below.
⁴⁰ In this chapter, he is mostly concerned with criticizing Ainslie’s account of weak-willed and strong-willed choices, a topic we set aside. We focus on the question of how Bratman interprets Ainslie’s suggestions as an account of weak-willed action.
⁴¹ Cf. Section 3.3.
⁴² Bratman also calls this a “temporary preference change”, using terminology from Ainslie and, more generally, expected utility theory (cf. Section 4.1). Here, I continue to use ‘desire’ for simplicity’s sake because Bratman (1999a) seems to regard desires and preferences as equivalent.


way,⁴³ then the desires about the two options relative to each other may switch over time. On this assumption, the shorter the delay to a reward, the more steeply the agent discounts it per unit of time. As a consequence, the agent may desire a reward that is very near in time more than a reward that is more delayed (such as, say, dietary success). This, in turn, may cause weak-willed behaviour.

To sum up, both Mele and Bratman believe that delay discounting may partially explain weak-willed behaviour. More specifically, it may explain why an agent initially desires one option more than another (in Bratman’s words) or is more motivated to perform one action than another (as Mele puts it), but then reverses their choice over time. According to Mele, the agent is weak-willed because they intentionally and freely act against a judgement that they better do otherwise. According to Bratman, they are weak-willed in that they act against a judgement or commitment that overrides their reasons for acting as they do.
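Anticipating the models of Chapter 5, the reversal just described can be illustrated with a minimal numerical sketch. The hyperbolic form V = A/(1 + kD) and all amounts, delays, and the discount rate k below are illustrative assumptions of mine, not values from Ainslie, Bratman, or Mele:

```python
# A minimal sketch of a diachronic preference reversal under hyperbolic
# discounting. All numbers are invented for illustration.

def hyperbolic_value(amount, delay, k=1.0):
    """Present value of a reward under hyperbolic discounting: A / (1 + k * D)."""
    return amount / (1 + k * delay)

SOONER = (4.0, 10.0)   # dessert enjoyment: amount 4, available at time t = 10
LATER = (6.0, 12.0)    # dietary success: amount 6, available at time t = 12

for t in (0.0, 9.5):   # early in the day vs. just before dinner
    v_sooner = hyperbolic_value(SOONER[0], SOONER[1] - t)
    v_later = hyperbolic_value(LATER[0], LATER[1] - t)
    choice = "sooner (dessert)" if v_sooner > v_later else "later (diet)"
    print(f"t = {t}: V(sooner) = {v_sooner:.2f}, V(later) = {v_later:.2f} "
          f"-> prefers {choice}")

# At t = 0 the later reward is valued more (0.36 vs 0.46), but at t = 9.5
# the ranking has flipped (2.67 vs 1.71): the dieter now prefers dessert.
```

With a single exponential discount rate, by contrast, the ranking would never flip; this difference between the two model families is taken up in Section 5.4.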

3.5 Holton

Richard Holton is a British philosopher specializing in moral psychology, action theory, and philosophy of language. Here, we discuss his account of weakness of the will. Afterwards we compare his proposal with the approach taken by Bratman, by Mele, and in Davidson’s early work.⁴⁴

In a nutshell, Holton suggests that a weak-willed agent over-readily revises a resolution⁴⁵ in the face of an inclination the resolution was initially designed to defeat. Holton (2009, p. 10) characterizes a resolution as a pair of an intention for action and of a second-order intention to persist in that intention. Over-ready revision of both intentions is required for weakness of the will.⁴⁶

A typical case of weakness of the will might thus be the following: knowing that they will be very tempted to eat it, a dieter forms a resolution not to have dessert after dinner. They do so because they hope that forming this resolution will help them to resist the temptation. However, when dessert is served, they revise and abandon their resolution although they do not think that there are any new or stronger reasons for this. They merely give in to the temptation they had previously resolved to resist.

⁴³ i.e. with the same hyperbolic discount rate, cf. Section 5.1.
⁴⁴ Cf. Sections 3.3–3.4.
⁴⁵ In his earlier work (Holton 1999), he uses “intention” rather than “resolution” but that changes from Holton (2003) onwards.
⁴⁶ It is unlikely that revision of the first-order intention only is sufficient, otherwise there would be no need for the second, higher-order intention. It is also unlikely that revision of the second-order intention alone would be sufficient. For an agent who abandons that intention only but continues to have the first-order one does not seem to be weak-willed. Thus, revision of both intentions seems to be required, a view that Holton has confirmed in personal communication.


Whether an agent is weak-willed or not thus crucially depends on whether the revision of their resolution was “over-ready” or not. But what does “over-ready” mean, exactly? Holton claims that the vagueness of this term merely reflects the vagueness of the subject matter in question. It seems difficult to decide, from the agent’s as well as from an external point of view, whether a revision of a resolution occurs over-readily or not. Holton seems to assume that over-readily revising a resolution amounts to irrationally⁴⁷ revising it (Holton 1999, pp. 247–8). Following Bratman (1999a, p. 68), he interprets such an irrational revision as one following a reconsideration that displays tendencies it is not reasonable to have. He gives some rough guidelines for when it is reasonable to have a tendency to reconsider an intention and when it is not (Holton 1999, p. 249). For instance, it is reasonable to have a tendency to reconsider one’s intention if one thinks that the circumstances are no longer such that the intention serves its purpose.

Crucially, Holton (2009, pp. 121–5) points out that agents who are not weak-willed need not be stubborn. Stubborn agents stick to their resolutions without even thinking of them again (cf. Aristotle, Nicomachean Ethics 1151b5–18). But it is possible to think about and even re-think one’s resolution without being weak-willed. For example, making oneself aware of or monitoring one’s resolution is not weakness of will. It is thus possible to “rehearse” one’s resolution without revising it.

Note also that it is possible to be weak-willed without acting against one’s resolution. Merely revising (not rehearsing) it may be sufficient for weakness of will. For example, imagine that the dieter resolves to decline dessert but then revises their resolution in the face of temptation and without good reason. On Holton’s account, they may be weak-willed even if they end up not eating the dessert after all.

Furthermore, Holton (1999, p. 261) distinguishes weakness of will from compulsion.⁴⁸ To that end, he maintains that an agent is weak-willed only if it is in their “power to desist from the revision” of their resolution, a power that is, supposedly, lacking in compulsion. However, he does not provide us with greater details about this power. Plausibly, it is some kind of willpower.

To briefly sum up, Holton’s account of weakness of the will can be stated as follows: an agent is weak-willed iff they over-readily revise a resolution in the face of an inclination the resolution was initially designed to defeat although it is in their power to desist from the revision.

Let us now compare this suggestion with what I shall call ‘the Davidsonian approach’, i.e. the one pursued in Davidson’s earlier work⁴⁹ as well as by Mele and Bratman.⁵⁰ Although individual positions within this approach differ from one another, they all characterize weakness of will as, roughly, failure to act in accordance with one’s better judgement.

⁴⁷ Cf. Chapter 7.

⁴⁸ Cf. Section 2.2.

⁴⁹ Cf. Section 3.3.

⁵⁰ Cf. Section 3.4.


We can contrast the Davidsonian proposal with Holton’s in cases where the agent is weak-willed according to one account but not according to the other. Let us refer to such examples as ‘mixed cases’. Examining which of the two approaches provides a better account of mixed cases and of our intuitions about them might help advance the stand-off. There are two categories of mixed cases, with two variants each.

In the first category, the agent fails to act in accordance with their better judgement but does not over-readily revise a resolution. That is, the agent is weak-willed on a Davidsonian but not on Holton’s account.⁵¹ This can happen in either of two ways. Either the agent judges that they ought to do something, resolves to act accordingly and does not revise this resolution, yet ends up acting against their better judgement after all. For instance, the dieter might judge that they better skip dessert, resolve to do so, retain this resolution—but then indulge anyway. Alternatively, the agent judges that they ought to act a certain way but resolves to do something else, and then ends up acting against their better judgement but in accordance with their resolution. For instance, the dieter might judge that they better not have dessert but resolve to and eat it. Or consider Christabel, a Victorian lady who judges that she ought not have an extramarital affair but then resolves to and does proceed with it anyway (May and Holton 2012, p. 348).

The second category of mixed cases comprises examples where the agent over-readily revises a resolution but does not act against their better judgement. In other words, the agent is weak-willed on Holton’s but not on a Davidsonian view. Again, this can happen in two ways. Either both the agent’s better judgement and their resolution require them to 𝜙; they revise their resolution but end up 𝜙-ing nonetheless. For example, the dieter may judge that they ought to abstain from eating dessert and resolve to do so. However, when temptation strikes they over-readily revise this resolution even though they end up declining dessert. Or the agent may judge that they ought to do one thing but resolve to do something else, then over-readily revise their resolution and end up acting in accordance with their better judgement. As an example, we can imagine that the dieter judges that they ought to skip dessert but resolves to have it anyway, and then over-readily revises their resolution and declines dessert after all. Another example is that of a boy who judges that he ought to obey his mother and not play tackle football; he resolves to play anyway, then he loses his nerve, revises the resolution, and fails to show up for the game (May and Holton 2012; Mele 1987, 2010).

For each of these four kinds of mixed cases, we can ask whether the Davidsonian or Holton’s approach provides the better account of weakness of the will.

⁵¹ However, Holton claims that an agent is akratic in such cases. That is, he distinguishes weakness of will from akrasia. We return to this suggestion below.


If the reader’s reactions are representative of the majority of people, then they will have found that answers to this question are hard to find. Various studies with laypeople have yielded mixed results (May and Holton 2012; Mele 2010). This issue may be partially due to the fact that, in general, intuitions appear to depend crucially on details that are supposedly not relevant or at least not decisive for the question at hand. In particular, intuitions about a mixed case seem to be influenced by factors largely irrelevant to whether it is an instance of weakness of the will or not.

For one thing, whether participants in empirical studies judge that someone is weak-willed in performing a certain action depends crucially on the valence of that action (May and Holton 2012, study 3; replicated by Attie and Knobe 2017). For example, imagine that Phil resolves to stay home and study but then goes out with his friends. Participants are more likely to say that Phil is weak-willed when he and his friends go out to get drunk and pick fights with local immigrant kids than when they have a pizza and watch a movie. This finding might be due to a general tendency for valence to affect a range of judgements about seemingly unrelated questions. For example, participants are more likely to judge that an action is intentional when it has a bad side-effect than when it has a good side-effect (‘Knobe effect’, Knobe 2003a,b).

Overall, we may tentatively conclude that neither philosophers nor laypeople have clear intuitions about mixed cases. We therefore cannot (yet) decide whether weakness of will is a failure to act in accordance with one’s better judgement, or over-readily reconsidering one’s resolution in the face of temptation.

At this point, a third option might seem appealing. Perhaps ‘weakness of will’ is a homonym. That is, we use it to refer to different phenomena, just as we use ‘bat’ to refer to a flying mammal or an instrument used to hit a ball. Similarly, ‘weakness of will’ may refer to over-readily revising one’s resolution, on the one hand. On the other hand, it may refer to acting against one’s better judgement. We may also suspect that weakness of will is not a natural kind, i.e. that there is no collective or grouping that corresponds in any meaningful way to a structure of the natural world (Bird and Tobin 2022). In this vein, Holton (1999) suggests that his account is one of weakness of will and the Davidsonian one is an account of akrasia. On this view we would, say, call the dieter ‘akratic’ when they judge that they ought not have dessert, resolve to have it anyway and then follow through with their resolution, but we would also say that they are not weak-willed. This might be feasible in philosophical discussion but is likely to strike us as impractical in ordinary language.

Thus, perhaps, the ordinary notion of weakness of will is a prototype or cluster concept (May and Holton 2012). Such a concept has a probabilistic structure: something falls under that concept if it has a sufficient number of properties encoded by the concept’s constituents (Hampton 2000; Margolis and Laurence 2019; Parsons 1973; cf. Rosch 1978; Wittgenstein [1953] 2009). For example, orange, date, coconut, tomato, and olive all fall under the concept fruit, yet they may share different constituents with it (Rosch and Mervis 1975).
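The probabilistic structure of such a cluster concept can be made concrete with a small sketch. Everything here is invented for illustration: the feature list, the threshold, and the feature assignments are not drawn from the cited literature, which does not commit to any particular encoding.

```python
# A toy "cluster concept": membership requires sharing a sufficient number
# of features with the concept, not possessing any single fixed essence.
# Features and threshold are purely illustrative assumptions.

FRUIT_FEATURES = {"sweet", "grows_on_plant", "has_seeds", "eaten_raw", "juicy"}

def falls_under(candidate: set[str], concept: set[str], threshold: int = 3) -> bool:
    """A candidate falls under the concept iff it shares at least
    `threshold` features with it; different members may share
    different subsets, as the text notes for orange, tomato, etc."""
    return len(candidate & concept) >= threshold

tomato = {"grows_on_plant", "has_seeds", "eaten_raw", "juicy"}  # shares 4 features
date = {"sweet", "grows_on_plant", "has_seeds"}                 # shares 3 features

print(falls_under(tomato, FRUIT_FEATURES))  # True
print(falls_under(date, FRUIT_FEATURES))    # True, via a different subset
```

On this picture, ‘weak-willed’ could apply both to over-ready revisers and to akratic agents because each shares enough, though not the same, constituents with the concept.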


Similarly, revising one’s resolution and acting against one’s better judgement strike most of us as weak-willed, yet they may share different constituents with the concept weakness of will. Rorty (1980) has developed this idea in greater detail, specifying numerous respects in which we may wish to call an agent ‘weak-willed’. Specifically, we can think of multiple steps that an agent has to take in order to act well, and each failure to take one of these steps may be an instance of weakness of the will (cf. Kalis et al. 2008). First, an agent may be self-deceived about their beliefs concerning appropriate aims, or they may have misguided aims. Second, they may fail to form the commitments corresponding to their aims. Third, they may misinterpret a particular situation. Fourth, they may fail to form the appropriate intention, resolution, or decision. Fifth, they may fail to act appropriately. Almost all accounts we have considered so far focus on the fourth or fifth step in this framework, yet, as Rorty indicates, there may be more.

To sum up, on Holton’s account, weakness of will is over-ready revision of a resolution in the face of a temptation the resolution was supposed to defeat. This may appear puzzling: after all, why would the agent act against their very own resolution? Presumably, the weak-willed agent also experiences a conflict between the temptation and their resolution, and they fail by their own lights in that they revise the resolution without good reason. Holton’s view thus accounts for the characteristic features of weakness of will.⁵² When trying to determine how plausible this suggestion is, especially when compared with others, we have discovered that neither philosophical nor lay terminology is unified and clear. What to do about this issue is a question we leave for future research. The remainder of this book may provide new insights that may bear on it.

⁵² Cf. Section 2.2.


PART II

SCIENCE

Psychology, behavioural economics, cognitive science, and neuroscience are behavioural sciences. They seek an understanding of action: how human and non-human animals behave and why they do so. Phenomena we call ‘weakness of will’, ‘failure of self-control’, ‘impulsivity’, or the like have long been an object of investigation for the behavioural sciences, which employ empirical methods to describe and explain these phenomena. This book focuses on those strands of this research that use delay discounting theories. These include rational choice models in economics and other social sciences but also computational approaches in psychology, neuroscience, and psychiatry. For one thing, the discounting framework aligns with the dual-process model that stipulates two kinds of processes to explain not only weakness of will in particular but also human decision-making and actions more broadly (Metcalfe and Mischel 1999).¹

According to discounting theories, the value of a reward changes with its temporal delay. Delay discounting is therefore also known as ‘time discounting’ or ‘temporal discounting’. The approach builds on a simple idea: typically, the further in the future some good is, the less valuable it is, and the later some bad is, the less disvaluable it is. Economists first developed this intuitive idea into an elaborate theory of delay discounting.

Economics is not, and was not, entirely a behavioural science. Some strands of economics, sometimes labelled ‘positive economics’, describe and explain economic phenomena (Samuelson and Nordhaus 2010, pp. 5–6). That is, they focus on the facts: what is the case. Some research in positive economics uses empirical methods, notably behavioural economics. These approaches fall within the camp of behavioural science. But other strands of economics, sometimes labelled ‘normative economics’, prescribe and assess economic views. That is, they focus on the norms: what ought to be the case (Caplin and Schotter 2008). For example, welfare economics investigates what influences the well-being of nations.

¹ Cf. Sections 2.2, 6.3.


These approaches place less emphasis on empirical methods, and they do not fall within the camp of behavioural science. Of course, as with many distinctions, that between positive and normative economics is not exact, and how, or indeed whether, it can and should be drawn is a matter of debate that we need not enter. The distinction is relevant to us, though, insofar as delay discounting theory was initially developed as a normative suggestion. As such, it is part of normative economics and specifies prescriptive claims. For instance, welfare economists employ discounting theory to determine the extent to which we ought to take future welfare into account in present-day decisions about, say, climate change policies (Broome 2012; Davidson 2015; Greaves 2017). These discounting theories are not our concern in the book.²

But delay discounting theory has also been widely used in empirical investigations to describe actual behaviour. As these descriptive approaches build on the normative account, they inherit some of its features. Importantly, the normative account of discounting is itself based on further background assumptions which are consequently presupposed in descriptive and empirical research. In much of this research, the conceptual background assumptions are not made explicit because they are so widely accepted. This is common practice in all research. For example, philosophers writing about weakness of will do not provide detailed definitions of ‘agent’ or ‘judgement’ in every paper they write. However, when philosophers rely on evidence from empirical research or when they invoke discounting models, they might wish to take into account the background assumptions on which the models and the evidence are based. For philosophical theory does not typically accept or presuppose them. Therefore, Chapter 4 makes these assumptions explicit and explains them in some detail. It thereby sets the stage on which delay discounting theories enter in Chapter 5.

The present Part II of the monograph confines itself to those discounting models that have been discussed in the philosophical literature. The subsequent Part III introduces, inter alia, more recent discounting models from the economic literature that can overcome at least some of the issues raised towards the end of the present part.

² But see Section 4.2.


4 Agency in Descriptive Research

Delay discounting theories are a class of models initially developed within economic theory. A classic statement is due to Samuelson (1937). Like many authors before and after him, he developed his theory within a specific framework of valuation and behaviour. In other words, delay discounting theory has been built on fundamental assumptions about preferences, actions, and values. These assumptions may strike philosophers as nontrivial. For example, one of them is, roughly, that choices align with an agent’s preferences. Moreover, these assumptions are not always made explicit, either in the economic and scientific literature or in the writings of philosophers drawing on it. Yet they have substantial implications. For example, it may be incoherent within this framework to maintain that an agent’s choice is against their preferences.

These implications are particularly relevant to research into weakness of the will. For one thing, they prevent us from describing a weak-willed decision as a choice against the agent’s preferences. Using delay discounting theories that presuppose an economic framework of agency thus constrains any account of weakness of will in a similar way. Therefore, the current chapter focuses on this framework in greater detail. Section 4.1 presents an outline; Section 4.2 considers and sets aside the view that the framework does not describe weakness of will; Section 4.3 argues that, if the framework does account for weakness of will, then the best option available is to account for it in terms of preference reversals.

4.1 An Economic Framework of Human Agency

“Much of economics exploits the principle that in the phenomena we want to understand, people are behaving in purposive fashion, aware of their value and alert to their opportunities, knowledgeable about their environment and the constraints on what they may choose, and are able to match actions with objectives over time.” (Schelling 1984, p. ix)

Homo economicus, the model of human agency Schelling describes, is familiar to most of us. It is the classic model of many economists, psychologists, and behavioural scientists.


Common sense has it that homines economici deliberate sensibly and carefully to determine what behaviour serves their self-interest best, and act accordingly. They are rational and maximize utility. As may already be apparent, some of these characteristics conflict with the possibility of weakness of the will. They are thus relevant for our purpose later in the book. We need not consider full-blown homo economicus with all their properties and I shall therefore cease to speak of them; instead we will focus on the following three key assumptions (cf. Samuelson 1937, pp. 156–7):

• An agent, when faced with several options, assigns a prudential value or utility to each of them. The agent expects to gain the value of the option that is eventually realized.
• The agent has preferences concerning these options. They prefer the option with the highest expected value. The agent’s set of preferences is subject to certain constraints that we need not worry about here.
• The agent always chooses the most preferred option, that is, the option with the highest expected value. This choice is reflected in their behaviour.

Economists tend to endorse these claims in some form or other more or less explicitly (e.g. Schelling 1984, p. ix; Becker 1976, pp. 5–8; Neumann and Morgenstern [1944] 1953, pp. 8–9, 17). Most relevant to our inquiry is that value (utility),¹ preference, and choice are regarded as intimately linked. This has at least two important implications.

First, values and preferences can be inferred from observations of behaviour: an agent reveals their preferences and thus their valuations in their choices. For somewhat more complicated cases, such as indifference, neat methods have been devised, but we can set these aside here. This makes the framework highly attractive to empirical research that measures behaviour. It builds a bridge between the empirical sciences and economic concepts such as utility functions. The framework is thus, on the one hand, extremely powerful.

On the other hand, though, the framework and any approach within it cannot describe weakness of will as a phenomenon where there is no alignment of an agent’s actions, preferences, and expected values. In particular, it is therefore not suited to model philosophical accounts that understand weakness of will as an action against or a failure to persist in one’s preferences or values. Some of the most prominent contemporary accounts in philosophy, however, belong to this category.² They regard weakness of will, respectively, as a failure to intend in accordance with one’s better judgement (Davidson [1970] 1980c,d; Wedgwood 2013a), or as an over-ready revision of an intention in the face of a countervailing inclination (Holton 1999).

¹ I shall use ‘prudential value’ and ‘utility’ interchangeably.

² Cf. Chapter 3.


While the better judgement not acted upon or the intention revised over-readily might be or become observable by non-behaviourist methods such as neuroimaging, the very phenomenon of weakness of will itself escapes description within the economic framework sketched above. In this vein, Watson (1977, p. 321) maintains that “the existence of weakness of will, or generally the divergence between evaluation and motivation, reveals one kind of limitation on decision theory”, which is committed to the three axioms above.

A further challenge is to identify cases of weakness of the will and to distinguish them from others. For one thing, the framework cannot account for the psychology and phenomenology of weakness of will, which philosophers have assigned more (Cordner 1985) or less (Holton 2009, p. 82) weight to. As we shall see,³ agents are supposed to make their choices about delayed rewards in accordance with how they discount their values, regardless of whether they are weak-willed or not. Yet not only philosophers have found this unsatisfying. For instance, Schelling mocks the proposal of describing weakness of will by delay discounting theory as an unrealistic account of human psychology: the weak-willed agent “would have to be someone whose time discount is 100 percent per hour or per minute, compounding to an annual rate too large for my calculator. It is not clear whether the straight fellow who resolves to run three miles before breakfast enjoys such a far horizon that he can appreciate the benefits of elderly good health” (Schelling 1984, p. 63). In other words, Schelling worries that (discounted) value reflected in behaviour does not (fully) account for weakness or strength of will at all. Neither do behavioural manifestations of weakness of will (or overcoming it) reflect the feelings of struggle or conflict, or of failure (or success) by one’s own lights.

A second important implication of the framework’s intimate link between value and behaviour is that the relevant values are to be understood from the agent’s point of view. Let us use the terms ‘prudential value(s)’ and ‘utility’ for what the agent takes to make their life go well, rather than what actually makes it so. Let us reserve the term ‘welfare’ for the latter. The framework is only concerned with utility and values. These need not be purely egocentric; they may reflect the vast range of what human beings take to be of any worth at all, including moral, social, or aesthetic considerations. In addition, they can accommodate individual differences: while one agent favours friendship, another might be more heavily invested in their pursuit of a craft.

Distinguishing prudential value or utility from welfare allows for the possibility that an agent is mistaken about what is best for them; it allows for their own values and what they take them to be to diverge. Whether this is possible or ever actually the case is a question we need not address here. Furthermore, we need not be concerned with the question of how welfare is to be defined. We are merely taking notice of this approach in order to avoid confusing it with our own endeavour.

³ In Section 5.1.


For some authors have suggested that welfare is the satisfaction of one’s preferences or the fulfilment of one’s desires (Crisp 2008; Parfit [1984] 1987, App. I). If this view is married with the claim that a person’s welfare is just identical with their utility or values, then the agent’s behaviour, whatever it may be, can be understood as promoting their actual welfare. This is not a view we are concerned with here; we shall thus follow Samuelson (1937, p. 161) in highlighting that “any connection between utility as discussed here [i.e. in delay discounting theory] and any welfare concept is disavowed.”

This has an important consequence for approaches that take the framework as normative: if the framework is only concerned with what an agent takes to be their values, its possible prescriptive power is limited relative to the agent’s assumptions. That is, all imperatives or rules of guidance it provides are instrumental to whatever the agent takes to be their values. The next section examines this approach in greater detail.
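To fix ideas, the three key assumptions stated at the beginning of this section can be put into a minimal sketch. This is only one illustrative formalization; the options and utility values are invented, and nothing in the economic literature prescribes this particular encoding.

```python
# A minimal sketch of the framework: the agent assigns an expected value
# (utility) to each option, prefers the option with the highest expected
# value, and that preference is reflected in their choice.
# Option names and numbers are invented for illustration.

utilities = {"skip dessert": 10.0, "have dessert": 7.0}

def choose(utilities: dict[str, float]) -> str:
    """The agent always chooses the most preferred option, i.e. the one
    with the highest expected value."""
    return max(utilities, key=utilities.get)

choice = choose(utilities)
print(choice)  # "skip dessert"

# Revealed preference: from the observed choice alone, an observer is
# entitled to infer that the chosen option carried the highest utility.
assert utilities[choice] == max(utilities.values())
```

Note how the framework’s limitation shows up immediately: within this sketch there is simply no way to represent an agent who chooses “have dessert” while preferring to skip it.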

4.2 Weakness of Will as Unfree or Irrational Behaviour

In this book, we take it for granted that the framework applies universally, that is, to all agents at all times. Importantly, we shall assume that the framework applies to weak-willed agents and weak-willed actions. However, this assumption is not obviously true. The current section will therefore briefly discuss two proposals to restrict the framework, so that it does not apply to weakness of will. According to the first, the framework should be restricted to free rather than unfree agency. According to the second, it should be restricted to rational rather than irrational agency.⁴ Thus the first approach entails that weakness of will is unfree; the second entails that it is irrational.

On a side note, we need not take a stance on whether freedom or rationality are properties that apply to some agents but not others (e.g. unconscious, very young, or severely disabled persons), or whether they can apply to an agent in some situations or at some times but not others (e.g. in a fit of rage or under the influence of drugs). If the former is true, then weakness of will affects only some agents but not others, and if the latter is true, then it can affect an agent in some situations or at some times but not others. Neither of these possibilities seems prima facie implausible.

Let us begin with the first suggestion that the framework does not apply to unfree choices. Weakness of will could be classified as such a case. In other words, only free or partially free choices indicate an agent’s preferences and their expected values. Then weak-willed choices do not promote and weak-willed agents do not aim to maximize expected utility, for they are not free to do so.

⁴ We consider rationality and irrationality in greater detail in Chapter 7.


This suggestion does not seem very plausible.⁵ To begin, weak-willed agents are not unfree in the sense of duress or pressure; the weak-willed dieter is not force-fed or coerced at gunpoint to eat dessert. They also have the ability and opportunity to act other than in the weak-willed way; the dieter could decline the pudding. In addition, the weak-willed action is intentional, controlled, and flexible; succumbing to temptation is not a reflex or twitch. Finally, the weak-willed action is not determined by an irresistible psychological or mental force like madness, although some authors have suggested as much.⁶

Even though we can imagine scenarios where madness overpowers a person and makes them perform like a puppet worked by strings, these are not cases of weak-willed actions, for at least four reasons. First, a puppet directed by madness may be excused or exculpated while the weak-willed person is an appropriate target for critique and blame.⁷ Second, both the weak-willed agent themselves and third parties observing them plausibly describe their action as weak-willed rather than mad. A weak-willed action feels and looks different than mad behaviour. Third and relatedly, madness is typically entirely disadvantageous while weakness of will can be explained by some pleasurable purpose, although there is a better alternative. That is, for mindless Medea there seems to be no point at all in killing her children; the dieter, in contrast, enjoys the dessert even though this enjoyment impedes greater health benefits. Fourth, in typical, mundane examples of weakness of the will, the agent does not suffer from a mental disorder or psychiatric condition while this seems likely for madness. For example, behaviour that may strike us as mad may be due to psychosis, where patients lose touch with reality. In contrast, we could probably not diagnose weak-willed agents like the dieter with psychosis.

Philosophers also commonly delineate weakness of will from compulsion (Beebe 2013, p. 4088; Mele 2012, p. 33; Mele 2010, pp. 402–3; Aristotle, Nicomachean Ethics 1148b30), or, less commonly, regard compulsion as a pathological case of weakness of will (Holton 1999, p. 254). Yet, even if weak-willed actions were a kind of compulsive action, they would still be free, as compulsive actions are free as well.⁸

As an illustrative counterexample to the claim that weak-willed action is unfree, consider weak-willed smoking. Many smokers are weak-willed: they wish to quit but fail to do so. But smoking is not an unfree action. Although the urge to smoke might be very strong at times, it is not irresistible: orthodox Jewish smokers are able to abstain from smoking on the sabbath, as are flight attendants during long flights (Dar, Stronguin, et al. 2005; Dar, Rosen-Korakin, et al. 2010). This would be impossible if the smoker had no choice but to smoke.

In contrast, the second suggestion, which restricts the framework to rational agency, has been more popular.

⁵ For further discussion on whether weak-willed agents are free in the relevant sense, see Section 2.2.
⁶ e.g. Hare, cf. Section 3.2.
⁷ Cf. Section 2.2.
⁸ Cf. Section 2.2.


For example, Morgenstern says about the scope of expected utility theory: “Naturally, it is assumed that the individuals are accessible intellectually […]. In that sense, there is a limitation since there are certainly persons for whom this is impossible. Whether they then should be called ‘irrational’ is a matter of taste” (Morgenstern 1979, p. 180). In other words, then, expected utility theory, which relies on the three axioms stated earlier,⁹ only applies to what we may call ‘rational’ agents. Whether we do call them ‘rational’ or not is, perhaps, less a matter of taste but rather a matter of the degree to which one shares Morgenstern’s confidence that “the theory is ‘absolutely convincing’ which implies that men will act accordingly” (p. 180). We shall set this question aside here.

Let us focus on the idea that expected utility theory in particular and the more general economic framework described above apply only to at least momentarily rational or, as Morgenstern calls them, “intellectually accessible” individuals. Crucial for our purpose, then, is the claim that the framework applies only to partially or fully rational agents, and that any weak-willed person is not at all or not fully rational. There is some debate about the first part of this assumption (Dasgupta 2005; Hodgson 2012; Sen 1977) but it does not seem downright implausible. The second part of the assumption is, however, hardly contested. In fact, weakness of will is often seen as a prime example of practical irrationality. The concept itself is arguably a normative one, and some authors, notably Aquinas (Summa theologiae IIa q. 156 a. 2) and Hare (1952),¹⁰ have even treated it as having a moral component. This would make ‘weakness of will’ a so-called ‘thick’ concept (Williams [1985] 2006, pp. 140–3).

Despite the appeal such an approach seems to have, it faces at least two problems. First, one might feel uneasy about restricting the framework only to rational agents. For instance, Schelling (1984, p. 59) complains that doing so would be “casting suspicion on the entire individualistic-utilitarian foundation of neoclassical economics by adding a large fraction of the literate adult population to that already large population disqualified by infancy, senility, or incompetence from being represented in our theory”. That is, he seems to fear that restricting the framework to rational agency might limit its scope overly severely. On a side note, he thus seems to be far less optimistic than Morgenstern about people complying with expected utility theory. Accordingly, Schelling worries that excluding weakness of will from the phenomena that the framework describes will call into question its relevance and credibility.

Second, if the framework is restricted only to rational agency, then it is normative rather than descriptive. This is so, as most authors would agree, because rationality is in turn normative. Moreover, the norms and prescriptions the framework provides are, as we have noticed earlier, merely instrumental.

⁹ In Section 4.1.

¹⁰ Cf. Section 3.2.


That is, they do not inform us about what is rational simpliciter but merely about what is rationally required to promote whatever ends and subjective values we happen to have. This by itself is not a problem. Indeed, some delay discounting models, notably Samuelson’s model, have been regarded as normative in this way. However, it would prevent descriptive and empirical approaches from using delay discounting theory to model weakness of will. For it is highly questionable to make use of a normative account in descriptive or empirical research. As psychologists and behavioural economists are commonly more interested in describing or explaining weakness of will than in specifying norms of rationality prohibiting it, empirical approaches should give the framework universal application, comprising weak-willed behaviour.

As a consequence, the challenge arises to clearly distinguish irrationality from rationality within the framework, i.e. to explain what renders a given action rational or irrational. We shall not attempt to answer this question here.¹¹ Rather, we shall be concerned with one of its prerequisites: a clear understanding of how the framework can describe weakness of will. This is the topic of the following section.

4.3 Weakness of Will within the Framework

This section argues that, within the economic framework of valuation and decision-making, weakness of will is best understood as a preference reversal.

Let us begin by considering the two logically possible ways in which weakness of will can be described by the economic framework. Weakness of will involves two elements that conflict with each other.¹² Either we use one description from our framework for both elements or we use two descriptions, one for each element. For illustrative purposes, consider our dieter again, a commonplace example of weakness of the will. They display two sets of behaviours that appear incongruous with each other: on the one hand, they announce that they are planning to skip dessert, they restrict their calorie intake, etc. They endorse dietary norms and rules. On the other hand, they do not do what they announced they would do, they tell us that they want to indulge, etc. They violate their own dietary rules.

Within our framework, it seems that we either have to accept that the two seemingly contrariwise sets of behaviours do in fact reflect the agent’s seemingly contrariwise valuations and preferences. Alternatively, we could disregard one of the two sets and argue that it does not reflect the dieter’s preferences.

¹¹ We discuss it in Chapter 7.

¹² Cf. Section 2.2.


The latter option amounts to restricting the framework to just one of the two sets of behaviours: it either denies that indulging reflects the agent’s valuation and preferences. They might not actually wish to indulge at all, ever. Alternatively, attempts to abstain may not reflect the agent’s true colours. Maybe the dieter does not really, at any time, wish to skip dessert at all. Either way, this approach faces the challenge to explain why the disregarded behaviour does arise, and how it should be understood. Readily available suggestions have been mentioned above:¹³ compulsivity, momentary madness, etc. Discounting utterances that favour quitting could be understood, say, as mere talking in inverted commas (Hare 1952, pp. 124–6, 163–5). Maybe the dieter expresses their partner’s wish, an expectation from society, or their doctor’s advice. They might even succeed in deceiving themselves about their own preferences (Wolf [1985] 1999). More problematic and difficult is to explain, first, why one set of actions should be treated as not revealing the agent’s true preferences, whilst the other should reflect their true colours, second, which of the two patterns of behaviour should be identified as the genuine one, and, lastly, how this in turn could be justified without taking a stance that could be criticized as paternalistic. Because I do not think that these difficulties can be overcome, I shall set this approach aside here and focus on the alternative one.

On the alternative view, we accept that both of the seemingly contrariwise sets of behaviours do in fact reflect the agent’s valuations and preferences. The question thus arises how they can both be ascribed to the agent. To my best knowledge, there are three plausible options:

1. The agent is in some way or other divided, with different parts representing different behavioural patterns, preferences, and values. Both parts are present at the same moment in time; accordingly, we could call this a ‘synchronic’¹⁴ division. For instance, it appears that the dieter is divided into two parts: whilst they help themselves to some dessert at dinner, they at the same time say that they ought to not do so. Simultaneously, one part of their mind seduces them to give in to temptation, the other scolds them for doing so.

2. The dieter repeatedly reverses their preferences, values, and course of action. Effectively, they are thus divided again—but over time: some of the agent’s time slices display the indulging behaviour, others the abstaining behaviour. We might call this a ‘diachronic’¹⁵ division. For example, imagine that the dieter—at times—genuinely prefers indulging over abstaining, and—at other times—genuinely prefers abstaining over indulging. Assume that, on some mornings on their way to work, the dieter stops at a little deli and purchases a freshly baked chocolate chip cookie. However, on some other mornings they refrain from doing so. The time, place, circumstances, even the dieter’s mood might be the same—still, their preferences change. Note that the dieter is not indifferent between eating the cookie and not eating it. On the contrary, they always have a clear opinion on the question. It is just that sometimes their opinion takes one form, and sometimes another.

3. The agent has a first-order preference to indulge but a second-order preference to prefer to not indulge (Jeffrey 1983). For instance, the dieter prefers having dessert over skipping it, yet also prefers to not have this preference.

Now I shall argue that the second of these three options is superior to its two rivals. Consider the last option first. This proposal understands the dieter’s case as a curious combination of preferences (Jeffrey 1983, pp. 214–27): when given a choice between indulging and abstaining, the agent prefers the former. At the same time, they have the meta-preference to prefer to abstain. We can illustrate this idea as follows. Imagine that the dieter is offered a reliable and harmless pill that changes, from the moment it is swallowed, their first-order preference: prior to taking the pill, they prefer indulging over abstaining. Afterwards, they prefer abstaining to indulging. Before taking the pill, they always accept dessert when it is offered to them. Afterwards, they always decline it. Now, they prefer to take the pill rather than not to, or to take it instead of a pill that has some other effect. When given a choice between dessert and the pill, the agent prefers the dessert (Jeffrey 1983, p. 223¹⁶).

Let us set aside worries about second-order preferences more generally, and objections that Jeffrey’s proposal does not capture weakness of will (Meacham and Weisberg 2011; Mele 1992). In addition, it cannot adequately describe the dieter as weak-willed within our framework. For one thing, the notion of preference as Jeffrey uses it to describe the dieter’s case is no longer the one initially introduced in our behavioural framework. Recall that we are interested in how a case of weakness of will can be described within this framework, i.e. based on the assumption that an agent prefers and chooses the option with the highest expected value. Jeffrey’s conception implicitly rejects this assumption. For, Jeffrey (1983, p. 216) states that “in case of conflict, pref [that is, “preference in the technical or regimented sense”] is shown by the outcome, which need not be evident in action; e.g. because, of two propositions, that which is preferred true may not be in the agent’s power to make true”. In other words, if an agent has conflicting preferences, the chosen option may not be the preferred one, maybe because the agent might not be able to realize the preferred option.

¹³ Section 4.2.
¹⁴ Cf. Section 2.1.
¹⁵ Cf. Section 2.1.

¹⁶ I take it that, in ordering (2), the top item is to be read “S̄” rather than “S”.


But our framework simply does not allow for this: it assumes that, if an agent chooses an option over an alternative, then they prefer it. If, as Jeffrey suggests, the agent is not able to choose the alternative, then it is not really an option, and the agent acts in an unfree manner. In this case, the framework does not apply to weakness of will, as discussed above. Therefore, I shall not further pursue the third way in which we could describe the weak-willed agent within our economic framework of human agency.

Thus, we are left with the first and second options. Both options regard a weak-willed agent as divided. The ontological claim that an agent is divided—over time or at one particular instance in time—can be spelled out in a wide variety of ways. According to one suggestion, one brain system within the weak-willed agent gives priority to readily available rewards like eating dessert whilst another brain system favours long-term goals like a prolonged, healthy life (Kahneman [2011] 2012; Levy 2011). According to another suggestion, the dieter’s appetitive part of the mind drives them to indulge whereas the deliberative part tries to hold them back (cf. Aristotle, De Anima iii 9–11). Sometimes this idea has been stylized as a battle between reason and emotion. According to other proposals, there are incoherent mental states in a weak-willed mind (Egan 2008, p. 61), several semi-autonomous substructures (Davidson [1982] 2004, p. 181), or two conflicting sets of first-order preferences (Shefrin and Thaler 1980; Thaler and Shefrin 1981). In the diachronic case, the two or more parts of the divided agent are present over time, that is, at least at two non-identical points in time; in the synchronic case, they are present at one and the same point in time.

These ontological suggestions are rarely given without additional causal or explanatory details, as the claim that an agent is divided does not, per se, fully describe weakness of the will but demands further specification. In the diachronic case, the ontological claim raises the question of how the change over time comes about. In the synchronic case, it calls for more details about how and why one part takes priority over another at a given instance. In short, to fully describe weakness of the will, we need to at the very least specify how changes between the two or more parts come about. Answers to these questions do not depend on the kinds of entities invoked that represent the two respective sets of behaviour, preferences, and values (cf. Mele 1987, pp. 80–4). Therefore, I am not going to investigate the ontology of weakness of the will any further here.

Instead, to describe weakness of the will along the lines indicated by options 1 and 2, we need to describe its mechanism, for instance by giving causal or explanatory details. At the very least, this requires an account of how an agent apparently reverses their values, preferences, and corresponding behaviour, either over time or at a given moment. The synchronic view needs an account of why and how one set of values, preferences, and actions dominates even though there is, at the same time, a rival set with greater values, stronger preferences, and a corresponding course of action.


Why and how is the set associated with stronger preferences and greater values overcome by the other if they are both present at one and the same time? Why and how does the former determine the agent’s action but not the latter? The diachronic view requires an account of why and how one set of values, preferences, and actions dominates at one point in time but a rival set dominates at another point in time. Why and how does one set prevail on one occasion, yet the other on another occasion? In particular, why and how does the set of allegedly lesser values and weaker preferences sometimes translate into action and sometimes not?

In short, on both views a description of weakness of the will within our economic framework of valuation and decision-making is one that describes preference reversals either over time or at a given point in time. Indeed, some approaches understand weakness of will as a reversal of preferences. Amongst them is the presumably earliest account of weakness of will in the history of Western philosophy, attributed to Socrates (Plato, Protagoras 356A–358D):¹⁷ although an agent normally prefers to, say, not indulge, when the opportunity presents itself, the power of immediate pleasure associated with indulging overcomes the person, and they give in to temptation. Shortly afterwards, when the agent’s attention is no longer dominated by the salient pleasure, the dieter regrets their lapse and prefers abstaining once more (Penner 1997). On this view, then, weakness of will is a reversal of preferences over time.

In conclusion, once we adopt the economic framework of valuation and behaviour, we are conceptually prohibited from describing weakness of will as a case in which the agent’s values, preferences, and choices do not align. Within the framework, our best option is to adopt a Socratic stance that stipulates preference reversals. Many authors have indeed chosen this view. For support, they can rely on the predictive and econometric power of delay discounting theory. The following chapter takes a closer look at delay discounting theory, and at how it describes preference reversals.

¹⁷ Cf. Section 3.1.1.


5 Discounting

Discounting theories were initially developed within modern economics. The economic account of human agency that provides an axiomatic framework for discounting theory was our topic in the previous chapter. Within this framework, weakness of will is arguably best understood as a certain kind of preference reversal. For example, the weak-willed dieter¹ initially decides to skip dessert after dinner to promote their health but then revises this decision when temptation calls. Delay discounting theories are powerful accounts for describing these preference reversals. This chapter takes a closer look at them in Sections 5.1 to 5.3. More specifically, Section 5.1 introduces basic assumptions and conceptions of delay discounting theory, and Sections 5.2 and 5.3 then explain how the theory models preference reversals and weakness of will, respectively. Very roughly, delay discounting theory describes how discounted value may change with time and delay. If the (discounted) values of two options change such that the initially less valuable option is now more valuable than the other, a preference reversal occurs. To illustrate, when the dieter initially decides to skip dessert, they discount its value with the time until dinner, and the discounted value of abstaining is greater. But at dinnertime, indulging is no longer delayed and its value is thus no longer discounted. Then it trumps the value of skipping.

The remainder of this chapter critiques this approach. In particular, it raises three main concerns. The first, described in Section 5.4, affects delay discounting theory as a model for preference reversals. It exposes a common myth about the ability of delay discounting models to describe preference reversals: one particular version of delay discounting theory, namely hyperbolic discounting, is commonly regarded as superior to rival models, notably exponential discounting models, in its ability to describe preference reversals. As it stands, this view is at best misleading; I shall therefore qualify it. Second, Section 5.5 argues that there can be weakness of will without preference reversals, and preference reversals without weakness of will. This may worry us if we hoped that preference reversals as described by delay discounting theories could provide us with necessary and sufficient conditions for weakness of the will. Third, Section 5.6 explains that there are some plausible examples of weakness of will which do involve preference reversals and yet cannot be described by delay discounting theory as it stands.

¹ Cf. Chapter 1.



Specifically, these are cases in which an agent reverses her initial choice of a larger but delayed reward over an immediately available but smaller one.

In sum, this chapter explains and points out limitations of the scientific approach to weakness of will described in Part II. It thus provides the foundations for Part III, which details how this approach can shed new light on our philosophical understanding of weak-willed delay discounting and weak-willed actions.

5.1 Delay Discounting Theory

Imagine Eve offers you an apple. While the apple has many properties, let us just focus on its wizenedness for now. Depending on whether it is more or less shrivelled, you will probably be more or less inclined to accept the offer. A fresh and crunchy apple will have a higher value than an old and wizened one. In other words, you discount the prospect of receiving Eve’s apple with its wizenedness. The more shrivelled the apple, the lower will be its (discounted) value. Conversely, the fresher and crisper the apple, the higher will be its value. More generally, ‘discounting’ refers to the phenomenon that the value of a prospect changes with some of its features.² Mathematically, discounting theories describe the discounted value of a reward as a product of the un-discounted value and some discount factor:

discounted value = discount factor × un-discounted value (1)

We can abbreviate equation 1. To do so, let us write ‘E’ for the discounted value of a reward, ‘V’ for its un-discounted value, and ‘f’ for the discount factor:

E = f × V (2)

In our example, the discounted value of the apple is described as the value of an apple regardless of its wizenedness, multiplied by a discount factor that varies according to the wizenedness of the apple. Wizenedness is certainly an interesting feature and crucial to our discounting of the value of apples. However, other properties of rewards have gained considerably more attention, such as probability or temporal delay.

² We are thus concerned with what has been called ‘pure’ discounting, that is, the discounting of value itself, not of money or of a commodity (see Section 7.3 for further details). Note that this does not rule out the possibility that pure discounting is in turn determined by some factor other than the feature with which the agent discounts.
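Equations (1) and (2) can be computed directly. In the following sketch the linear wizenedness discount factor and all numbers are invented purely for illustration; the theory itself places no constraint on the discount factor at this point.

```python
# Equation (2): discounted value E = discount factor f × un-discounted value V.
# Here f varies with the apple's wizenedness; the linear form is an
# arbitrary illustrative choice, not part of the theory.

def wizenedness_factor(wizenedness: float) -> float:
    """Discount factor between 0 and 1: the more shrivelled the apple
    (wizenedness close to 1), the smaller the factor."""
    return max(0.0, 1.0 - wizenedness)

V = 10.0                            # un-discounted value of the apple (invented)
E = wizenedness_factor(0.8) * V     # a rather wizened apple
print(E)                            # 2.0: much less valuable than a fresh one
```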


Time or delay discounting will be our focus from now on. Like ‘wizenedness-discounting’, delay discounting is typically negative: the more delayed a reward is from the present, the smaller will be its discounted value E. But this need not be so. Some older wines are more valuable than younger ones. We may thus discount wines positively rather than negatively: the longer we wait until consuming them, the higher their value. Either way, in delay discounting the discount factor f varies with the delay of the reward. That is, f is determined by the time or delay d it takes until the reward materializes. Mathematically, the discount factor f is itself a function of d: f = f(d). Accordingly, the value of the delayed reward E is also a function of the delay d. By replacing f with f(d) in equation 2, we get:

E(d) = f(d) × V (3)

Let’s call f a ‘discount function’. Note that the discount function is f, not E. Let us call E an ‘expected-value function’. The equation and nature of the discount function have been the object of much debate and research. They have been subjected, not always systematically, to normative as well as descriptive desiderata. For instance, when Samuelson (1937) first introduced discount theory to economics, he gave an expression for f that ensures strict stationarity. A strictly stationary discount function is independent of the time when the evaluation is made (Farmer and Geanakoplos 2009, p. 2). In other words, Samuelson’s model assumes that an agent is at all times equally patient. In contrast, empirical research typically aims to find an expression for f that fits the data best. Research has found patterns in delay discounting behaviour by human and non-human animals. For instance, other things being equal, they tend to value receiving gains earlier, and suffering losses later (Estle et al. 2006; Kahneman and Tversky 1979; Thaler 1981).

Figure 1 depicts discount curves, the graphs of expected-value functions. It illustrates negative delay or time discounting for a reward. The (expected) value on the y-axis is plotted as a function of time or delay on the x-axis. The left panel (a) shows discounting with delay. It illustrates that the value of the expected reward decreases with delay: the more delayed a prospect, the lower is its (expected) value. The right panel (b) shows discounting over time. It illustrates that, the closer in time an agent gets to the expected time of realization of the reward (tR), the larger will be its expected value. For example, in the dieter’s case, enjoying dessert after dinner (at tR) is a reward. The closer in time the dieter is to having dessert, the less they discount it, and the higher is the expected value of the dessert. Similarly, enjoying dessert is most valuable when the dieter can do so immediately and without delay. The longer they have to wait for it, the more they discount it and the lower the value will be that they ascribe to it.



Figure 1 (a) Discounted value as a function of delay. For a delay of 0, the expected value E is equal to the un-discounted value V. The larger the delay, the smaller the discounted value. (b) Discounted value as a function of time. At the time of realization (tR), when the agent receives the reward and the delay is 0, the expected value E is equal to the un-discounted value V. The longer the agent has to wait until tR, the smaller the discounted value E.
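To make the two panels concrete, here is a minimal sketch of equation (3) under an assumed exponential discount function f(d) = exp(−kd). The text has not committed to any functional form at this point; the exponential form is used here only because it is simple and strictly stationary (as in Samuelson’s model), and the numbers are invented.

```python
import math

V = 10.0   # un-discounted value of the reward (invented)
k = 0.1    # discount rate (invented)

def f(d: float) -> float:
    """Assumed exponential discount factor as a function of delay d."""
    return math.exp(-k * d)

def E_of_delay(d: float) -> float:
    """Panel (a): the discounted value falls as the delay grows."""
    return f(d) * V

def E_of_time(t: float, t_R: float) -> float:
    """Panel (b): as time t approaches the time of realization t_R,
    the remaining delay shrinks and the discounted value rises."""
    return f(t_R - t) * V

print([round(E_of_delay(d), 2) for d in (0, 5, 10)])     # [10.0, 6.07, 3.68]
print([round(E_of_time(t, 10), 2) for t in (0, 5, 10)])  # [3.68, 6.07, 10.0]
```

Because this f is strictly stationary, the two readings are mirror images of each other, matching the relation between synchronic and diachronic discounting noted in the text.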

As is plain, the functions whose graphs are depicted in Figure 1 differ in important respects in the two cases. For one thing, the curves slope downwards when the discounted value is plotted as a function of delay (a), and they slope upwards when the discounted value is plotted as a function of time until realization (b). Let us call the functions whose graphs are depicted in Figure 1 (a) ‘synchronic discount functions’ and those whose graphs are depicted in Figure 1 (b) ‘diachronic discount functions’. The former describe synchronic, the latter diachronic delay discounting. Synchronic and diachronic delay discounting will be the same on the assumption that the discount function f is strictly stationary, that is, independent of the time t at which the agent discounts. However, many models do not assume stationarity, so we shall retain the distinction between synchronic and diachronic delay discounting. In sum, we have now seen how discounting theories model expected value of delayed rewards. Next, let us consider how they can account for preference reversals and, consequently, weakness of will.

5.2 Preference Reversals

Weakness of will is best understood as a kind of preference reversal within an economic framework of decision-making and valuation.³

³ As argued in Section 4.3.


This section provides further details about those preference reversals on which the subsequent Section 5.3 builds to explain how delay discounting theory accounts for weakness of will.

Imagine that an agent prefers vanilla ice cream now over chocolate ice cream now. Imagine that the very same agent, asked at the very same moment, prefers chocolate ice cream next Sunday over vanilla ice cream next Sunday. This agent displays a synchronic preference reversal:⁴ their preferences regarding the two ice cream flavours are, at one single point in time, reversed.

Strictly speaking, these synchronic preference reversals are not preference reversals at all. This is because the two preferences do not concern precisely the same two options. They do concern the same rewards, namely vanilla and chocolate ice cream. Yet the options given in the first choice are between immediate rewards. The options in the second choice concern rewards that the agent will receive next Sunday.


Figure 2 (a) Synchronic preference reversal. The discount curves depict expected values of rewards discounted with their delays d. The y-axes depict expected value or utility (E), the x-axis delay. The two curves indicate how the values of two rewards, vanilla and chocolate ice cream, decrease with delay. When there is no delay (d0 = 0), chocolate is more valuable than vanilla, V(vanilla) < V(choc). If the rewards are realized after a delay of di, they have equal value, and we might expect that the agent whose preferences are depicted is indifferent between vanilla and chocolate. If they have a delay of d2, vanilla is more valuable than chocolate. So if they were due at d1, the agent would probably choose chocolate over vanilla but if they were due at d2, the agent would choose vanilla rather than chocolate. (b) (Diachronic) preference reversal. The discount curves depict the expected values of two delayed rewards. The y-axes depict expected value or utility (E), the x-axis time. The two curves indicate the values that two rewards, vanilla and chocolate, have at any point in time. As the times of their realization (tA and tB, respectively) approach, the (expected) values of the rewards increase. At t1, the agent whose preferences are depicted values chocolate more than vanilla (V(choc) > V(vanilla)), so they would presumably prefer and choose chocolate over vanilla. At t2, however, vanilla is more valuable than chocolate, so the agent would presumably prefer and choose vanilla rather than chocolate.

⁴ I borrow the terms ‘diachronic’ and ‘synchronic preference reversal’ from Frederick, Loewenstein, and O’Donoghue (2002, p. 361, n. 14).

OUP CORRECTED PROOF – FINAL, 29/6/2023, SPi

discounting 79 Sunday. The first choice concerns what Hare called a ‘now-for-now preference’, the second one concerns a ‘now-for-then preference’ (Hare 1981, pp. 101–2). Imagine that Sunday arrives and the agent is then given a choice between having either chocolate or vanilla ice cream immediately. Imagine that they prefer chocolate ice cream. Remember, though, that last time they were offered a choice between chocolate and vanilla immediately, they preferred vanilla. The agent thus displays a diachronic preference reversal: their preferences reverse over time. Figure 2 shows two panels depicting synchronic and diachronic preference reversals, respectively. The reversal is in each case illustrated by a crossing of the two discount curves. Figure 2 (a) shows a synchronic preference reversal; Figure 2 (b) a diachronic preference reversal. The former shows an agent’s preferences between chocolate and vanilla ice cream with varying delays at one and the same point in time; the latter shows the agent’s preferences between those two rewards over time. More complex reversals occur over time but also concern delayed rewards. For instance, it seems possible that an agent prefers chocolate ice cream the day after tomorrow over vanilla ice cream now but, on Sunday, prefers chocolate ice cream tomorrow over vanilla ice cream immediately.
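Both kinds of reversal can be made concrete in a few lines of code. The sketch below is purely illustrative: it borrows the hyperbolic discount function introduced below in Section 5.4.1, and all values, rates, and times are hypothetical, chosen only so that the curves cross as in Figure 2. (That the synchronic crossing requires two different discount rates anticipates a point argued in Section 5.4.2.)

    def expected_value(v, delay, k):
        """Hyperbolically discounted value (see Section 5.4.1): E = V / (1 + k * delay)."""
        return v / (1 + k * delay)

    # Synchronic reversal (Figure 2a): both options assessed now, at two delays.
    # Chocolate is discounted more steeply than vanilla, so the curves cross.
    V_CHOC, K_CHOC = 10.0, 0.9
    V_VAN, K_VAN = 8.0, 0.1

    for delay in (0.0, 7.0):  # e.g. now versus next Sunday
        choc = expected_value(V_CHOC, delay, K_CHOC)
        van = expected_value(V_VAN, delay, K_VAN)
        print(delay, "chocolate" if choc > van else "vanilla")
    # delay 0.0 -> chocolate; delay 7.0 -> vanilla

    # Diachronic reversal (Figure 2b): one shared discount rate, with the
    # larger reward (chocolate) realized later than the smaller one.
    K = 0.2
    T_VAN, T_CHOC = 10.0, 13.0  # realization times

    for t in (0.0, 9.5):  # long before versus shortly before realization
        choc = expected_value(V_CHOC, T_CHOC - t, K)
        van = expected_value(V_VAN, T_VAN - t, K)
        print(t, "chocolate" if choc > van else "vanilla")
    # t 0.0 -> chocolate; t 9.5 -> vanilla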

5.3 Weakness of Will

Delay discounting theory has been invoked by philosophers and behavioural scientists alike to model or even explain weakness of will. Within an economic framework of agency, weakness of will is best understood as a kind of preference reversal, as specified in the previous section. The present section explains how delay discounting theory helps us to spell out this suggestion in detail.

For illustrative purposes, consider the dieter’s case once again. They have to decide between the following two options. On the one hand, there is the option of enjoying a delicious but unhealthy dessert; on the other hand, there is the option of foregoing it, thereby promoting eating habits that contribute to their goal of a healthier lifestyle. Given the assumptions of our economic framework of human agency,⁵ we can suppose that the dieter assigns an expected value to a healthy lifestyle, and a different value to the dessert. As these expected values reflect the dieter’s relative preferences, which they in turn reveal in their behaviour, we can infer that, in the morning, the dieter prefers skipping dessert over having it, yet that, in the evening, they prefer having dessert over skipping it.

We can use delay discounting theory to state this more formally. Let f be the dieter’s discount function. Let us assume that it is the same for both delayed rewards. f specifies how the dieter discounts the values of skipping and having dessert. Let V(having dessert) and V(skipping dessert) be the un-discounted values of having dessert and skipping it, respectively. We can express the expected values of indulging in and of skipping dessert, E(having dessert) and E(skipping dessert), as functions of the un-discounted values and the discount function f:⁶

    E(having dessert) = f × V(having dessert)
    E(skipping dessert) = f × V(skipping dessert)

The dieter discounts both values with temporal delay, and the discount function f is thus a function of delay.⁷ We can thus further specify the expected values of having and skipping dessert as:

    E(having dessert) = f(d) × V(having dessert)
    E(skipping dessert) = f(d) × V(skipping dessert)

The delay, d, changes as time elapses. We can conceptualize d as the difference between a later and an earlier point in time: d = t_later − t_earlier. In the dieter’s case, we are particularly interested in the delays in the morning, when the dieter initially decides to skip dessert, and at dinnertime, when the dieter reverses their preference. Let t_1 be morning, t_2 dinnertime, and t_3 a point in time further in the future when the dieter benefits from foregoing dessert. For instance, imagine t_3 is a point in time in the morning after the sumptuous dinner. At t_3 the dieter, if and only if they manage to abstain from overindulging in dessert and therefore escape an upset stomach and restless night, will awake early, feel light and energized, and enjoy a morning run.

Let us now consider the expected values of skipping and indulging in dessert. In the morning, the dieter decides to skip dessert; thus we can infer that they expect a greater value from skipping than from indulging. Indulging in dessert is delayed by t_2 − t_1, the time until dinner, and enjoying the benefit of skipping is further delayed, by t_3 − t_1, the time until the next day. So we have, in the morning (at t_1):

    E(having dessert) < E(skipping dessert)
    ⇔ f × V(having dessert) < f × V(skipping dessert)
    ⇔ f(t_2 − t_1) × V(having dessert) < f(t_3 − t_1) × V(skipping dessert)

⁵ Cf. Section 4.3.
⁶ Cf. equation 2, Section 5.1.
⁷ Cf. equation 3, Section 5.1.


In contrast, at dinnertime (t_2), the dieter chooses to have dessert rather than to skip it. This indicates that they have reversed their initial preference and that the (expected) value of indulging is now greater than the expected value of skipping the food. With the passing of time, the relevant delays have changed as well. At dinnertime, the expected value of having dessert is no longer discounted because the delay has fully elapsed. This delay is now 0. However, enjoying the benefits of skipping dessert is still delayed until the next day. The delay of this enjoyment is t_3 − t_2 at dinnertime. Thus, for the expected values, we have:

    E(having dessert) > E(skipping dessert)
    ⇔ f × V(having dessert) > f × V(skipping dessert)
    ⇔ f(0) × V(having dessert) > f(t_3 − t_2) × V(skipping dessert)

In short, delay discounting theory allows us to describe weak-willed preference reversals like the dieter’s in mathematical form. Illustrating this graphically, Figure 3 shows plots of the two expected-value functions E(having dessert) and E(skipping dessert). The two discount curves depict, for each point in time, the expected values of the two rewards, and thereby indicate the dieter’s relative preferences, including their diachronic preference reversal.

As the dieter’s example illustrates, delay discounting theory models and even predicts weak-willed behaviour. More specifically, it allows us to characterize weakness of the will in any of the following three ways.

Figure 3 Weakness of will in the dieter’s case. The discount curves depict expected values of two rewards (skipping dessert for health benefits versus indulging in it), discounted with their delays. The y-axis depicts expected value or utility (E), the x-axis time (not to scale). In the morning (at t_1), skipping dessert is the preferred option (E(indulging) < E(skipping)), yet at dinnertime (t_2), it is less preferred (E(indulging) > E(skipping)). At the point in time when the two discount curves cross, one might expect the dieter to be indifferent between the two options.
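To see the reversal in numbers, here is a minimal sketch. It assumes, purely for illustration, the exponential discount function introduced in Section 5.4.1, with hypothetical values and rates; the smaller reward is given the steeper rate, in line with the magnitude effect discussed in Section 5.4.2.

    import math

    V_INDULGE, R_INDULGE = 5.0, 0.12  # smaller, sooner pleasure; steeper rate
    V_SKIP, R_SKIP = 8.0, 0.05        # larger, later health benefit
    T1, T2, T3 = 0.0, 12.0, 24.0      # morning, dinnertime, next morning (hours)

    def expected(v, r, delay):
        """Exponentially discounted value: E = exp(-r * delay) * V."""
        return math.exp(-r * delay) * v

    # Morning (t_1): indulging is delayed until t_2, the benefit of skipping until t_3.
    print(expected(V_INDULGE, R_INDULGE, T2 - T1) <
          expected(V_SKIP, R_SKIP, T3 - T1))   # True: skipping preferred

    # Dinnertime (t_2): indulging is immediate, the benefit is still a day away.
    print(expected(V_INDULGE, R_INDULGE, 0.0) >
          expected(V_SKIP, R_SKIP, T3 - T2))   # True: indulging preferred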


First and familiarly, weakness of will can be described as a certain kind of preference reversal (Levy 2014; Broome 2012, p. 142; Elster 1985, p. 251): the weak-willed agent initially chooses the larger delayed reward but reverses this decision after some of the delay has elapsed. For instance, the dieter chooses to skip dessert in the morning rather than to indulge; however, they reverse this decision at dinnertime when they indulge after all.

Second, weakness of will can be described as the choice of a smaller, sooner over a larger, later reward (Ainslie 1975, p. 463; Fujita 2011). In this vein, the dieter can be seen as weak-willed because they choose the dessert at dinnertime, although the pleasure they derive from its enjoyment is smaller than the pleasure they would later gain from a happy morning run.

Third, weakness of will may be understood as an overly steep discounting of a future reward (Schelling 1984, p. 62). That is, the value of one of the two rewards (the more valuable one, if they are of unequal un-discounted value) is discounted more than the value of the other one. Mathematically, the discount factor f is smaller for the future reward.⁸ The idea is that a weak-willed agent, such as the dieter who indulges in a heavy dessert, discounts the future (say, the prospect of better health) with a very high interest rate, undervaluing future or delayed benefits when compared with present or immediate ones. Although they know that the future benefit would be greater, the dieter fails to act on this knowledge, as temptation seduces them to overturn their earlier resolve.

The three descriptions are by no means mutually exclusive; on the contrary, the first entails both the second and the third. Here is why. According to the second description, weakness of will is the choice of a smaller, sooner over a later and larger reward. It is thus broader than the first, which in addition also requires an initial choice in the other direction. According to the third description, weakness of will is an overly steep discounting of a future reward. This overly steep discounting is also entailed by a preference reversal: a preference reversal occurs if one of two rewards is preferred at one point in time or with a certain delay, and if the other is preferred at another point in time or with some other delay. This implies that, at either of the two points in time or for either of the delays, one of the two rewards has to be discounted more than the other, even if it is of the same size or larger. The Appendix shows this implication mathematically.⁹ Hence, if there is a preference reversal in a case of weakness of will, there must also be an overly steep discounting; thus the first description of weakness of will within the discounting framework implies the third one.
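The entailment from the first to the second description can also be checked mechanically. The boolean encoding below is a hypothetical toy of my own, not part of the formal framework; the entailment of the third description requires the mathematical argument in Appendix A.

    def is_reversal(prefers_later_initially, prefers_later_finally):
        """Description 1: the later, larger reward is chosen first, then abandoned."""
        return prefers_later_initially and not prefers_later_finally

    def chooses_smaller_sooner(prefers_later_finally):
        """Description 2: in the end, the smaller, sooner reward is taken."""
        return not prefers_later_finally

    # Whatever the initial choice, a reversal entails a smaller-sooner choice,
    # but not vice versa: description 2 is broader than description 1.
    for initial in (True, False):
        for final in (True, False):
            if is_reversal(initial, final):
                assert chooses_smaller_sooner(final)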

As we have set up the dieter’s example here, it is a case of a diachronic preference reversal: at one point in time (in the morning), they prefer one option, namely skipping dessert, over its alternative, namely having dessert. Nevertheless, at a later point in time, their preferences are exactly reversed: they now prefer having dessert over skipping it.

Diachronic preference reversals are typically used to model weakness of the will. However, weak-willed agents like the dieter may also exhibit synchronic preference reversals. For instance, whenever the dieter is asked whether they would like to have dessert later, they may reply that they would prefer to skip it. However, if you offer them a choice between having dessert or skipping it now, they may prefer to have it. This is, arguably, weakness of will.

It is no surprise that delay discounting theory has been a popular approach to weakness of the will, as it is an extremely elegant and powerful model: it provides us with three ways of characterizing weakness of will, it connects them with econometric and conceptual rigour, and, given the assumption that valuation, preferences, and decision-making are intimately linked, it can predict with mathematical precision how much an agent values a reward when, and at what time they will reverse their preference between this reward and another. Discounting models have accordingly been widely used not only in economics but also in the behavioural sciences, and in philosophy.

Let us return to Mele’s account¹⁰ of weakness of the will to revisit and assess by way of example how a philosopher uses delay discounting theory and related empirical work to support his account of weakness of the will. Roughly, on Mele’s view, weak-willed action is “action against a consciously held better judgment” (Mele 1987, p. 7). He argues that this action can be partially explained in terms of delay discounting theory, notably the approach taken by Ainslie (1975, 1982). In particular, the empirical findings describe what Mele calls the agent’s levels of motivation to perform various possible actions and how these levels of motivation may change over time in such a way that an agent may be initially motivated to do one thing but then become more motivated to do something else instead. If an action is weak-willed, then the agent is more motivated to perform it than an alternative they judge to be better.

With the conceptual background of agency and further details of delay discounting theory on the table, we can take a closer look at these claims. In particular, we may now notice two issues.

First, Mele is primarily interested in describing weakness of will as a case in which a judgement and an action occur simultaneously, that is, “action against a consciously held better judgment about something to be done here and now” (Mele 1987, p. 7; see also Mele 2012, p. 8). For convenience, let us refer to this class of weak-willed actions as ‘instantaneous weakness of will’.¹¹ Instantaneous weakness of will is neither a reversal of preferences over time (that is, a diachronic preference reversal) nor a curious combination of preferences concerning delayed rewards (that is, a synchronic preference reversal).¹² Yet delay discounting theory is only concerned with these two classes of cases. Ainslie’s theory in particular does not, and indeed does not aim to, describe the phenomenon that Mele has in mind. To my best knowledge, there is no discussion of it in Ainslie’s work; from his behaviourist outlook, a divergence between a better judgement and a simultaneous action is simply not conceivable.¹³ Rather, Ainslie’s approach accounts for diachronic preference reversals only (as Peijnenburg 2005, p. 657, points out and Ainslie 2005, p. 669, concurs).

Challenged to explain instantaneous weakness of will, Ainslie (2005) replies that these seemingly instantaneous cases are in fact diachronic preference reversals. Briefly, he suggests that the belief counteracted in an instance of weakness of will can be modelled as the expected value of the larger, later reward: “all the examples that authors have described of doing what you simultaneously believe not to be best can be seen as obeying an impulse in violation of a personal rule. You ‘really’ wanted the long-range rewards defended by your rule, but in the case at hand the rule was not strong enough” (Ainslie 2005, p. 669).

However, even if this response is plausible, it is unclear whether it would support Mele’s view. If it turned out, as Ainslie argues, that all seemingly synchronic cases of weak-willed action are in fact diachronic, then Mele’s account would describe something that does not exist. On the other hand, if it turned out that Ainslie’s claims are untenable and that there are genuine cases of synchronic weakness of will, then Ainslie’s model would not apply to these cases. Then Mele could not rely on them to support an account of instantaneous weakness of will. Either way, Ainslie’s discounting model describes a phenomenon different from the one Mele wishes to explain. Thus it remains unclear how Ainslie’s model could support Mele’s claims. Whether this issue could be addressed by developing Mele’s account of weakness of the will in greater detail, or by modifying it where necessary, remains a question open for further research.

Second, one may wonder whether Mele’s account lives up to his aspiration to address Davidson’s so-called ‘paradox of irrationality’ (Davidson [1982] 2004, p. 303) for weakness of will. The paradox calls for an account that neither explains irrationality away nor assigns it overly dogmatically. Mele (1987, ch. 6) maintains that his approach resolves the paradox. It supposedly does so because it can on the one hand account for the irrationality of the weak-willed action and on the other hand allow for the fact that the weak-willed agent acts for a reason. The weak-willed action is irrational because it is action against a better judgement. At the same time, it is performed for a reason as explained by the empirical evidence on delay discounting and attention. This evidence describes the agent’s levels of motivation to perform each of several actions, and in particular how the option the agent is most motivated to perform can change over time.

However, invoking empirical research seems to help Mele less in this respect than he might have hoped. Here is why. As we have seen,¹⁴ some approaches apply the economic framework of human agency universally, that is, to any and not just to rational agents. Importantly, this holds for empirical accounts, like those by Ainslie and Mischel, which Mele draws on. They do not only describe what we might regard as irrational actions, they also describe potentially rational ones. Although Mele relies on this strand of work, it provides him with a description not only of irrationality but also of rationality. In order to explain irrationality, then, further details are required about what distinguishes rational from irrational agency within the framework.¹⁵ For instance, rational agents could be distinguished from irrational ones in that the former discount the future less steeply than the latter (cf. Bickel, Athamneh, et al. 2019; Bickel, Koffarnus, et al. 2014; Kirby, Petry, and Bickel 1999; Noda et al. 2020),¹⁶ where ‘less steeply’ should be further specified. Or alternatively, rational agents might reverse their preferences only for good reasons, whatever those might be (cf. Holton 1999, p. 249). Now, Mele does not seem to provide us with such details. His explanation of weak-willed action does not differentiate rational from irrational agency within the empirical framework. Therefore, he does not give a full explanation of irrationality at all and, a fortiori, does not resolve the paradox of irrationality.

To briefly sum up, this section has explained how discounting theory may be and has been invoked to describe weak-willed action. However, we have also seen that this approach faces challenges. The next three sections raise one further caveat each.

⁸ The difference in f may in turn be due to various reasons; e.g. the discount rates for the two rewards may differ. We return to this point below in Section 5.4.1.
⁹ Cf. Appendix A.
¹⁰ Cf. Section 3.4.
¹¹ Section 5.5 returns to these cases and the problem they raise for accounts that regard weakness of will as a preference reversal.
¹² Cf. Section 5.2.
¹³ As explained in Chapter 4.
¹⁴ In Chapter 4.
¹⁵ We discuss this question in Chapter 7.
¹⁶ Cf. Section 2.2 for the suggestion that addicts have steeper discount rates.

5.4 Preference Reversals in Discounting Models

The first caveat about using discounting models to describe weakness of will concerns a claim made about two different families of discounting theories, which differ in their discount function f. This is the claim that hyperbolic delay discounting theories are superior to exponential delay discounting theories. Before considering this claim in greater detail in Section 5.4.2, the next Section 5.4.1 introduces the different discount models in some detail.




5.4.1 Exponential and Hyperbolic Models

The present section introduces exponential and (quasi-)hyperbolic discounting models in turn. Exponential discounting theory is also known as the ‘discounted utility’ or ‘DU model’. Initially proposed as a purely conceptual theory, it was soon also regarded as a descriptive model.

The classic account of and rationale for exponential discounting is due to Samuelson (1937). Samuelson suggests discounting the value of money according to the interest one could earn on it. For instance, if you are given £1 today, you could invest it into a suspiciously generous savings account, earning 10% interest on it. In a year from now, you would have £1.10. If I offered you a choice between £1 now and some other amount of money in a year from now, you should take the £1 now iff the amount of money I offer you in a year from now is less than what you could earn by putting the £1 into your savings account now, namely £1.10. So the value of £1 today is equal to the value of £1.10 received in a year from today. This assumes that, e.g., you trust your bank, the stability of the pound, etc.

Let us generalize. Call i the interest rate you can earn during one period of time, and d the number of periods. For instance, in our example, i = 10% and one period is one year. So, d years from now, you will have:

    F_i(d) = P(1 + i)^d

F is the total of your future assets, that is, the sum of your current savings plus the interest and compound interest you earn on them. P is the savings you have at present, £1.

We can further generalize this rationale to any possible gain: imagine that Eve offers you a choice between an apple now and ten apples in ten years. If you are a skilled gardener, you could take the apple now, grow an apple tree out of it, and have way more than ten apples in ten years. Any future reward in terms of apples had better make up for the possible gain you forego by not choosing the apple now. You discount delayed rewards according to the gains you forego by waiting for them. In this vein, the discounted value E of a future reward can be defined as the utility of an un-discounted present reward V divided by (1 + i)^d:

    E_i(d) = V / (1 + i)^d = (1 + i)^(−d) × V    (4)

Next, Samuelson defined, arbitrarily as he points out, the following discount rate r:

    r := ln(1 + i)    (5)

Solving 5 for i and replacing it in 4 yields¹⁷

    E_r(d) = e^(−rd) × V    (6)

Equation 6 states an expected-value function¹⁸ of the form E(d) = f(d) × V with the discount function

    f(d) = e^(−rd)    (7)

A function of this form is called an ‘exponential discount function’. In equation 7, it is stated as a synchronic delay discounting function.¹⁹ For diachronic delay discounting, the exponential discount function in its common form is

    f(t_1, t_V) = e^(−r(t_V − t_1))    (8)

with t_V being the expected time of realization and t_1 the point in time at which the expected value in question is assessed (‘now’). The diachronic discount function thus also specifies the discount factor f as a function of a delay, and this delay is just t_V − t_1. However, in contrast to synchronic delay discounting, the discount factor approaches 1 (and the expected value thus is close to the non-discounted one) near t_V, the expected time of reward realization. In the synchronic case, the discount factor is largest (that is, equal to 1) in the present moment, as a reward without delay is not discounted at all.

One reason why Samuelson might have defined the discount rate r in this arbitrary way is that the exponential discount function ensures that an individual’s preference ordering stays the same over time (Samuelson 1937, p. 160).

¹⁷ ln(x) is the natural logarithm of x, the exponent to which e must be raised to produce x. e is a mathematical constant named after the mathematician Leonhard Euler. We can thus write ln(x) = y as e^y = x, and:
    r = ln(1 + i)  ⇔  e^r = 1 + i  ⇔  i = e^r − 1
    E_i(d) = (1 + i)^(−d) × V  ⇔  E_r(d) = (1 + e^r − 1)^(−d) × V = e^(−rd) × V
¹⁸ Cf. equation 3 in Section 5.1.
¹⁹ Cf. Section 5.1 for the distinction between synchronic and diachronic discounting.
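A quick numerical check of equations 4 to 6, using the £1 example from above; the assertion confirms that Samuelson’s substitution leaves the discounted value unchanged.

    import math

    i = 0.10                   # interest rate from the £1 example
    r = math.log(1 + i)        # equation 5
    V, d = 1.10, 1             # £1.10 received in one year

    via_interest = V / (1 + i) ** d    # equation 4
    via_rate = math.exp(-r * d) * V    # equation 6
    assert abs(via_interest - via_rate) < 1e-12
    print(via_interest)  # 1.0: £1.10 in a year is worth £1 today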


For instance, if you prefer a reward B with un-discounted value V(B) over a reward A with un-discounted value V(A) today, and if you discount both values with the same discount rate r according to an exponential discount function, then you will always prefer the discounted reward B over the discounted reward A as well. In short, the exponential discount model has been developed based on conceptual or normative considerations.

In contrast, the hyperbolic model hails from empirical research. The hyperbolic delay discount model originated within a behaviourist strand of research into value-based decision-making and learning. In particular, it was found that an animal choosing between two options responds at roughly the same rate at which the options yield a reward (Ainslie 2001; Davison and McCarthy [1988] 2016). For example, if a pigeon receives twice as much food if it pecks the left button than if it pecks the right button, it tends to peck the left button twice as frequently as the right one (Herrnstein 1961). In other words, the animal ‘matches’ its response rate to the rate of reinforcement. This relationship has been expressed in the so-called ‘matching law’:

    P_1 / P_2 = R_1 / R_2

P_1 and P_2 are the response rates for option 1 and option 2, respectively, and R_1 and R_2 are the respective reinforcement rates.

Moreover, it was found that if there is a delay d between a response and the realization of the respective reward, behaviour is sensitive to the delay and the un-discounted value V of the reward (cf. Ainslie 2001; Chung and Herrnstein 1967; Killeen 1972; Shull, Spear, and Bryson 1981). Mathematically, this was expressed as

    P_1 / P_2 = (V_1 / V_2) × (d_2 / d_1)    (9)

V_1 and V_2 are the un-discounted values of the two rewards and d_1 and d_2 their respective delays. In other words, the more frequently an option is chosen over another, ceteris paribus, the greater is its un-discounted value or the less it is delayed compared to its alternative. Because choice reveals preference and thus expected value,²⁰ the relationship stated by equation 9 has also been taken to provide an expression of expected or discounted value E,

    E = V / d    (10)

that is, the expected value of a delayed reward is its un-discounted value V divided by the delay d.

²⁰ Cf. Section 4.1.
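Equation 9 can be restated as a one-line sketch; the numbers are hypothetical apart from the pigeon example from the text.

    def response_ratio(v1, d1, v2, d2):
        """Equation 9: P1 / P2 = (V1 / V2) * (d2 / d1)."""
        return (v1 / v2) * (d2 / d1)

    # Herrnstein's pigeons: equal delays, twice as much food on the left.
    print(response_ratio(2.0, 1.0, 1.0, 1.0))  # 2.0: pecks left twice as often

    # Halving the left option's delay doubles the predicted ratio again.
    print(response_ratio(2.0, 0.5, 1.0, 1.0))  # 4.0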


But equation 10 was soon refined again in the face of several challenges. For one thing, it turned out not to describe behaviour adequately in a range of conditions. For example, it does not account for individual differences in sensitivity to delay, that is, for how strongly delay affects behaviour, which can vary from one subject to the next. Equation 10 also states that the expected value E becomes infinite as the delay approaches 0, which seems impossible. To address issues like these, equation 10 was refined thus (Mazur 1987):

    E = V / (1 + k × d)    (11)

k is the discount rate, which may differ between individuals. Equation 11 also plausibly states that, when a reward is not delayed and thus d = 0, the expected value is identical to the un-discounted value V: E(d = 0) = V. Equation 11 is an expected-value function²¹ of the form E(d) = f(d) × V with the discount function

    f(d) = 1 / (1 + k × d)    (12)

Equation 12 is a so-called ‘hyperbolic’ discount function, here stated in its synchronic form. For diachronic delay discounting, the hyperbolic function is usually given as

    f(t_1, t_V) = 1 / (1 + k(t_V − t_1))    (13)

with t_V as the expected time of realization and t_1 the point in time at which the expected value in question is assessed.

The hyperbolic discount model is historically older than, and mathematically related²² to, the ‘quasi-hyperbolic’ one (Laibson 1997).

²¹ Cf. equation 3 in Section 5.1.
²² Cf. Appendix B.


Its synchronic function is typically given as

    f(d) = β × δ^d    (14)

with the delay d and two constant terms β and δ between 0 and 1. Because these terms characterize the quasi-hyperbolic model, its discount function f is sometimes called a ‘beta-delta’ function. It is supposed to combine the conceptual tractability of exponential models with the descriptive quality of hyperbolic functions. There is some controversy about the degree to which the beta-delta model is empirically superior to the hyperbolic one (Ainslie 2012; Berns, Laibson, and Loewenstein 2007). It often tends to give a better fit for empirical data (Loewenstein 1996, 1999; McClure, Ericson, et al. 2007; McClure, Laibson, et al. 2004) but it also has two free parameters (β and δ) rather than just one (k). This manuscript focuses primarily on exponential and hyperbolic models.
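For comparison, here are the three synchronic discount functions side by side. The parameter values are hypothetical, and the f(0) = 1 convention for the beta-delta function is an assumption commonly made in the literature rather than part of equation 14.

    import math

    def exponential(d, r=0.1):
        """Equation 7: f(d) = exp(-r * d)."""
        return math.exp(-r * d)

    def hyperbolic(d, k=0.1):
        """Equation 12: f(d) = 1 / (1 + k * d)."""
        return 1 / (1 + k * d)

    def quasi_hyperbolic(d, beta=0.7, delta=0.95):
        """Equation 14: f(d) = beta * delta ** d; immediate rewards are
        often treated as un-discounted, i.e. f(0) = 1 (an assumption)."""
        return 1.0 if d == 0 else beta * delta ** d

    for d in (0, 1, 5, 20):
        print(d, round(exponential(d), 3), round(hyperbolic(d), 3),
              round(quasi_hyperbolic(d), 3))
    # The hyperbolic and beta-delta factors fall quickly at short delays and
    # then flatten out, whereas the exponential factor falls at a constant rate.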

5.4.2 Comparing Exponential and Hyperbolic Models

Exponential models are sometimes regarded as inferior to hyperbolic ones in their ability to describe weak-willed choices. Relatedly, some authors claim that exponential discounting is rational whilst hyperbolic discounting is irrational (Greene and Sullivan 2015; Sullivan 2018). In the current section, we focus on the first claim.²³ Ainslie (1975, 1992) is often credited with arguing along those lines (e.g. Mele 2012, pp. 98–9; Bratman 1999a, p. 39; Green and Myerson 1993, p. 38; Elster [1979] 2013, p. 43). For instance, Ainslie says about what he calls ‘impulsiveness’ (1975, p. 471):

    When a choice is based on different amounts of the same reward at different delays, a temporary preference for one alternative could occur only if the delay function of the reward was more concave than an exponential curve.

In other words, Ainslie’s impulsive agent temporarily prefers a specific amount of a reward with a specific delay over a different amount of the same reward with a different delay. That is, at some times, they prefer the larger amount over the smaller amount; however, at other times, they have the reverse preference. As Ainslie states elsewhere, to fully account for impulsiveness, “there has to be a reversal of choice. Graphically, this is to say that the [discount curves describing] behavior as a function of time must cross one another” (p. 470). Figures 2 and 3 depict such crossings. In short, impulsiveness may be characterized by a reversal of preferences between a smaller, sooner, and a larger, later reward.

²³ We return to the second claim in Section 7.3.


Furthermore, Ainslie states that an exponential delay discounting model cannot account for such preference reversals and crossings of discount curves (cf. Greene and Sullivan 2015; Sullivan 2018). Only a “more concave” curve can do so. Such a curve, Ainslie continues to argue, is provided by a hyperbolic discount model.

Let us now return to Ainslie’s statement that a preference reversal can only occur if the discount function is steeper than that provided by an exponential function. Let us consider this claim for synchronic and diachronic delay discounting separately.

When applied to synchronic delay discounting, this claim is incorrect as it stands.²⁴ Whether a synchronic delay discount function can yield a preference reversal illustrated by a crossover of discount curves does not depend on its particular nature, whether exponential or hyperbolic. This claim can be proved mathematically.²⁵

Figure 4 Delay discount curves depicting pattern-coded rewards of various sizes. All panels show value (y-axis) over delay or time (x-axis). The top row shows discount curves for the same discount rate for all rewards, the bottom row shows curves for different discount rates for different rewards. The first and third columns depict graphs of exponential discount functions, the second and fourth columns depict those of hyperbolic ones. (a) Synchronic delay discount curves. Crossing points (preference reversals) only occur when discount rates differ, regardless of the discount function used. (b) Diachronic delay discount curves. Crossing points occur when the discount rates differ (bottom row) or, in the hyperbolic case, when the larger rewards are realized later than the smaller ones.

²⁴ Whether synchronic preference reversals are a good model for weakness of the will is a further and separate question. Here, we are concerned with the question of whether exponential or hyperbolic models are better at describing preference reversals. I have argued (cf. Section 5.2) that synchronic preference reversals are not, strictly speaking, preference reversals at all. A fortiori, I think that they are not a good model for weakness of the will as preference reversals either. According to a dissenting view stated by several authors, however, synchronic preference reversals may be irrational or even weak-willed (cf. Bratman 1999a, p. 38; Parfit [1984] 1987; Rawls [1971] 1999, § 64; Lewis 1946, p. 493). For further discussion of rationality, see Chapter 7.
²⁵ The proof is in Appendices B and C.


However, Figure 4a illustrates it as well: in the top two panels of the figure, the discount curves never cross, independently of whether the discount function is hyperbolic or exponential. In the bottom two panels, in contrast, the discount curves do cross, for both the exponential and the hyperbolic model. What accounts for the crossovers is not the exponential or hyperbolic discount function but the fact that the larger rewards are discounted much more steeply than the smaller rewards, in both the exponential and the hyperbolic case. This, in turn, is so because the discount rates differ between rewards.²⁶ For the exponential function, this is the constant term r; for the hyperbolic function, it is the term k. So we can describe synchronic preference reversals with an exponential model as well as with a hyperbolic one.

To be fair to Ainslie, though, it seems that we could read what he says as concerning the diachronic case only, so perhaps his claim should not be applied to the synchronic version at all. Let us therefore now consider diachronic delay discounting. Again, regardless of the discount function used, crossing points occur whenever the discount rates differ with reward size. Exponential models can account for this case just as well as hyperbolic models can, as illustrated in the bottom two panels of Figure 4b. Thus, preference reversals and crossings occur when the constant terms k or r vary with reward size. More precisely, this is the case exactly when later, larger rewards are discounted less steeply than smaller, sooner rewards. This can be shown mathematically.²⁷

Importantly, this pattern, known as ‘the magnitude effect’, has also been found empirically: agents actually do discount smaller rewards more than larger ones (Green, Myerson, and McFadden 1997; Green, Myerson, Oliveira, et al. 2013; Green, Myerson, and Ostaszewski 1999; Kirby 1997; Raineri and Rachlin 1993). That is, k or r is greater when the un-discounted value V of a reward is smaller. It is thus highly plausible to assume that the discount rates vary negatively with reward size.

However, there is one important difference between hyperbolic and exponential models in diachronic discounting, and this is likely the one that has led authors to favour the former so much over its rivals.²⁸ This is the fact that the hyperbolic function but not the exponential one can describe crossing points on the assumption that the discount rates are constant for rewards of different sizes. That is, the hyperbolic model but not the exponential one permits us to describe preference reversals even if we assume that all amounts of the same reward are discounted equally steeply. Ainslie explicitly makes this assumption (1975, p. 470).

²⁶ The discount rates may differ for a variety of reasons. For one thing, agents may discount different kinds of rewards with different rates: they may discount apples more steeply than oranges. For another, discount rates may differ with the number or size of rewards: agents may discount a hundred apples less steeply than ten apples (for this so-called ‘magnitude effect’, see below).
²⁷ The proof is in Appendices B and C.
²⁸ The hyperbolic discount functions also typically provide the best fit to empirical data, although this claim is contested (Ainslie 2012, p. 10; Berns, Laibson, and Loewenstein 2007; Bénabou and Tirole 2004; Loewenstein 1996).
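The point can also be verified numerically: with size-dependent rates, even a purely exponential model yields a diachronic crossing. All numbers below are hypothetical, with the larger reward realized later and, in line with the magnitude effect, discounted less steeply.

    import math

    V_SMALL, R_SMALL, T_SMALL = 5.0, 0.9, 10.0
    V_LARGE, R_LARGE, T_LARGE = 10.0, 0.3, 13.0

    def expected_at(t, v, r, t_reward):
        """Equation 8: E = exp(-r * (t_V - t)) * V."""
        return math.exp(-r * (t_reward - t)) * v

    for t in (0.0, 9.0, 9.9):
        small = expected_at(t, V_SMALL, R_SMALL, T_SMALL)
        large = expected_at(t, V_LARGE, R_LARGE, T_LARGE)
        print(t, "larger, later" if large > small else "smaller, sooner")
    # t 0.0 and 9.0 -> larger, later; t 9.9 -> smaller, sooner: the two
    # exponential curves cross, yielding a diachronic preference reversal.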


Figure 4b illustrates this: its top two panels show that a hyperbolic but not an exponential function can yield crossing points when discount rates are kept constant.

Let us conclude. The present section has examined a common view, voiced by Ainslie and many others. On this view, a hyperbolic discount model is better at describing weakness of will than an exponential model because it is able to describe crossing points, that is, preference reversals. However, as this section has argued, it seems that the hyperbolic model is not superior to the exponential one: both models are able to describe preference reversals on an assumption that is empirically plausible to make. This is the assumption that discount rates differ with reward size. Only on the empirically inadequate assumption that all rewards are discounted with the same rate, and only for diachronic discounting, does the hyperbolic approach fare better. If our concern is to find a descriptive model, as seems to be the case for Ainslie and many empirical researchers, then we should not hold on to an assumption which might be plausible as a normative requirement but is empirically inadequate. But once we abandon the assumption, an exponential model is in principle no less able to describe preference reversals than a hyperbolic one.

However, even if hyperbolic models were better suited to describe preference reversals than their exponential rivals, this might not give them a real advantage. As the following section will explain, understanding weakness of will as preference reversals has limits.

5.5 Preference Reversals Versus Weakness of Will

Preference reversals are neither necessary nor sufficient for weakness of will. Or so I shall argue in this section.

Consider first the claim that preference reversals are sufficient for weakness of will. I shall give two kinds of counterexamples to show that it is wrong. The first kind of examples are cases in which an agent seems to reasonably revise her preferences. The second kind are cases in which an agent seems to revise her preferences for no good reason but without being weak-willed.

First, consider reasonable revisions. Here is an example²⁹ for the diachronic case: imagine an agent who prefers a baby pram over a children’s bike now but, three years later, prefers a children’s bike over a baby pram. The agent changes their preferences over time, but this is due to the fact that their child is growing up and needs a pram now but a bike in a few years from now. The preference reversal seems entirely innocuous and has nothing to do with weakness of will.

²⁹ I thank Arif Ahmed for this example.


At this point, one might object that the case is inadequately described; once re-stated, the objection goes, it becomes clear that there is no preference reversal here. The parent prefers, at all times, a pram for an infant and a bike for a young child. So the agent does not reverse their preferences at all.

I acknowledge that, when the case is described in this way, there is no longer any preference reversal. The reversal has been explained away. However, this strategy could be put to use for almost any case in which there seems to be a preference reversal, including clear cases of weakness of will. Consider, for instance, a dieter who endorses a ban on candy but succumbs to temptation anytime they are offered some, and has the candy anyway. There are no preference reversals here, either, if we describe the case as follows: at any point in time, the dieter prefers a healthy lifestyle to an unhealthy lifestyle. Even when they eat candy, the dieter values a healthy lifestyle very much. At the same time, the dieter likes candy a lot. Whenever they are given a chance, they will choose and eat it. At all times, they prefer eating candy then over foregoing candy.

The upshot is that, in both cases, there is a preference reversal under the following descriptions:

• Initially, the parent prefers the pram over the bike; later on, they prefer the bike over the pram.
• At times, the dieter prefers a healthy lifestyle over eating candy; at other times, they prefer eating candy over a healthy lifestyle.

For both cases, one can envision a situation in which the two pairs of preferences clash: for the parent, this is a case in which they are offered a pram without any specification of whether it is for an infant or a young child. For the dieter, this is a case where they can either promote their health or indulge, but not both. Both agents now face the challenge of deciding which of the preferences takes priority in this dilemma. However, when filled in with greater detail, the descriptions make the reversal disappear:

• At all times, the parent prefers a pram for an infant then over a bike for an infant then, and a bike for a young child then over a pram for a young child then.
• At all times, the dieter prefers low blood sugar then to high blood sugar then, and enjoying candy then over foregoing candy then.

These descriptions do not conflict because the agents’ options are in each case relative to certain considerations and reasons (Davidson [1970] 1980c, pp. 37–9). Using fine-grained descriptions like these, we can remove any preference reversal.


However, this strategy also dissolves all hope of describing weakness of will as preference reversals at all, as even clear examples of such cases can be re-described. Indeed, this wiggle room for rationalization may contribute to problems like weak-willed behaviour, and I take it that this issue is at the heart of Davidson’s account of weakness of will.³⁰ On his view, the mistake a weak-willed person makes is precisely to assess the situation in an inadequate way along the lines sketched here. It is a failure to recognize the ‘right’ action simpliciter, rather than the right action relative to certain considerations and reasons.

So far, we have only considered diachronic preference reversals that are not sufficient for weakness of will. Let us now turn to the synchronic case:³¹ imagine that a parent tells us that they would prefer a baby pram over a children’s bike now but that they would prefer the bike over the pram a few years hence. When considering both options without delay, the parent would prefer the pram. When considering them with a delay of a few years each, the parent would prefer the bike. So they synchronically reverse their preferences. But clearly this is not a case of weakness of will. So synchronic preference reversals are not sufficient for weakness of will either.

Second, consider cases in which an agent revises their preferences for no good reason but without thereby appearing weak-willed. Again, these are counterexamples to the claim that preference reversals are sufficient for weakness of the will. As a child, I preferred white chocolate over dark chocolate. I liked the sweet white chocolate more, and when asked to decide between white and dark chocolate, I chose the former. However, over time, my taste buds and with them my preferences changed. I now prefer dark chocolate over white chocolate, I like it better, and I favour it when given a binary choice between the two kinds. Yet this preference reversal does not seem to be a case of weakness of the will.³² Thus a diachronic preference reversal is not sufficient for weakness of will.

Conversely, imagine I expect my young nephew to undergo a similar preference reversal over the course of his life. When asked whether I would rather buy white or dark chocolate for him, I would reply that I’d prefer to give him white chocolate when he is a child, and dark chocolate when he is in his 20s. Setting aside my own taste and making a decision on behalf of my nephew, I prefer and value white chocolate in three years over dark chocolate in three years, and I prefer and value dark chocolate in twenty years over white chocolate in twenty years. This is a synchronic preference reversal. However, it is not a case of weakness of the will. In short, it illustrates that synchronic preference reversals are not sufficient for weakness of the will either.

³⁰ Cf. Section 3.3.
³¹ The objection just discussed for the diachronic case applies, mutatis mutandis, to this case, as does its reply.
³² One might object that, because the circumstances have changed in a way I did not anticipate, there is no weakness of the will. But whether circumstances change is irrelevant for the claim that preference reversals are sufficient for weakness of will: on this view, if there is a preference reversal, then there is weakness of will.


Overall, then, diachronic or synchronic preference reversals are not sufficient for weakness of will.

Let us now turn to the claim that preference reversals are necessary for weakness of will. I shall give two counterexamples to refute it; that is, I suggest that there are two kinds of examples of weakness of will without preference reversals. If these are plausible, then it seems there can be weakness of will without preference reversals.

First, consider what I called ‘instantaneous’ cases of weakness of will:³³ an agent simultaneously endorses a standard and violates it. Common experience supports the claim that we sometimes act against our better judgement whilst endorsing it at the same time. Imagine a dieter who eats candy and at the same time claims that they ought to refrain from what they are currently doing. They are performing two actions³⁴ that pull in contrary directions in the pursuit of the agent’s self-interest (Spitzley 1992, p. 219). Another case in point is Bratman’s wine drinker, who tells his friend between sips that it would be best to abstain (Bratman 1979, p. 156). However, assume that we are committed to the view that preference reversals are necessary for weakness of will. Then we have to say either that the person in question is not weak-willed, that several preference reversals somehow occur in very fast succession (as a diachronic case), that the agent considers the options with differing delays (as a synchronic case), or that the agent’s utterance is not a free action after all (Aristotle, Nicomachean Ethics 1147a20). In contrast, it seems much more plausible to interpret the case as one of instantaneous weakness of will. But there is simply no room for this claim on a view that regards preference reversals as necessary for weakness of will.

Consider now the second class of counterexamples. These are examples of a failure to abide by a reasonable preference reversal. This case has both a synchronic and a diachronic aspect. Imagine a smoker makes a New Year’s resolution:³⁵ they will smoke up to and including December 31, yet they will not touch a cigarette on January 1 or any day after that. In other words, they resolve to reverse their preference: according to their plan, they will prefer smoking over non-smoking on December 31 but, on January 1, they will prefer non-smoking over smoking. Imagine that the smoker makes the resolution by giving a promise to themselves but does not communicate it to others. However, assume that the smoker continues to smoke on January 1. There is no change in behaviour: the smoker smokes every day in December, and continues to do so in January. There is no preference reversal either, as the smoker seems to prefer smoking to non-smoking in the old and in the new year. The smoker fails to reverse their preference as planned.

³³ Cf. Section 5.3; Mele calls them “core” cases of weakness of will (Mele 2012, p. 8).
³⁴ I assume that claiming something is an action.
³⁵ I thank Richard Holton for this example.


It seems very plausible, though, that the smoker is weak-willed: they resolved and wanted to quit but they failed to do so. We can imagine that they try and struggle to abstain but finally give in to temptation and light up. They may regret and condemn themselves for breaking their New Year’s resolution so readily. The smoker thus seems to exemplify a case of weakness of will without a preference reversal.

In conclusion, preference reversals are neither necessary nor sufficient for weakness of will. This has implications for approaches that use delay discounting theory to model weakness of will. For, whilst delay discounting theory can provide powerful models for preference reversals, these reversals do not map onto weakness of the will without qualification. As a consequence, some cases of weakness of will may not be described by delay discounting models at all. Therefore, an account of weakness of the will in terms of delay discounting does not meet the stringent requirements for conceptual analysis in philosophy as traditionally understood.³⁶ Of course, this need not worry readers sympathetic to a naturalist methodology or conceptual engineering, as many cases of weakness of the will do involve preference reversals, and vice versa. However, even these readers should be concerned about the next issue, which concerns cases of weak-willed preference reversals.

³⁶ Cf. Section 2.1.

5.6 Weakness of Will Concerning Immediate Rewards

Up to now, we have critiqued approaches that understand weakness of will as a preference reversal or use discounting theory to model such reversals. In this section, we shall consider a class of cases that are plausibly weak-willed preference reversals. As we have come to know delay discounting theory so far, we would expect it to account for these cases. We shall refer to them as ‘marshmallow cases’, after the presumably most prominent example in this category. The first half of this section introduces the cases; the second half explains why delay discounting theory, at least as we know it, cannot account for them.

In the cases of interest, an agent chooses a delayed reward over an immediately available one but later on changes their mind (Dasgupta and Maskin 2005; Mischel, Shoda, and Rodriguez 1989; O’Donoghue and Rabin 1999; Schelling 1984; Strotz 1955). Structurally, such choices are highly similar to the dieter’s case: the agent initially favours a larger, later reward over a smaller, sooner one, yet reverses this preference over time. In the special class of cases that are problematic for our approach, the earlier and smaller reward is available immediately.


Let me give three examples: savings, procrastination, and the marshmallow experiment. I shall take them in turn.

First, consider savings. Many human and non-human animals save commodities or money for future consumption. This behaviour seems to reveal a preference for a delayed reward over an imminent one: the agent foregoes some immediate pleasure in exchange for a future one. Thus the discounted value of the future reward should be greater than the un-discounted value of the present one. However, in some cases the agent does not wait for the envisioned occasion but uses their savings prematurely. For example, early in the year an employee might set aside money to buy Christmas presents but then spend it on a summer vacation instead (Dasgupta and Maskin 2005; Strotz 1955). Such cases are plausibly examples of weakness of will: the agents may judge or resolve to spend their savings in a certain way for the greatest benefit but then act differently when temptation calls.

Second, procrastination is commonly regarded as an example of weak-willed behaviour. It concerns negative rather than positive rewards: an agent needs to choose between an earlier, somewhat unpleasant or undesired chore, and a later but even more unpleasant one (cf. O’Donoghue and Rabin 1999). For example, a student may put off studying for tomorrow’s exam and then fail, or a patient may delay a doctor’s appointment until they become painfully ill. Presumably, the delayed negative reward is in each case greater than the immediate one. Therefore, the discounted negative value of the delayed reward would be greater than the un-discounted value of the imminent one. Accordingly, we would expect the agent to have a preference for the immediate chore or ordeal over the delayed one. Yet even if we can ascribe such a preference to the agent initially, at some point they clearly reverse it and act in avoidance of the smaller negative reward.

Third, situations like the marshmallow experiment seem to reveal weak-willed delay discounting. Partially because of this, these cases are of core interest to philosophical research into weakness of will. But what is more, philosophers frequently rely on a strand of empirical research from developmental psychology which has studied those cases extensively. This research strand has investigated children’s abilities to delay gratification and their possible strategies to resist temptation (Mischel and Ebbesen 1970; Mischel, Ebbesen, and Raskoff Zeiss 1972; Mischel, Shoda, and Rodriguez 1989). Philosophy aside, for a full understanding of how humans discount delayed rewards, one would hope empirical research into delay discounting and into delay of gratification to converge. However, this is conceptually impossible given orthodox discounting models.

To appreciate the issue, let us focus on a typical experiment (Mischel, Shoda, and Rodriguez 1989, p. 934): young children were left to wait for an unspecified amount of time (in fact, around 15 minutes) in a room by themselves, until the experimenter returned. They had been told that upon return they would receive a treat that they wanted, such as pretzels, cookies, or marshmallows.


discounting 99 less preferred reward. Researchers assigned children to different groups, studying various manipulations concerning the display of rewards, strategic methods to delay gratification, etc. Imagine a child initially resolves to wait. We can thus infer that, at the beginning of the waiting period, they prefer the larger reward (for instance, two marshmallows). Let us also assume that, in the group the child has been assigned to, the smaller reward—e.g. one marshmallow—is always on display and readily available, so the agent can reverse their choice and eat the one marshmallow at any time. Finally, let us assume that, after having waited for a few minutes, the child succumbs to temptation, summons the experimenter, and eats the smaller treat. Philosophers have regarded this case as a prime example of weakness of the will (Holton 2009; Levy 2011; Mele 2012). Using delay discounting theory, how can we describe it? Initially, the value of eating the smaller reward immediately is smaller than the expected value of eating the larger reward later on. This is what the child’s initial decision indicates. But at some point during the waiting period, the value of the smaller reward becomes greater than the expected value of the larger one. This is when the child gives in to temptation. Note that, according to delay discounting theory, the expected value of the bigger reward is smallest at the outset and grows larger the closer the agent gets to the expected point of realization. But if the value of the smaller reward was initially smaller than the expected value of the larger one—which continues to grow—, how is it possible that the agent reverses their choice? Delay discounting theory, at least in the form philosophers make use of it,3⁷ cannot account for marshmallow cases like this one. This can be shown mathematically,3⁸ Figure 5 illustrates it graphically, and we can informally explain it as follows. Classic discounting theory assumes that, the more delayed a reward, the smaller will be its discounted value E. In other words, the discount factor f will be smaller when the reward is discounted with a greater delay d, and vice versa: E(d1 ) < E(d2 ) and f (d1 ) < f (d2 ) iff d1 > d2 . In marshmallow cases, the agent initially (at t1 ) resolves to wait for a later, larger reward like two marshmallows. Let d1 be the delay (d1 = tR − t1 ). We can thus assume that the discounted value of this reward is greater than the (undiscounted) value of the smaller and readily available one: E(d1 , 2M) > V(1M). But later on (at t2 , with delay d2 = tR − t2 ), when this smaller reward is chosen after all, its (un-discounted) value must be greater than the expected value of the delayed reward: E(d2 , 2M) < V(1M). From this choice pattern, we can infer that E(d2 , 2M) < V(1M) < E(d1 , 2M) and therefore that f (d2 ) < f (d1 ) and d2 > d1 . But the reverse is true: d1 > d2 , the delay is greater at the beginning of the test. Hence, classic discounting models cannot account for such marshmallow cases. 3⁷ Described above in Sections 5.1–5.3.

38 The proof is in Appendix F.


[Figure 5: expected value (y-axis) plotted against time (x-axis) for the two prospects, with the points t1, t2, and tR marked.]

Figure 5 Marshmallow case. The relative values of two prospects (one marshmallow immediately, two marshmallows at tR) are shown. The smaller treat is always readily available and therefore not discounted, as illustrated by a flat horizontal line: E(1M) = V(1M) and f = 1. At t1, the child resolves to wait for the additional treat. They thus prefer two marshmallows over one: E(2M) > V(1M). However, at a later point in time t2, they succumb to temptation and eat the one marshmallow: E(2M) < V(1M). This implies that E(2M, t1) > E(2M, t2). This contradicts orthodox discounting models, which assume that a prospect is discounted less when its delay shrinks. Graphically, discount curves slope upwards or remain flat, yet any discount curve fitted to the three dots would require a downward slope between t1 and t2 (dashed).

In other words, using the delay discounting models described so far, we are unable to describe marshmallow cases, which involve a weak-willed choice between a delayed and an immediate reward. As a consequence, philosophers may wish either to exclude those cases from their account—which is undesirable, as they seem to be prime examples of weakness of will—or to consider more complex discounting models that can describe the challenging cases.
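The impossibility claim can also be checked numerically. The following sketch uses purely illustrative values and delays (one versus two marshmallows, fifteen and five time units of remaining delay); it confirms that no single discount rate, whether plugged into an exponential or a hyperbolic function, can fit the choice pattern of a marshmallow case. The formal proof is in Appendix F.

```python
import numpy as np

# Values and delays are purely illustrative.
V1, V2 = 1.0, 2.0        # value of one vs two marshmallows
d1, d2 = 15.0, 5.0       # delay remaining at t1 (start) and at t2 (later): d1 > d2

# A marshmallow case needs E(d1, 2M) > V(1M) (resolve to wait at t1)
# and E(d2, 2M) < V(1M) (give in at t2).
discount_functions = {
    "exponential": lambda k, d: np.exp(-k * d),
    "hyperbolic": lambda k, d: 1.0 / (1.0 + k * d),
}
ks = np.linspace(1e-4, 10.0, 100_000)   # sweep over candidate discount rates k
for name, f in discount_functions.items():
    fits = (V2 * f(ks, d1) > V1) & (V2 * f(ks, d2) < V1)
    print(name, "can fit the pattern for some k:", bool(fits.any()))
# Both lines print False: since d2 < d1, any decreasing discount function
# has f(d2) > f(d1), so the two inequalities cannot hold together.
```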

5.7 Conclusion

To what extent, if any, is delay discounting theory suited to describe weakness of will? In reply, this chapter has argued that delay discounting theory is a powerful tool for describing certain kinds of preference reversals. These are, in turn, the best models for weakness of will within an econometric framework of agency. Importantly, whether or not discounting theory provides a good model for weakness of will does not depend on the shape of the discount function—be it exponential or hyperbolic—but on variations of the discount rates with the size of given rewards. However, as we have seen, using delay discounting theory to describe weakness of will faces two challenges. First, within an economic framework of human agency, weakness of will is best understood as a kind of preference reversal; yet preference reversals are neither necessary nor sufficient for weakness of will. Second, the discounting models that we have discussed cannot account for weak-willed preference reversals in what I have called 'marshmallow cases'.

The second challenge can be met by more sophisticated models of delay discounting, as I shall argue in the next chapter. The first challenge, though, cannot entirely be resolved. 'Weakness of will' and 'preference reversal', as we have understood them, do not have the same scope. However, there is a substantial and, I believe, large overlap: weak-willed delay discounting. These cases are weak-willed as characterized in Chapter 2. Because weakness of will is a defect, they do not include non-defective preference reversals, and Chapter 7 provides further details on how weak-willed delay discounting may be defective. Weak-willed delay discounting also does not include synchronic or instantaneous weakness of will that does not involve a synchronic preference reversal. However, although these cases are conceivable, we may wonder how widespread they are. For one thing, most if not all synchronic cases do seem to be extended in time and thus may actually be diachronic; for example, weak-willed actions do not occur instantaneously. Nevertheless, in focusing on weak-willed delay discounting from now on, we are setting aside some cases that are at least conceivable. Paying this price enables us to draw on a large and growing literature from the empirical sciences as well as on a philosophical approach to weakness of will that invokes biases. This, in turn, enables us to develop a new understanding of weak-willed actions and of strategies to target them. As I hope will become clear, this is well worth ignoring some cases on the margins.


PART III

SCIENCE MEETS PHILOSOPHY

Philosophical accounts of weakness of the will and delay discounting theories from behavioural science both target the same phenomenon and problem. The remainder of this monograph connects both approaches, thereby developing new suggestions about how weak-willed delay discounting can be understood, criticized, and addressed in practice. In doing so, it also aims to showcase a piece of interdisciplinary research and the advantages of this methodology.

The previous Part II ended in aporia: it found a number of limitations in philosophical delay discounting theories.1 To address them, Chapter 6 begins by introducing recent discounting models primarily to a philosophical readership. The key claim of these models is that delay discounting is due to uncertainty, notably uncertainty about when and if future events will unfold as expected. From a philosophical perspective drawing on these models, the weak-willed actions they describe can be understood as biased actions. That is, they are actions arising from a specific cognitive bias, viz. the tendency to discount delayed benefits with uncertainty about whether and when the benefits materialize. Or so I shall argue.

Cognitive biases and behaviour resulting from them have been the object of much recent research in both philosophy and science. A contested research question is under what circumstances, if any, they are irrational. Chapter 7 tackles this issue with a focus on the cognitive bias determining weak-willed actions. It begins with a brief introduction to rationality, primarily for readers unfamiliar with the philosophical literature, which it then applies to the case at hand. I argue that weakness of will is irrational according to popular accounts of rationality. A fortiori, weak-willed actions resulting from a cognitive bias to discount delayed benefits are irrational, too. Nevertheless, as will become clear, the challenge remains to explain why, exactly, they are irrational. This is so because it is difficult or even impossible within the scope of this book to determine the conditions under which delay discounting is irrational.

If those biased, weak-willed actions are irrational, what can and should we do about them? Chapter 8 outlines answers for individuals as well as societies.

1 Cf. Sections 5.4–5.7.


Cognitive biases are challenging to address because they are often beyond the ken and control of both the agent and third parties. However, it is possible to avoid or exploit them, most notably by adjusting the environment. More specifically, the particular cognitive bias that is our core interest in this book can be targeted by changing the size or incentives of delayed benefits, delays, uncertainties, or the agent's individual sensitivity towards them.


6 Describing Weakness of Will

Ever since it was discovered that agents discount future rewards and benefits in a systematic way, research has discussed possible explanatory mechanisms for it. The present chapter discusses just one such approach to explaining delay discounting. It can be seen as an extension of the exponential discounting model. Like the exponential model, it suggests that a decision-maker discounts the value of a reward with its delay because of the gains they forego by not making use of the reward from now until they get it.1 It goes beyond the exponential model in claiming that, because realization of the delayed benefit is uncertain, the discount factor is not an exponential but a hyperbolic function of the actual delay.

In a nutshell, the idea is this. Discounting the value of a future reward or benefit with its delay seems plausible given the uncertainty of the future. In particular, possible hazards might interfere and prevent the realization of the reward or benefit. Therefore, some authors have suggested that delay discounting could be explained by hazards. In the following Section 6.1, I present a suggestion along those lines by Sozou (1998). In the subsequent Section 6.2, I discuss a somewhat more complex model that builds on it (Dasgupta and Maskin 2005). To the best of my knowledge, neither of the two has so far received any attention from philosophers. Therefore, Section 6.3 explains how the economic models may support a philosophical account of weak-willed delay discounting. Finally, Section 6.4 argues that weak-willed actions resulting from it can be understood as biased, i.e. as determined by a cognitive bias.

6.1 Hazards (Sozou's Suggestion)

Waiting for a delayed reward or benefit always involves some probability of not obtaining it. For instance, while waiting for the reward or benefit to materialize, it might disappear or one might simply die. When that probability is known (e.g. 0.5 or 1 in 10), it is often called a 'risk'. When the probability is unknown, it is usually called an 'uncertainty'. Delays may involve both known and unknown probabilities. Many authors have pointed out the similarity of probability and delay discounting (Chakraborty, Halevy, and Saito 2020; Green and Myerson 2004; Halevy 2008; Prelec and Loewenstein 1991).

1 Cf. Section 5.4.1, Appendix C.

Consider a probability to obtain a reward or benefit that is constant per unit of time. Then, other things being equal, the smaller that probability is, the longer an agent has to wait for that benefit to materialize. Conversely, the more delayed a benefit is, the less probable it is for the agent to receive it at any given moment. There is empirical evidence for a positive correlation of delay and probability discounting, e.g. risk-seeking individuals are more patient, and vice versa (Myerson et al. 2003; Reynolds et al. 2003; Richards et al. 1999). Indeed, it seems plausible that a probability of not obtaining the delayed benefit or reward might determine its discounting (Candolin 1998; Houston and McNamara 1986; Houston, Kacelnik, and McNamara 1982; Iwasa, Suzuki, and Matsuda 1984; Rotter 1954). Discounting a delayed benefit or reward in the face of a looming hazard might even be rational.2

For illustration, imagine that you wait for Eve in paradise. She has promised to bring you an apple. While you wait, there is always a certain constant probability that her looting the apple tree will be discovered by an angel and she will get kicked out of paradise—and you will never get your apple. Imagine that the daily probability is a constant p. For instance, imagine that, every day, the angel throws a fair die. If six comes up, he will check on the apple tree that very day, Eve will be thrown out of paradise, and you will never get the apple. In that case, p = 1/6, the probability of six coming up when a fair die is thrown. The probability p that you lose the apple for good is thus 1/6, or about 17%, every day.

Then, it seems, it makes sense to discount the value of the apple accordingly. If the value of an apple today is V, the value of an apple tomorrow is at most V − V/6. More generally, it is V − p × V, which in turn is equal to (1 − p) × V. In other words, the value of an apple is its un-discounted value multiplied by the probability of not losing it (i.e. (1 − p)).

Because the hazard may occur every day, the probability of losing the apple for good is larger in the further future than in the closer future. That is, it is more likely that you lose the apple within the next two weeks than within the next two days. To 'survive' the next two weeks is much less likely than to 'survive' the next two days: in the former case, the angel will throw his die fourteen times, in the latter, he will only throw it twice. The probability p is thus sensitive to the passage of time.

Mathematically, we can define a function that provides us with a value for p for any delay. For instance, let us call s(n) the 'survival function' which tells us, for any day n, the probability of making it to day n without losing the apple for good. In our example, the probability of making it to tomorrow is 1 − 1/6, or about 83%, because if six comes up on the fair die, then you will not make it to tomorrow. The probability of making it to the day after tomorrow is (1 − 1/6) × (1 − 1/6), i.e. roughly 69%, the probability of surviving three more days is (1 − 1/6) × (1 − 1/6) × (1 − 1/6), or about 57%, and so on.

2 Whether discounting is rational is a question discussed in Chapter 7.


In general, the probability of surviving until day n is (1 − 1/6)^n. If we abbreviate the survival rate (1 − 1/6) = (1 − p) with 'q', then we can write the survival function s(n) as s(n) = q^n. We can then calculate the expected value E(n) that an apple has for us on some future day n by multiplying the un-discounted value of the apple, V, with the probability of surviving until n. In other words, this probability is our discount factor f.3 The probability of surviving until n is q^n. The expected value for a reward or benefit delayed until day n is, accordingly, E(n) = f × V = s(n) × V = q^n × V.

So far, we have been concerned with discrete delays, such as days. But it is possible to generalize our rationale to continuous time with infinitesimally small increments d. We thus seek an expression for s(d) rather than s(n). This approach has been developed by Sozou (1998).4 Very roughly,5 we can write functions like s(d) = q^d as exponential functions of the form s(d) = e^(d × ln q). Sozou thus derives the following discount function for s(d):

s(d) = e^(−rd)    (15)

with a constant discount rate r. In other words, Sozou's s(d) is a familiar exponential discount function.6 Accordingly, the delay-discounted value is

E(d, V) = s(d) × V = e^(−rd) × V    (16)
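Spelled out, the step from the daily survival rate q to equation 15 is just a change of base, with the discount rate defined as r = −ln q; Appendix D gives the details. In outline:

```latex
s(d) \;=\; q^{d} \;=\; e^{\ln\left(q^{d}\right)} \;=\; e^{d\,\ln q} \;=\; e^{-rd},
\qquad r := -\ln q > 0 \quad \text{(since } 0 < q < 1\text{)}.
```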

In sum, Sozou's model interprets the discount function as a survival function: a delayed reward or benefit is discounted with the probability of surviving until its realization. The mathematical expression of the survival function is precisely the exponential one familiar from the literature. Sozou's model thus provides a rationale and an explanation of the exponential model. It explains an agent's discounting of future rewards or benefits by their sensitivity to a hazard that might occur while the agent is waiting for the reward or benefit to materialize.

However, in its present form it does not explain the kind of preference reversals we are interested in, such as the ones observed in marshmallow cases. These are cases in which an agent seems to discount a delayed reward or benefit more, not less, as the delay elapses.7 For, on Sozou's model, the expected value and the likelihood of survival grow as the delay elapses. That is, the closer in time the agent gets to the point when the reward or benefit materializes, the smaller the probability of a hazard occurring. This is because the time at which the reward or benefit materializes is fixed.

3 Cf. equation 1, Section 5.1.
4 See Appendix D for details.
5 Cf. Section 5.4.1 for more details.
6 Cf. Section 5.4.1, Appendix C.

7 Cf. Section 5.6.


Thus, for every day that the agent has 'survived', the probability of the hazard occurring on that day reduces to zero, and the remaining delay becomes smaller.

For example, imagine that Eve tells you: "I need nine more days to finish an elaborate apple pie but I shall bring you your apple on the tenth day." Assume that Eve is absolutely true to her word. Thus you know that you will get the apple on the tenth day (n = 10), provided Eve and you 'survive' until then. Initially, the probability of surviving is s(n) = s(10) = (1 − 1/6)^10, or roughly 16%. The expected value of an apple delayed by ten days and threatened by a die-throwing angel is thus just 16% of its un-discounted value. But now imagine that you do, in fact, survive. After one day, you need to 'survive' only nine more throws of the die, thus your probability of survival is s(9) = (1 − 1/6)^9, or 19%. After one additional day, you need to 'survive' just eight more days and your probability of survival grows to s(8) = (1 − 1/6)^8, i.e. 23%. And so on. On the ninth day, your probability of survival has grown to 83%. The expected value E of the prospect has grown accordingly.

The same considerations apply to marshmallow cases: the agent initially chooses the delayed option, whose expected value continues to grow as the delay elapses. Based on Sozou's model, then, the agent should not reverse their preferences as they do. In order to account for these kinds of cases, we need to develop the model somewhat further.
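The apple-pie example can be checked with a few lines of code. This is merely the arithmetic above, not Sozou's full continuous-time model:

```python
# Daily hazard: the angel throws a fair die; on a six the apple is lost for good.
p = 1 / 6                  # daily hazard probability
q = 1 - p                  # daily survival rate

def survival(days_left):
    """Probability of 'surviving' the remaining days without losing the apple."""
    return q ** days_left

V = 1.0                    # un-discounted value of the apple
for days_left in (10, 9, 8, 1, 0):
    print(f"{days_left:2d} days left: E = {survival(days_left) * V:.2f}")
# Output climbs monotonically: 0.16, 0.19, 0.23, 0.83, 1.00. On this model,
# the expected value only grows as the delay elapses, so a waiting agent
# should never reverse course.
```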

6.2 Uncertain Hazards (Dasgupta and Maskin's Model)

The model by Sozou (1998) explains exponential discounting by an agent's sensitivity to a hazard. It assumes that it is uncertain whether and when the hazard occurs. The present section considers a suggestion by Dasgupta and Maskin (2005) that extends this approach. It further assumes that the time at which the reward or benefit materializes is uncertain as well. That is, the model explains an agent's discounting of a delayed reward or benefit with the uncertainty about when it will be realized, if at all.

To illustrate, imagine again that Eve has promised to give you an apple in ten days. Sozou's model gives you the expected value of that apple, taking into account the hazard, i.e. that Eve may get kicked out of paradise and you lose the apple for good. In addition, Dasgupta and Maskin's model also assumes that you might get the apple earlier or later than expected, e.g. tomorrow or in two weeks. Thus, one of the model's key assumptions is that the agent will most likely receive the anticipated benefit or reward at some expected point in time—but this is not entirely certain; there is a chance that they will get it earlier, or later. The model accounts for this by specifying, for any point in time t, a certain probability density of receiving the benefit or reward. Assume that the probability density for realization at any instant is q. The probability of receiving the benefit or reward by a point in time t is simply t × q. For instance, imagine that Eve is destined to bring you the apple on any of the following four days. The probability density q for realization is thus 0.25/day. The probability of receiving the apple by the second day, i.e. with a delay of at most two days, is therefore q × 2 days = 0.25/day × 2 days = 0.5, that is, 50%.

So far, we have been assuming with Sozou that q is constant. In other words, we have assumed that the probability density is the same for any delay, i.e. realization is equally likely for any realization time. Let us suspend this assumption. For example, we could imagine that Eve cannot meet you on the first and third day at all but is equally likely to appear on the second as on the fourth day. Then the probability density is 0.5/day for the second and fourth day and 0/day on the other two days.
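As a discrete sketch of this bookkeeping, where the two densities are the ones just described for the four-day example:

```python
import numpy as np

# Probability (per day) that Eve arrives on each of the four days.
uniform = np.array([0.25, 0.25, 0.25, 0.25])
gappy = np.array([0.00, 0.50, 0.00, 0.50])  # only day 2 and day 4 possible

# Probability of having the apple by day 2, i.e. with a delay of at most two days:
print(uniform[:2].sum())  # 0.5
print(gappy[:2].sum())    # 0.5
```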

We can generalize further. Let us assume that q is distributed according to a probability density function Q. Q maps points in time to probability densities; it is thus a function of t: Q(t). Figure 6 illustrates how probability densities can be distributed according to such a function.

[Figure 6: probability density Q(t) (y-axis) against time t (x-axis), peaking at T, with shaded areas marking the probabilities of early and late realization.]

Figure 6 Example of a probability density function Q. The x-axis represents points in time, t, the y-axis probability densities Q(t). The graph shows the distribution of the probability density that the agent receives a benefit or reward at a time t. The probability density is highest at T. The total probability of obtaining the benefit or reward is 1, that is, the agent will certainly receive it—it is just uncertain when they will receive it. The probability that they receive it earlier than anticipated (before T) is represented by the light grey area under the curve, the probability that they will receive it later than anticipated is represented by the dark grey area under the curve.

Dasgupta and Maskin's model is concerned with 'classic' choices between a smaller, sooner and a later, larger benefit. More specifically, it proceeds from three assumptions:

1. The agent considers two prospects of unequal value. In other words, the un-discounted value V is larger for one reward (B) than for the other one (A): VA < VB.
2. The agent anticipates the time at which they receive each of the two rewards. Specifically, for each prospect, there is a point in time T at which they most likely expect to receive the rewards and benefits, TA and TB. At T, the probability of receiving the reward peaks (cf. Figure 6).
3. The more valuable reward is likely to arrive later than the less valuable reward. That is, TA is earlier than TB (TA < TB). More precisely, the probability density q1 for receiving the earlier reward earlier than anticipated is larger than or equal to the probability density q2 of the more valuable reward.

Consider now two simple cases: the case in which the expected reward is realized at time T, and the case where it is realized earlier or later. By integrating a probability density function over some interval d, we can calculate the probability that the reward and benefit in question is realized in this interval d. In this way, we can calculate the probability for early, late, or timely realization.8

We distinguish the probability of receiving the reward and benefit at the anticipated time T from the probability of receiving them at some other point in time t (t ≠ T). The model accounts for the fact that T is the anticipated time of realization by stipulating that the probability for realization at T is larger than zero. In other words, there is a so-called probability atom 'at' T. Accordingly, we can also distinguish the expected value of receiving the reward and benefit at the anticipated time T and the expected value of receiving it at some time t ≠ T, that is, earlier or later than anticipated. Let us call the former E(T) and the latter E(t). Then one has:

E = E(T) + E(t)    (17)

Building on Sozou's model,9 we can express E as a product of the un-discounted value of the reward, V, and a survival function s, which provides us with a probability as a function of delay t. Replacing E accordingly, equation 17 becomes:

E = E(T) + E(t) = s(T) × V + s(t) × V    (18)

Because Dasgupta and Maskin's model allows for the probability of receiving the reward and benefit earlier or later than anticipated, the survival function s(t) accounts for this probability in addition to the probability of survival, i.e. the absence of the hazard.

To appreciate this point, consider the logical space of possible events during the delay. Imagine that an agent is waiting for a delayed reward or benefit. At any point in time, any of the following three events could happen: (i) the agent receives the reward or benefit, (ii) a hazard occurs and the agent loses the reward or benefit forever, or (iii) the agent just continues waiting. For instance, imagine you are sitting in paradise, waiting for Eve to give you your apple. At any point in time, any of the three could happen: (i) Eve finally shows up and gives you the promised apple, (ii) the angel checks on the tree, discovers Eve's looting, she is kicked out of paradise, and you never get your apple,10 or (iii) not much happens—you just continue waiting.

In addition, the model assumes that the two events (i) and (ii) are independent of one another. That is, the hazard occurs independently of the realization of the reward or benefit, and vice versa. The model assumes that agents discount delayed benefits with the probability of either of these two events, that is, they discount with both the probability of not losing the benefit and with the probability of receiving the benefit (recall that these two are not the same). Let 'a' denote the probability of receiving the benefit and 'b' the probability of no hazard occurring. According to probability theory (Hájek 2012), the probability of two independent events occurring is the product of their respective probabilities.11 We can thus calculate the expected value E of a delayed benefit as

E = a × b × V    (19)

8 Appendix E specifies detailed expressions for the respective probability density function Q.
9 Cf. Section 6.1.

V is the un-discounted value of the reward. Both probabilities a and b are in turn functions of the time t with which realization is delayed. We can thus express equation 19 as:

E(t) = a(t) × b(t) × V    (20)

Combining equations 18 and 20 gives us:

E = a(T) × b(T) × V + a(t) × b(t) × V = (a(T)b(T) + a(t)b(t)) × V    (21)

In sum, equation 21 gives us the expected value E as a function of the un-discounted value V, the probabilities of receiving the benefit and reward at the anticipated time T or some point in time t earlier or later than T, and the probabilities of no hazard occurring.12

Crucial for our purpose here is that this model can describe preference reversals even in cases when the earlier reward or benefit is available immediately, such as in the marshmallow experiment. Let us consider this in detail. A preference reversal occurs at some point in time t∗ at which the agent is indifferent between the two prospects. t∗ has been called the 'indifference point'.13 In the case we are particularly interested in, the agent prefers the larger and later benefit at some point in time t1 before t∗ (t1 < t∗) but prefers the smaller and earlier benefit after t∗ (t∗ < t2). For example, at t1 a weak-willed child in the marshmallow case prefers to receive two marshmallows but at t2 they prefer to have just one. Here, reward A is one immediately available marshmallow and reward B is two delayed marshmallows. Mathematically, it can be shown that EA(t1, VA) is smaller than EB(t1, VB) but that EB(t2, VB) is smaller than EA(t2, VA).14 In other words, the expected value for A is smaller than the expected value for B before the indifference point, and the opposite is true after the indifference point. A preference reversal occurs at the indifference point t∗.

How can we understand this informally? The longer an agent has waited for a reward or benefit to arrive, the closer they get to the anticipated time of realization, T. However, the longer the agent has been waiting and the closer they get in time to the anticipated point of realization, the smaller is the probability of early realization (that is, realization at some point t < T). But if the probability of early realization shrinks, so does the expected value of the prospect in question. As the latter is proportional to the un-discounted amount of value V, this effect is larger for the more delayed reward. For, the un-discounted value of the more delayed reward is larger than that of the earlier reward (VB > VA). This is why, at some point in time t∗, the agent becomes indifferent between the two prospects, and reverses their preference.

For instance, in the marshmallow experiment, the probability of getting a second marshmallow earlier than anticipated decreases the longer the child has to wait. In contrast, the value of the single marshmallow does not decrease because it is available immediately. There is no expected value derived from obtaining it earlier than anticipated. At some point in time t∗, then, the discounted value of the two delayed marshmallows eventually falls below that of the single marshmallow. This is why the child reverses their initial resolve to wait, and succumbs to temptation at t∗.

In sum, Dasgupta and Maskin's delay discounting model describes and thereby explains preference reversals with the agent's sensitivity to uncertainty. The next section argues that this proposal provides us with a possible mechanism of weakness of the will.

10 In the unlikely case of (i) and (ii) occurring at the very same point in time, (ii) is assumed to take priority, that is, the reward or benefit is lost for good.
11 That is, for two independent events A and B, P(A ∧ B), the probability of them both occurring, is the product of P(A) and P(B), i.e. the product of the two probabilities for each of them occurring: P(A ∧ B) = P(A) × P(B).
12 Appendix E contains further details about this equation, especially functions a and b.
13 Cf. Section 5.2.
14 The detailed proof is given in Dasgupta and Maskin (2005, pp. 1295–6).
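The mechanism just described can be made concrete with a small simulation. This is a deliberately simplified sketch, not Dasgupta and Maskin's own parametrization: the uniform early-arrival density q, the probability atom p_atom at the anticipated time T, and the hazard rate r are all invented for illustration, and the hazard is handled Sozou-style.

```python
import numpy as np

# Reward B (two marshmallows, anticipated at T) may arrive early with a
# uniform density q per minute over the remaining wait, and arrives exactly
# at T with probability p_atom; a hazard at rate r discounts all prospects.
V_A, V_B = 1.0, 2.0               # one marshmallow now vs two at T
T = 15.0                          # anticipated realization time (minutes)
q, p_atom, r = 0.04, 0.4, 0.05    # invented parameters (q * T + p_atom = 1)

def expected_value_B(t):
    """Expected value of waiting for B, for an agent still waiting at time t."""
    remaining = T - t
    # early arrival at some point in (t, T), integrated against the hazard:
    early = q * (1 - np.exp(-r * remaining)) / r
    # arrival exactly at the anticipated time T:
    on_time = p_atom * np.exp(-r * remaining)
    return V_B * (early + on_time)

for t in np.arange(0.0, 15.1, 2.5):
    choice = "wait for B" if expected_value_B(t) > V_A else "take A now"
    print(f"t = {t:4.1f}  E_B = {expected_value_B(t):.3f}  -> {choice}")
# E_B starts above V_A = 1 and falls as the chance of early arrival drains
# away; at some indifference point t* the agent reverses their preference.
```

With these numbers the reversal occurs between t = 7.5 and t = 10: the agent initially resolves to wait and later gives in, exactly the pattern the classic models could not fit.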

6.3 Uncertainty Processing as a Mechanism of Weakness of Will

A recent approach from the economic and empirical literature claims that delay discounting is determined by uncertainty processing (Dasgupta and Maskin 2005; Fehr-Duda and Epper 2012; McGuire and Kable 2015). The previous section has presented a model and rationale along those lines. The present section proposes that uncertainty processing may also be a mechanism of weakness of the will.

The suggestion is this. Temporal delay always involves some uncertainty about when, if at all, the anticipated event will happen. When agents discount an expected benefit or reward, they are implicitly responding to this uncertainty. This mechanism determines some cases of weakness of will.

For example, imagine a child in a marshmallow case initially resolves to wait but then succumbs to temptation. The present proposal explains this as follows: from the beginning, the child is uncertain about when and whether the experimenter will return with the second marshmallow. Upon leaving, the experimenter tells the child that they will be away "for a while"; "sometimes, I'm gone a long time" (Mischel and Ebbesen 1970, p. 332). In fact, the experimenter will return after fifteen minutes. But the child does not know this. They have some vague expectations based on prior experience, although they may not consciously deliberate about them or make a precise prediction about the expected waiting time. They may trust that the experimenter will return very soon, or they may seriously doubt that they will come back at all.15

As the delay elapses, the child updates their belief about when the promised treat will arrive. Initially, they could be relatively optimistic about receiving the second marshmallow soon. But with every second they have to wait, the hope to receive the larger treat at that time is disappointed. As the moment elapses, the probability of receiving the second marshmallow at that point in time reduces to zero. The expected value of the prospect is reduced accordingly. While the child keeps waiting and updating their expectations in this way, the probability of receiving the second marshmallow early becomes smaller, and consequently the expected value of the prospect shrinks as well. Eventually, the child discounts the value of two marshmallows so much that it is smaller than the value of the lesser reward. At this point, impatience gets the upper hand and the child consumes the single marshmallow.

The crucial difference between this way of understanding weak-willed discounting and the one philosophers have invoked so far16 is the following. According to the latter, the child discounts the expected value of the second marshmallow exclusively with the delay. As this delay shrinks, the expected value grows. According to the proposed model, though, discounting is primarily determined by uncertainty. However, as delays naturally involve uncertainty, the uncertainty is in turn determined by delay.

15 We discuss individual differences further below.

16 Cf. Chapter 5.


This proposal is not reductive: delay does not reduce to uncertainty, or vice versa. Instead, the proposal extends the existing account by introducing uncertainty as an additional factor that is in turn determined by time. As philosophers have framed weak-willed discounting so far, how much agents value a reward or benefit depends on how delayed it is, and agents may undervalue delayed rewards or benefits so much that they perform a weak-willed action. In contrast, the present proposal claims that how much agents value a reward or benefit depends on when and whether they expect to receive it, and they may inaccurately guess when or whether the reward or benefit will materialize, especially as they update their expectations during the delay. Accordingly, they may undervalue a delayed reward or benefit, and perform a weak-willed action.

We can further specify this proposal on the sub-personal level in different ways. Here are two examples. The first approach builds on dual-process models from the behavioural sciences. On this view, several mental processes may compete and need to be integrated prior to decision-making, and a self-control process is required to monitor and adjudicate between them (Haas 2018; Levy 2011; Sripada 2014). This might happen in either of two ways: on the one hand, a control process may regulate or block impulses from other networks like the valuation system. On this view, it might permit or prevent an impulse reacting to uncertainty from influencing action. On the other hand, self-control processes might be part of the valuation system and integrate signals about uncertainty into the decision-making process. The prefrontal cortex has been identified as a neural correlate of cognitive control (Casey et al. 2011; Figner et al. 2010).

Second, opportunity-cost models provide an alternative description of delay discounting on the sub-personal level. On this view, the agent waiting for a delayed reward or benefit foregoes a number of opportunities, such as the opportunity to enjoy an immediately available reward or benefit. Foregoing these opportunities constitutes a cost. The brain estimates such opportunity costs, and updates them dynamically during the delay (Dasgupta and Maskin 2005; Kurzban et al. 2013; McGuire and Kable 2015). The longer the agent waits, the greater the opportunity costs become. The expected value of the delayed option relative to the immediate one thus changes over time. If the value of the delayed reward or benefit falls below that of the immediate one, the delayed reward or benefit is no longer worth the costs. In this event, the agent stops waiting.

For instance, when waiting for the experimenter's return, the child in the marshmallow experiment constantly updates the expected value of the two delayed marshmallows with the expected remaining waiting time and compares it to the value of eating the immediately available marshmallow. Given the uncertainty in the environment, the expected value cannot be precisely determined at any point in time and therefore changes as the delay elapses. Initially, the child chooses to wait, thinking that the costs of foregoing the immediately available marshmallow are greatly outweighed by the benefit they will derive from the delayed and greater treat. However, over time the costs of waiting climb higher and higher. The child eventually realizes that the experimenter has still not returned as expected, and that the costs of waiting any longer would outweigh the benefits derived from the two marshmallows. At this point, the child decides to reverse their initial choice.

According to the present proposal, then, how an agent discounts a future benefit depends on uncertainty about not obtaining that benefit, or receiving it earlier or later than expected. This implies that, ceteris paribus, an agent will discount a reward or benefit more steeply with its delay in a more uncertain environment than in a less uncertain one. In addition, agents differ in their personal experience and prior expectations about the rewards or benefits and the delay. One person will discount the same reward or benefit more with its delay than another person in the same situation, other things being equal. Let us consider each of these claims in turn.

First, discounting varies between individual agents. On the view advocated here, individual differences in delay discounting vary with individual differences in sensitivity towards uncertainty, other things being equal. In the context of the marshmallow task, this means that two children who differ in their sensitivity to uncertainty in an environment will also differ in their discounting and, in turn, in their waiting time before they succumb to temptation. Individual differences in discounting, and their links to cognitive and academic competence (Duckworth, Tsukayama, and Kirby 2013; Shoda, Mischel, and Peake 1990; Watts, Duncan, and Quan 2018), health (Duckworth, Tsukayama, and Kirby 2013), wealth, and public safety (Moffitt et al. 2011), are well documented.

More specifically, according to the present proposal, agents who are averse to uncertainty will discount delay more steeply. Conversely, agents who are less averse to uncertainty are also less averse towards delay. There is some evidence that may support this claim: risk-averse individuals are less patient, and vice versa (Fehr-Duda and Epper 2012; Reynolds et al. 2003; Richards et al. 1999).17 For instance, in one lab study, experimenters measured participants' indifference points and certainty equivalents for two delayed prospects and a risky one, respectively (Epper, Fehr-Duda, and Bruhin 2011). After the task, some of the decisions were randomly selected and implemented, so the participants made actual rather than hypothetical choices. Modelling utility functions using regression analysis, the authors found that an individual's parameter for risk-taking was a highly significant predictor of their attitude towards delay. That is, the more averse a participant was towards risk, the more impatient they were for delays. This result indicates that probability weighting determines hyperbolic discounting.
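As a toy rendering of the opportunity-cost story above, with all numbers invented for illustration: the agent keeps waiting only while the delayed treat, net of estimated waiting costs, still beats the immediate one.

```python
# All numbers are invented for illustration.
immediate, delayed = 1.0, 2.0   # value of one vs two marshmallows
cost_per_minute = 0.08          # opportunity cost of each further minute of waiting

def keep_waiting(estimated_minutes_left):
    # Wait only while the delayed treat, net of the expected waiting costs,
    # still beats eating the single marshmallow right away.
    return delayed - cost_per_minute * estimated_minutes_left > immediate

# If the estimate of the remaining wait grows as time passes, the rule
# eventually flips from waiting to taking the immediate reward:
for estimate in (5, 10, 15, 20):
    print(estimate, keep_waiting(estimate))  # True, True, False, False
```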

17 Traditional psychiatric conceptions of impulsivity or impulsiveness, which comprise both risk seeking and impatience, have been superseded. In the current Diagnostic and Statistical Manual of Mental Disorders (DSM-5), the American Psychiatric Association explicitly distinguishes impulsivity from risk taking (p. 780).


Second, according to the view advocated here, ceteris paribus, delay discounting varies with the uncertainty of the environment. Specifically, in a more unstable or uncertain environment any agent is likely to discount rewards and benefits more steeply with their delay, compared to a less uncertain setting. Empirical evidence is consistent with this claim: delay discounting has been found to be sensitive to the stochasticity of an environment and may be dynamically adapted. That is, agents flexibly respond to changing probability distributions of events.

For instance, participants in one study (McGuire and Kable 2012) were given a series of choices between smaller, immediate rewards and larger ones with longer and variable delays. These delays were randomly drawn from different probability distributions for different groups of participants, simulating a realistic waiting situation. For instance, imagine waiting for a bus that is scheduled to arrive at 5 pm. In such a situation, the probability distribution is Gaussian: the bus is most likely to arrive at 5 pm, and very likely to arrive immediately before or afterwards. In contrast, imagine waiting for a reply to an email. Here, the probability distribution is heavy-tailed: it is highest initially and then drops (Barabasi 2005). For each group of participants, we can therefore expect slightly different waiting behaviour. For instance, in the bus stop situation, most people will wait until 5 pm at least. At 5 pm and shortly thereafter, they will expect the bus more eagerly, any minute. In contrast, in the email situation, participants will initially expect the response to arrive any minute but, when it does not come, they will assume that the reply will take rather longer than anticipated and expect it less eagerly. As results showed, participants behaved exactly as expected and adapted their behaviour to the situation. They waited longer in Gaussian environments and shorter in heavy-tailed ones. Furthermore, this sensitivity to the probability distribution of delays has been found to correlate with neural signals in the ventromedial prefrontal cortex (McGuire and Kable 2015). That is, the brain seems to encode the uncertainty in an environment.

Applied to the marshmallow task, we may thus expect children to discount the delayed marshmallow more and stop waiting earlier if uncertainty is higher, other things being equal. Evidence supports this claim. For one thing, children's ability to delay gratification in the marshmallow task is mediated by their beliefs about the reliability of the experimenter (Kidd, Palmeri, and Aslin 2013). In one study, children around four and a half years of age were randomly assigned to either a 'reliable' or an 'unreliable' group. Children then interacted with an experimenter for an arts project. The experimenter repeatedly left the child alone to fetch new tools. In the 'reliable' condition, they returned with the promised objects. In the 'unreliable' condition, the experimenter returned without the promised objects. For both groups, this manipulation was then followed by a classic marshmallow task. Children in the 'unreliable' group waited on average for about three minutes until they ate the smaller reward, and only one child waited until the experimenter returned after fifteen minutes. In contrast, children in the 'reliable' condition waited on average for about twelve minutes, and over 60% of all children waited for the full fifteen minutes. In short, results indicate that uncertainty in the environment determines children's discounting and willingness to wait, ceteris paribus.

Relatedly, another study found that socioeconomic status critically affects performance in the marshmallow experiment: children whose mothers had no college degree waited for significantly shorter times than children of mothers with a degree (Watts, Duncan, and Quan 2018). The families of children whose mothers did not have a college degree had a lower income-to-needs ratio, indicating that uncertainty over having one's needs met was greater than in families with degreed mothers. Unemployment was higher and mothers were less likely to be married. That is, kids of mothers without college degrees were on average more likely to grow up in less socially and financially stable conditions. In the marshmallow experiment, only 45% of these children waited for the maximum delay of seven minutes but 68% of children whose mothers had a college degree did. Of the latter group, only 10% waited less than twenty seconds but nearly a quarter of the other group were similarly impatient. All these findings were highly statistically significant (p […]

[…] t1). According to time-slice rationality theory, there is no such diachronic norm. You are not irrational merely because you believe that p at t1 and that ¬p at t2. Whether rationality only makes demands that are synchronic, as time-slice rationality theory has it, or whether it also places diachronic demands on us is a matter of ongoing debate (see, e.g., Builes 2020; Döring and Eker 2017; Podgorski 2016; Snedegar 2017 for criticism, and Hedden 2016 for a defence). Here, we shall not take a stance in it.

For the remainder of this chapter, we focus on the rationality of choices that concern at least one delayed reward, i.e. where delay discounting is possible. The question of what the rational choice is in these cases arises regardless of whether we think that some norms of rationality are diachronic or not. Consider, for example, the choice between eating dessert now or foregoing it. According to time-slice rationality theory, what is rational to do depends entirely on the agent's mental states at the time when they make their choice. Whether they have previously resolved to not eat dessert, say, is irrelevant to the rationality of their decision. However, it might matter that they remember their resolution and that they still take themselves to be bound by it, or that they predict that they will later on regret their decision. Absent any such mental state at the time of decision, though, from the perspective of time-slice rationality theory it is not irrational for the agent to choose dessert.

8 Cf. Chapter 2, especially Section 2.2.


In contrast, according to a diachronic norm of rationality, an agent might be required to, say, intend to skip dessert if they earlier intended to do so and did not gain new evidence relevant to the decision (cf. Broome 2013; Holton 1999). What the dieter's beliefs and desires are at the moment of choice is, on this view, only partially and perhaps not decisively relevant to what would be rational for them.

7.2 Irrational Weakness of Will

Weakness of will has been regarded as a prime example of irrationality at least since Davidson (1980, 2004). Here, we shall first examine why weakness of will is irrational according to the accounts of rationality specified in the previous section. Afterwards, we discuss possible exceptions.

Let us begin with a broad understanding of weakness of will. On this view, weakness of will is a failure by the agent's own lights; it seems to involve a conflict, it is puzzling, and it reveals a defect.9 This defect could be, at least, one of practical irrationality.10 More specifically, it may be a defect in the agent's coherence of mental states, or a failure to respond to their reasons. Prima facie, weakness of will is irrational on both accounts of rationality.

On the one hand, weakness of will is irrational if rationality is understood as a kind of coherence because weakness of will involves a conflict. Being conflicted plausibly entails lacking coherence. Thus weakness of will is irrational. On the other hand, that the weak-willed agent fails by their own lights and is defective explains why the agent is irrational if rationality is a kind of reasons-responsiveness. For, the agent presumably has good reason to abide by their own standards; after all, the mere fact that they endorse these norms seems to constitute evidence for their reason to adhere to them. Moreover, agents seem to have reasons not to be defective, so the weak-willed agent fails to adequately respond to these reasons as well.

On a general understanding of weakness of will, then, it seems plain why it is irrational, regardless of whether rationality is a kind of coherence or a kind of reasons-responsiveness. Once we adopt a more specific understanding of weakness of will, though, this might change. Different philosophers have given various accounts of weakness of will over the millennia,11 and whether or why weakness of will is irrational may differ accordingly. Due to space limitations, we shall take a closer look at just one of these accounts by way of example: Davidson's.

On Davidson's account, an agent is weak-willed (or incontinent) in acting intentionally for some reason r iff they at the same time believe that they have a better reason that includes r and more to do something else.12

9 Cf. Section 2.2.

10 Cf. Section 7.1.

11 Cf. Chapter 3.

12 Cf. Section 3.3.


For example, the dieter is weak-willed when intentionally eating dessert because it is yummy; for, at the same time, they believe that they have overriding health reasons to abstain. Why is this agent irrational? Davidson replies that such a person is irrational because he "goes against his own second-order principle that he ought to act on what he holds to be best, everything considered" (Davidson [1982] 2004, p. 177). In other words, if there is a principle—as Davidson seems to assume—that one ought to act on what one holds best, everything considered, then the agent acts against that principle when acting incontinently, and therefore they are irrational. That is, although the agent has a normative reason to act on what they hold best, everything considered, they fail to respond appropriately to that reason when they act incontinently. Thus, if it is irrational to fail to respond to one's reasons, then it seems that any person who is weak-willed on Davidson's account is irrational as well.

Moreover, if it is irrational to be incoherent, then it seems that a person who is weak-willed on Davidson's view is irrational, too. If an agent acts against their principle, then it seems that they are incoherent: on the one hand, they endorse the principle; on the other hand, they violate it. If the agent violates their own second-order principle in acting incontinently, then they are incoherent. But if incoherence is irrational, then the weak-willed agent is irrational. In sum, on either account of rationality, an agent who is weak-willed on Davidson's view is irrational. The same is true for most modern authors following him: weakness of will has become a prime example of irrationality (Holton 2009, ch. 7; Mele 1987; Stroud 2014).

However, there are what may seem to be exceptions, and we shall consider them in the remainder of this section. Some philosophers have argued that weakness of will may be rational in cases of inverse akrasia13 (Aristotle, Nicomachean Ethics 1146a20–31; Arpaly 2000; Audi 1990; McIntyre 1990). For example, Arpaly (2000) imagines an agent who judges that it would be best for them, all things considered, to become a hermit. However, they are mistaken: it would be very bad for them to become a hermit. The agent then fails to act in accordance with their best judgement and they fail to become a hermit. They do not fail to become a hermit because they respond to the reasons they actually have for not becoming a hermit. Instead, they fail to become a hermit because of some other mechanism, such as the temptation to go out with friends (Arpaly 2000, p. 503). This agent is weak-willed according to popular accounts of weakness of the will, like Davidson's (1980). But they seem rational on a reasons-responsiveness account: they have a normative reason to not become a hermit (it would be very bad for them), and they act accordingly. So it seems that this is an example of an agent who is rational in being weak-willed.

13 Cf. Section 2.2.


I disagree that cases like this are counterexamples to the claim that weakness of will is irrational. First, the agent does seem irrational because their mental states are not coherent. After all, their best judgement and their action are in conflict with each other. Arpaly (2000, p. 491, n. 8) herself acknowledges this: "I am willing to agree that there exists inconsistency or incoherence in the akratic's mind". Thus, if it is irrational to be incoherent, then the would-be hermit is irrational.

Second, the agent is irrational even if we consider rationality solely as reasons-responsiveness. As the case implies, the agent in question has a reason to not become a hermit: it would be bad for them. But the agent judges that, all things considered, it is best for them to become a hermit. Therefore, it seems highly unlikely that the agent is responding to the relevant normative reason. On the contrary, they do not respond to this reason at all, as Arpaly (2000, p. 503) highlights. They fail to become a hermit because they are, say, too lazy or scared. Then they are merely lucky. Their behaviour aligns with their true reason just by chance, yet they fail to respond to it. Thus the agent is irrational even if we focus on rationality as reasons-responsiveness.

Therefore, I think that weakness of will is irrational according to standard accounts of rationality, and I shall proceed on this assumption in what follows.

7.3 Irrational Delay Discounting

Why and under what conditions is weak-willed delay discounting irrational? The present section considers four answers to this question. Weak-willed delay discounting may be irrational because it may lead to incoherent preferences (Section 7.3.1), because it may prevent agents from responding to their reason not to make themselves worse off (Section 7.3.2), because it makes them vulnerable to exploitation (Section 7.3.3), or because of an irrational bias against uncertainty (Section 7.3.4).

Before turning to each of these approaches in detail, note a caveat. We can and do discount a variety of objects with their delay, such as money, marshmallows, benefits, or harms. It may be tempting to treat all these objects the same. However, we ought to distinguish discounting of commodities and money from discounting of value. Commodities have positive or negative value to us, and this value may vary with the point in time at, or the delay after, which we receive them. For example, £100 may be less valuable in a year's time than it is now. Value itself, viz. harm and benefit or utility and disutility, may be discounted as well. For example, the value we derive from £100 may itself be smaller in the future than it is now. The latter kind of discounting is commonly called 'pure' discounting (Broome 1999, p. 46; Ramsey 1928).


Delay discounting that is not pure may be rational for at least a couple of reasons.14 First, it may be rational because of interest rates or return on investment. Second, it may be rational because of uncertainty or hazards. It may seem plausible that these considerations regarding the discounting of commodities apply, mutatis mutandis, to the discounting of value. However, this is not straightforward. For one thing, commodities and money commonly have diminishing marginal value but values or benefits do not. Here is why.

For commodities and money, we can distinguish their amount or number from the value or utility they have for an agent. For example, a marshmallow has a certain value for a child, and a sum of money has a certain value for their parent. Typically, the larger the amount of a commodity or money, the smaller will be the additional value per increment. When a child has five marshmallows and receives a sixth, the sixth marshmallow adds a certain value to the net benefit. However, if the child already has fifty marshmallows, the additional fifty-first marshmallow will barely add to the overall value. Similarly, for a wealthier agent, a fixed sum of money will add less to their overall well-being than for an agent who is less wealthy. In short, any added unit of money or a commodity has diminishing marginal value.

This is why some authors have argued in favour of delay discounting of future gains of money or commodities. They assume that future generations will be wealthier than present ones because of continued economic growth. If this is true, then any added unit of money or commodities will be worth less to future than to present generations. Similarly, if we assume that an agent will be wealthier in the future than at the moment, they may discount this future wealth. The same sum of money is thus more valuable to them now than later, when they will be wealthier.

However, diminishing marginal value concerns money and commodities only, and therefore we cannot apply the same argument to pure discounting. Compare two agents who have to choose between two options each. The first agent chooses between a benefit and a benefit twice as large. The second agent chooses between a commodity and a commodity twice as large. Plausibly, the second agent derives a certain benefit from the commodity. But because of the diminishing marginal value of the commodity, a commodity twice as large as another one will not provide the agent with a benefit twice as large. The benefit will be less than that. In contrast, in the first case, the larger benefit does not have diminishing marginal value; by definition, the larger benefit is twice as large as the smaller one. Therefore, we shall largely set aside discounting of commodities or money and focus on pure delay discounting.
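The asymmetry can be made vivid with an illustrative concave utility function; the square root here is just a stand-in for any utility with diminishing marginal value.

```python
import math

def utility(amount):
    """Illustrative concave utility: each added unit contributes less than the last."""
    return math.sqrt(amount)

# Doubling a commodity yields well under twice the benefit:
print(utility(200) / utility(100))  # ~1.41, not 2.0
# A benefit twice as large, by contrast, is twice as large by definition,
# so pure discounting cannot be justified by diminishing marginal value.
```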

1⁴ For details, see Appendix C and Section 6.1, respectively.
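To make the role of diminishing marginal value concrete, here is a minimal sketch in Python; the logarithmic utility function and the quantities are illustrative assumptions, not part of the argument above.

```python
import math

def utility(amount):
    # A common toy model of diminishing marginal value: each added
    # unit of a commodity contributes less utility than the last.
    return math.log(1 + amount)

# Doubling a commodity does not double the benefit derived from it ...
small, large = 10, 20
print(utility(large) / utility(small))  # about 1.27, not 2.0

# ... whereas a benefit 'twice as large' is, by definition, twice as large.
small_benefit = utility(small)
large_benefit = 2 * small_benefit
print(large_benefit / small_benefit)    # exactly 2.0
```

On this toy model, the second agent in the comparison above gains less than twice the benefit from twice the commodity, while the first agent's larger benefit is stipulated to be exactly twice as large.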


7.3.1 Incoherent Preferences

Consider rationality as a kind of coherence.1⁵ Then delay discounting may seem irrational if it leads to incoherent mental states such as incoherent preferences or preference reversals (Andreou 2020, § 1.2–3). As we have discussed preference reversals in detail,1⁶ we shall be brief here. Delay discounting may lead to incoherent preferences at one and the same point in time, i.e. 'synchronic preference reversals'. It may also lead to incoherent preferences over time, i.e. 'diachronic preference reversals'.

Take the synchronic case first. Imagine that an agent has to choose between two benefits, A and B, every year on their birthday. One week before their birthday, they briefly consider the two benefits and have a clear preference for A. When asked to decide in advance, they would choose A over B for their birthday. However, imagine that the agent also thinks about the next birthday after the upcoming one, i.e. fifty-three weeks from now, and has a strong preference for B. When asked to decide for fifty-three weeks from now, they would choose B over A. In short, we have a synchronic preference reversal: at one and the same point in time, the agent prefers A over B and B over A. As this pair of mental states is incoherent, the agent might be irrational.

A similar example can be given for diachronic preference reversals. Imagine that an agent finds themselves with a strong preference for A over B on their birthday. Imagine further that, exactly a year later, they prefer B over A on their birthday. This is a diachronic preference reversal: the agent has switched their preference over time. As the agent seems to have a pair of incoherent mental states—preferring A over B and also preferring B over A—they might be irrational.

For both synchronic and diachronic preference reversals, an account of rationality as coherence needs to explain what the irrational incoherence consists in.1⁷ For one thing, for the synchronic case the account needs to explain why the agent's preferences concern the same objects. In our example, one might think that one preference concerns options in one week from now and the other concerns options in fifty-three weeks from now. Perhaps this could be done by arguing that the relevant objects are the benefits themselves, and that some of their properties are relevant for coherence whereas others are not. In particular, properties like size would be relevant but properties like delay would not.

1⁵ Cf. Section 7.1.
1⁶ In Chapter 5.
1⁷ Section 5.5 has argued that there may be preference reversals without weakness of will and weakness of will without preference reversals. Here, we are concerned with the question of whether weak-willed preference reversals may be irrational in that they are incoherent. For both discussions we may construct highly similar cases. However, the discussions differ substantially: previously, we sought to determine whether a given case is one of weakness of will or not. For example, an agent may reverse their preferences without thereby violating their own standards, being in a conflict, or appearing problematic or puzzling (cf. Section 2.2). In contrast, our present discussion seeks to determine whether a given case is one of incoherence and thus of irrationality.


Similarly, in the diachronic case, the account needs to explain in what respect reversed preferences are incoherent over time. In particular, one may wonder how this incoherence can be distinguished from indifference or incommensurability, where an agent may randomly choose different options at different points in time. To address this issue, one could specify criteria for incoherent and coherent preference reversals, perhaps by drawing on other mental states that an agent has, such as those concerning past intentions, reconsideration, or new evidence (Bratman 2014; Broome 2001; Holton 2009).

As a final point before we conclude this section, let us briefly consider the suggestion that exponential delay discounting is not irrational but hyperbolic delay discounting is (Greene and Sullivan 2015; Sullivan 2018). On this proposal, exponential delay discounting is not irrational because it does not lead to preference reversals; hyperbolic delay discounting is irrational because it does. However, these assumptions are incorrect as they stand.1⁸ Exponential delay discounting may lead to both synchronic and diachronic preference reversals. Hyperbolic delay discounting need not lead to either. Therefore, we shall disregard the suggestion that it is the discount function that crucially determines whether delay discounting is irrational.

In sum, if rationality is a kind of coherence, delay discounting may be irrational because it leads to preference reversals. This view requires further details on incoherence, such as criteria for distinguishing and identifying objects of preferences or for diachronic incoherence.
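As a concrete illustration of the reversals at issue, here is a minimal sketch in Python. It uses the hyperbolic form V = A/(1 + kD); the amounts, delays, and the value of k are assumptions chosen so that a reversal occurs; as noted above, whether it does depends on the parameters rather than on the choice of discount function as such.

```python
def hyperbolic_value(amount, delay, k=1.0):
    # Hyperbolic discounting: value falls with delay D as 1 / (1 + k*D).
    return amount / (1 + k * delay)

smaller, larger = 5.0, 10.0   # benefit sizes (illustrative)
t_small, t_large = 2.0, 4.0   # delays until each benefit is received

# Far in advance the larger, later benefit wins; close to the smaller
# reward's arrival, the preference reverses.
for now in [0.0, 1.9]:
    v_ss = hyperbolic_value(smaller, t_small - now)
    v_ll = hyperbolic_value(larger, t_large - now)
    print(f"t={now}: V(SS)={v_ss:.2f}, V(LL)={v_ll:.2f}",
          "-> smaller-sooner" if v_ss > v_ll else "-> larger-later")
```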

7.3.2 Reasons to Promote One's Well-Being

Consider now rationality as reasons-responsiveness. That something promotes one's well-being is generally regarded as a so-called 'prudential' reason for it. That is, other things being equal, if given multiple options, agents seem to have a reason to choose the option that makes them at least as well off as any of the other ones. Recall1⁹ that the agent must have access to this reason (Lord 2018; Markovits 2014). Accordingly, it seems rational for agents to choose and act in such a way that makes them at least no worse off than if they had chosen or acted differently. However, weak-willed delay discounting may lead to choices and actions that make the agent relatively worse off. This may be irrational.

Let us consider this more closely, focussing first on agents understood as four-dimensional entities, i.e. extended over time. On this view, it seems plausible that an agent has reason to choose whatever brings them the greatest benefit. The agent seems to have good reason to, say, forego

1⁸ Cf. Section 5.4 and Appendices B and C.

1⁹ Cf. Section 7.1.


an earlier but smaller benefit in exchange for a greater, later benefit. This makes them better off, overall. Then delay discounting may never be rational. After all, my well-being as an agent seems to be entirely unaffected by the temporal order in which benefits or harms happen to me. What matters is whether they increase or decrease my total well-being. There seems to be no reason to discount beneficial or harmful prospects with their delay or timing. In other words, there seems to be no reason for time biases (Greene and Sullivan 2015; Sullivan 2018).

An agent is time biased if they prefer benefits or harms depending purely on when they experience them. For example, if an agent prefers to enjoy a benefit in the future rather than the past, then they are future biased. If an agent prefers to suffer a harm later rather than earlier, then they are near biased. Sometimes, near and future biased agents make decisions that make them worse off overall. Therefore, having these biases may be irrational.

If we have reason to make ourselves no worse off, over the course of our entire lives, then it seems we have reason to disregard the mere timing of benefits or harms. However, our intuitions about certain cases suggest otherwise. Imagine two lives with the same net amount of well-being. Imagine, furthermore, that one person's well-being increases over their lifetime and the other person's well-being decreases. Even though both persons are equally well off, most of us would prefer increasing levels of well-being. If this is justified, then it seems that we are justified in taking into account the timing of benefits and harms after all. Consequently, on this view time bias and delay discounting may sometimes be rationally permissible. For instance, our example indicates that it may be permissible to discount future benefits negatively. Examining the rationality of time biases in detail may be a task for research that goes beyond the scope of our enquiry here.

Let us turn to the view that rationality concerns time slices. Consider a time slice who must choose between a smaller benefit for themselves now (at t0), or a larger benefit for one of their future time slices. For example, the agent could eat some delicious cake now but a future time slice would then suffer from excruciating pain because the cake contains poison. It may seem that it could be rational for the time slice at t0 to choose the smaller benefit for themselves. If this agent-now can enjoy some benefit now, why should they forego it, merely because doing so would be better for some other time slice? It seems that the agent-now has no reason to make this sacrifice, even if they thereby benefit some other time slice later on. On this view, then, discounting future benefits may never be irrational.

However, time slices plausibly have reasons for concern for future time slices. For example, if the agent-now at t0 has reason to care about the well-being of some agent-later, then this is a reason they must be responsive to at t0. One kind of reason to care about later time slices could be moral: perhaps a time slice has a moral obligation of beneficence towards their future time slices, just as they have a moral obligation of beneficence towards other agents and


their time slices. For example, there may be a general moral obligation for all time slices not to act in such a way that other time slices experience excruciating pain. Another kind of reason to care about one's later time slices could be a further prudential or self-interested one: perhaps the agent-now empathizes with their future time slices and thus has a reason not to act against their own empathic feelings. For instance, the agent-now seems to have a prudential reason to avoid feelings of uneasiness and worry, and the prospect of a future time slice being in pain may cause the agent-now to feel uneasy and worried.

On this kind of time-slice rationality theory, weak-willed delay discounting would be irrational if it constituted a failure to respond to moral or prudential reasons of concern for other time slices. More specifically, it may appear that one time slice has reason to discount the benefit of other time slices according to the degree of concern for those other time slices. For example, if my concern for my future self is half of the concern I have for my present self, then it seems that I have reason to discount the benefits for my future self by 50% and, accordingly, to be indifferent between some benefit for my future self and a benefit half as big for my present self (a minimal sketch of this weighting follows at the end of this section).

One challenge for this approach is to explain why a time slice has reason for concern for other time slices. One may object that I have no reason for being concerned about my future self at all. Or perhaps there is reason for the opposite: to be more concerned about a future self than about one's present self. Perhaps one could argue that there is reason to treat all time slices the same, regardless of whether they are future, present, or past. The claim that this is rational may be analogous to the moral claim in some egalitarian theories of ethics that every person counts for one and exactly as much as another person, no more and no less. Similarly, one time slice may count for as much as one other time slice, no more and no less. On the view that we have reason to treat all time slices the same, delay discounting may never be rational (Dougherty 2015; Parfit [1984] 1987; Sullivan 2018, but see Dorsey 2019; Kauppinen 2018).

A further difficulty concerns decisions about potentially transformative experiences. These are experiences that change future agents or time slices both epistemically and personally (Bykvist 2006; Paul 2014; Pettigrew 2019; Ullmann-Margalit 2006; Williams 1970). For example, a transformative experience changes their knowledge of what it is like to be a certain person or to perceive something that was previously unfamiliar, and it changes their values, preferences, and point of view. Decisions about potentially transformative experiences include deciding to become a parent, choosing between a career in the army and a career in the church, or converting to a different religion.

Decisions like these pose a challenge for accounts of rationality for at least two reasons. First, a transformative experience provides the agent with new information and new reasons but at the time when the decision has to be made those reasons are not yet available. Second, a transformative experience changes


the preferences and values of the agent. At the time of decision, the agent does not yet have those values and preferences, and therefore it is unclear whether and how they could take them into account. For example, an agent deciding between joining the army and joining a monastery may think that they have no reason to choose the monastery because their religious faith is not sufficiently strong. However, in fact it may turn out that, if they joined the monastery, their faith would grow in strength. What our reasons are and what is rational in situations like these remains an intensely debated question in current research. We have to leave it open.

In sum, delay discounting may in several respects be an inadequate response by a time slice to their reasons. For example, weak-willed delay discounting could be irrational in that it constitutes a failure to respond, or to respond adequately, to reasons of concern for other time slices. Again, specifying these reasons and conditions of adequate responsiveness to them may be a question for further research.
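Here, finally, is the minimal sketch of the concern-weighted proposal announced above; the weights and amounts are purely illustrative assumptions.

```python
def concern_discounted_value(benefit, concern):
    # Weight a benefit for another time slice by the degree of concern for
    # that slice (1.0 being the concern one has for one's present self).
    return concern * benefit

# If my concern for my future self is half my concern for my present self,
# a future benefit of 10 counts for as much as a present benefit of 5:
print(concern_discounted_value(10, 0.5))  # 5.0
print(concern_discounted_value(5, 1.0))   # 5.0 -> indifference, as in the text
```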

7.3.3 Exploitation

Weak-willed delay discounting may be irrational because it is exploitable, i.e. it makes the agent vulnerable to getting abused or taken advantage of. Typical examples concern commodities and agents with cyclic preferences, as in so-called 'money pumps' (cf. Davidson, McKinsey, and Suppes 1955). However, the rationality of choices about money or commodities does not straightforwardly and analogously apply to the rationality of choices about benefits or harms.2⁰ Therefore, consider an example of exploitation for benefits.

Imagine that an agent has the following preferences:21 at t0, they prefer receiving a larger benefit at t2 over receiving a smaller benefit at t1 (t0 < t1 < t2). They also prefer, at t1, to receive the smaller benefit then rather than the larger one at t2. Now, given these preferences, the agent will agree at t0 to give up a relatively small benefit in order to secure the larger benefit at t2, and subsequently, at t1, they will agree to give up the same benefit again in order to switch back to receiving the smaller benefit then. The agent ends up worse off than at the beginning. They are being exploited.

Exploitation arises partially because the agent lacks information or is short-sighted (Ahmed 2017; Andreou 2016). If they knew in advance the sequence of binary choices they would be offered, they might be able to resist exploitation. For example, if the agent in our example knew at t0 that they would be given an opportunity at t1 to revise their initial choice, they might make a different initial choice or adopt measures to prevent its revision.

2⁰ Cf. Section 7.3. 21 This case is similar to the marshmallow example (Section 5.6); see also Anand (1993, 2009), Dougherty (2011, 2014), and Rabinowicz (2000).
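To see the mechanics of the exploitation step by step, here is a minimal simulation in Python; the benefit sizes, the amount given up per switch, and the hyperbolic discounting used to generate the preference pattern are illustrative assumptions.

```python
def value(amount, delay, k=1.0):
    # Hyperbolic discounting generates the preference pattern in the text.
    return amount / (1 + k * delay)

smaller, larger = 5.0, 10.0   # benefits available at t1 and t2 (illustrative)
t0, t1, t2 = 0.0, 2.0, 4.0
fee = 0.5                     # small amount given up per switch

# At t0 the agent prefers the larger benefit at t2 and pays to secure it:
assert value(larger, t2 - t0) > value(smaller, t1 - t0)
# At t1 the preference has reversed; the agent pays to switch back:
assert value(smaller, 0.0) > value(larger, t2 - t1)

# Outcome: the agent receives the smaller benefit, as they would have by
# default, but has given up two fees along the way.
print(smaller - 2 * fee)  # 4.0 < 5.0: worse off than at the beginning
```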


Assuming that the agent lacks foresight, though, let us now consider why exploitability is irrational. First, consider rationality as reasons-responsiveness. On this view, exploitability may be irrational because we have prudential reasons not to make choices that leave us worse off, overall. Yet exploitability may leave the agent worse off, overall. This leads us back to the issues discussed above in Section 7.3.2. For one thing, it entails that it is irrational to have certain time biases. For example, the agent in the last example has a near bias: when a benefit can be enjoyed immediately (at t1), they prefer it even if it is smaller than a delayed benefit (at t2). If it were not irrational to have time biases like this, then it would not be irrational to make these exploitable decisions.

We obtain a similar result when we consider time slices as agents. Generally, a time slice who makes a choice that leaves future time slices worse off could be rational if it increases their well-being at that instant. If weak-willed delay discounting is irrational in that it is exploitable, then, on this time-slice rationality view, time slices must have further moral or prudential reasons to take the well-being of future time slices into account.

In the remainder of this section, let us briefly turn to rationality as coherence. Exploitability per se does not seem to be incoherent to me. However, incoherence may be a prerequisite for exploitation in many cases. For example, in our case above, the agent has incoherent preferences over time: at t0, they prefer the later benefit over the sooner one but at t1 they have the reverse preference. For those cases, irrational exploitability may be due to irrational preference reversals, as considered in Section 7.3.1. However, not all cases of exploitation may involve incoherence, and it remains contested whether incoherence leads to exploitation (Christensen 1991; Hedden 2013; Mahtani 2015; Ramsey [1926] 1931; Vineberg 2016). Thus, in my view, the claim that weak-willed delay discounting is irrational because it is exploitable supports a reasons-responsiveness account of rationality better than an account of rationality as coherence.

7.3.4 Uncertainty Bias

One may think that biases, or actions from biases, are irrational. If this is correct, then actions from weak-willed delay discounting may be irrational because they are actions from an irrational uncertainty bias. If we take this approach, we may draw on the extensive literature and ongoing research into decision-making under risk and uncertainty. Longstanding approaches comprise expected utility theory (Neumann and Morgenstern [1944] 1953), decision theory (Ramsey [1926] 1931; Savage 1954), and further


developments like bounded or ecological22 rationality theory (Gigerenzer 2008a; Simon 1982–97), evidential (Ahmed 2014; Jeffrey 1983) and causal decision theory (Gibbard and Harper 1978; Lewis 1981). These approaches have developed criteria for when and why it is irrational to discount benefits and harms with risks or uncertainties. These criteria differ from one theory to another, and under what conditions an uncertainty bias is irrational thus differs accordingly. For example, while an uncertainty bias may be ecologically rational for agents in uncertain environments, it may be irrational, other things being equal, on standard views of expected utility theory. In what follows, we shall focus on this latter approach as just one illustrative example. For, unfortunately, we do not have space to discuss all proposals concerning decision-making under risk and uncertainty.

According to expected utility theory, it is rational to act and choose in a way that maximizes one's expected utility. The expected utility of an option is the sum of the values of its possible outcomes, each multiplied by its probability. Imagine you are offered a choice between four variously delayed benefits of different sizes. If you choose the first option, you may, after some years, receive a benefit B; on the second option you receive a benefit that is 85% of B after a shorter delay; on the third the delay is longer but the benefit is 50% greater than B; and on the last option you may receive a benefit three times as great as B but you have to wait considerably longer. Because each option is delayed, and because delay involves uncertainty over when, if at all, you will receive the benefit, each option is to some degree uncertain. Assume you may, roughly, estimate the probability of receiving the benefit as 50% for the first option, 60% for the second, 33% for the third and 25% for the last. The expected utility—size of benefit multiplied by probability—for each of the four delayed options is thus, roughly, .5B, .51B, .5B, and .75B, respectively. Accordingly, it would be rational for you to choose the last option and it may be irrational to choose, say, the first. In short, if expected utility theory specifies the correct criteria for rational decisions under uncertainty, then weak-willed delay discounting may be irrational according to these criteria.

In the remainder of this section, I shall briefly outline a controversy about expected utility theory. It has largely arisen because it has been found empirically that agents are prone to risk and uncertainty biases (Allais 1953; Kahneman and Tversky 1979; Thaler 1981). Imagine I present you with an urn containing ninety balls.23 Thirty of those balls are red, the remainder are either black or yellow—their proportions are uncertain. Thus the probabilities of picking a black or yellow ball are not known to you.
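Before turning to the urn case, the expected utility calculation just described can be made explicit in a minimal Python sketch (with B normalized to 1; the sizes and probabilities are taken from the example above):

```python
# (size of benefit relative to B, estimated probability of receiving it)
options = [
    (1.00, 0.50),  # benefit B after some years
    (0.85, 0.60),  # 85% of B after a shorter delay
    (1.50, 0.33),  # 50% greater than B, longer delay
    (3.00, 0.25),  # three times B, considerably longer wait
]

# Expected utility: the value of each outcome weighted by its probability.
expected = [size * prob for size, prob in options]
print(expected)  # approximately [0.5, 0.51, 0.495, 0.75], in units of B
print(expected.index(max(expected)))  # 3: the last option maximizes EU
```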

22 Cf. Section 7.1.

23 This example is similar to the Ellsberg paradox (Ellsberg 1961).


Table 2 Choosing option 1 in the first case and option 2 in the second seems to reveal an uncertainty bias. There are thirty red balls in the urn and sixty balls that are either black or yellow.

          Option 1                                 Option 2
Case 1    win if you draw a red ball               win if you draw a black ball
Case 2    win if you draw a red or a yellow ball   win if you draw a black or a yellow ball

Now I offer you two choices (shown in Table 2). First, you are to choose between winning a benefit B if you draw a red ball from the urn versus winning the same benefit if you draw a black ball. Second, you are to choose between winning B if you draw a red or yellow ball versus winning B if you draw a black or yellow ball from the urn. If you are like most people, you will first choose option 1 and secondly choose option 2 (Camerer 1995; Hertwig and Erev 2009).

For convenience, let us refer to the proportions of yellow and black balls in the urn as 'y' and 'b', respectively. That is, the probability of winning with a yellow ball is y, and the probability of winning with a black ball is b. We know that y + b = 2/3, although we are uncertain about y and b.

In the second case, whatever you choose, you win if you draw a yellow ball. So your chance of winning is at least y, either way. Thus this probability should not make any difference to your decision in the second case.2⁴ Your choice should depend entirely on whether you prefer winning with a red or with a black ball. That you choose the second option reveals that you prefer winning with a black ball and probability b over winning with a red ball and probability 1/3. However, your choice in the first case reveals the opposite preference: you prefer winning with a red ball and probability 1/3 over winning with a black ball and probability b.

A plausible explanation for this set of preferences is that you have an uncertainty or ambiguity bias. That is, your preferences are not entirely determined by the size or the probabilities of the benefits but also by the uncertainty or ambiguity involved. More specifically, in both cases, you decline the option that involves uncertainty. In other words, you are uncertainty averse.

2⁴ This is based on the assumption that, when you are to make a choice between (i) A and B, and (ii) A and C, and if you prefer (i), then, when given a choice between (iii) B and D, and (iv) C and D, you should prefer (iii). This assumption has been called the 'sure thing principle' (Savage 1954). The 'sure thing' in each decision (A in the first, D in the second) should be irrelevant to it. For further discussion see e.g. Buchak (2013), Gibbard (1990), Hong and Wakker (1996), and Norcross (1996).
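A short Python sketch makes the inconsistency vivid: under expected utility theory, no single assignment of the unknown proportion b rationalizes both choices. The grid of candidate values is an illustrative assumption.

```python
# P(red) = 1/3; the unknown proportions satisfy b + y = 2/3.
# Choosing option 1 in case 1 requires:  1/3 > b
# Choosing option 2 in case 2 requires:  b + y > 1/3 + y, i.e.  b > 1/3
consistent = [
    b / 300 for b in range(0, 201)              # candidate b in [0, 2/3]
    if (1 / 3 > b / 300) and (b / 300 > 1 / 3)  # both choices at once
]
print(consistent)  # []: no value of b makes both choices EU-maximizing
```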


Controversy is ongoing over whether it is irrational to have biases against uncertainty like this. If it is not, then approaches like classic expected utility theory may not describe rational decision-making under uncertainty after all. Then we need alternative criteria for when and why uncertainty biases are irrational.

According to a view of rationality as coherence, uncertainty biases may be irrational if they lead to incoherent mental states. For example, in the urn example above, you may have incoherent beliefs. You may believe that you are more likely to win in the first case if you choose the first option rather than the second but you also believe that you are more likely to win in the second case if you choose the second option over the first.

A view that treats rationality as reasons-responsiveness may specify reasons about how to respond to uncertainty. For example, it has been suggested that anticipated regret may be a relevant reason (Bell 1982; Bratman 2014): if you would regret your choice more if you took the uncertain option and lost, you may have reason to choose the certain option; conversely, if you would regret missing out on a greater, albeit uncertain, benefit then you may have a reason to choose the uncertain option. On this view, your choices in the scenario just described may adequately respond to reasons you have and thus they may be rational.

In short, the claim that weak-willed delay discounting is irrational because it leads to biased actions may imply that it is irrational to violate rules specified by theories of decision-making like expected utility theory. Which of these theories is best suited to guide decision-making under risk and uncertainty remains a question of ongoing research. Its findings may have substantial implications for assessing the rationality or irrationality of weak-willed actions.

7.4 Conclusion

This chapter has sketched implications of my account of weak-willed delay discounting for the debate on rationality. Philosophers typically understand rationality as a kind of coherence or responsiveness to reasons (Section 7.1). Weakness of will is commonly regarded as irrational, and I have argued that it is irrational in that it violates coherence and reasons-responsiveness even in cases of seemingly rational weakness of will (Section 7.2). A fortiori, weak-willed, biased actions are irrational as well. However, it has turned out (in Section 7.3) that determining more specifically why and under what conditions delay discounting is irrational depends crucially on one's account of rationality. Delay discounting may be irrational in that it leads to incoherent preferences as revealed by preference reversals, it may violate reasons we have to promote our well-being, it can make us vulnerable to exploitation, or it may be due to an irrational uncertainty bias.


8 Practical Takeaways

Breaking one's dieting rule or resolution to quit smoking, procrastination, outbursts of rage, convenient lies, even the failure of entire nations to follow through with plans to cut greenhouse gas emissions, balance the budget, or keep a pandemic in check—all of these phenomena may be due to what we have called 'weak-willed delay discounting': future benefits are valued too little and present or nearby ones too much, and although we are aware that our norms and standards require us to behave or not behave in a certain way, we knowingly violate them.

We have considered the suggestion that weak-willed delay discounting is likely determined by processing of risk1 or uncertainty.2 As delay involves uncertainty, delayed prospects are discounted. That we tend to engage in delay discounting, like many non-human animals, may thus be due to a cognitive bias. For example, we break a dieting rule or fail to quit smoking because we misjudge whether and when we shall suffer from health issues linked to lifestyle, such as strokes or cancer. Nations delay cutting greenhouse gas emissions because they misjudge the likelihood and timing of economic costs relative to those of greater but later and therefore more uncertain losses due to climate change. Both individuals and collectives seem to make mistakes in delay discounting, and this is in turn due to a bias in uncertainty processing.

How can we deal with these issues? This question will be the topic of the current chapter. Here, we explore strategies for individuals and policymakers to address weak-willed action based on our account of weak-willed delay discounting developed thus far. In Section 8.1, we discuss biases and strategies to deal with them in general, focusing on adjustments of the environment in Section 8.2. In Section 8.3, we apply these strategies to weak-willed delay discounting more specifically.

8.1 What to Do about Biases

Cognitive biases are an extensively researched topic in both the behavioural sciences and philosophy.3 In this chapter, we draw on this literature to develop

1 In this chapter, we largely bracket risk. For a distinction, see Section 6.1. 2 In Chapter 6. 3 Cf. Section 6.4.



suggestions about how we as individuals or groups may act to avoid falling prey to the bias that leads us to weak-willed action.

One immediate implication of treating weak-willed behaviour as the result of a cognitive bias may be that it mitigates the stigma of failure and transgression commonly associated with weakness of will. Weakness of will, I have argued,⁴ is commonly regarded as a defect: there is something wrong with the weak-willed agent or action. They seem irrational, imprudent, or even immoral. However, if weak-willed action is due to a cognitive bias, then the defect may be less serious than previously assumed. To see this, consider a perceptual bias that causes an optical illusion. Although we tend to think that there is something defective about, say, claiming that a pen in a glass of water has been bent at the water surface, the mistake seems a natural or innocuous one to make. Generally, we blame people much less for falling prey to illusions or biases than for, say, obesity or chronic procrastination. That is, treating weak-willed actions as the result of a cognitive bias may lead us to change our perspective about what is at issue. This change of viewpoint does not, of course, excuse or even justify weak-willed behaviour. Just as it is wrong to claim that a pen in a glass of water has been bent, it remains wrong to claim that overeating or polluting the climate is better than having a healthy body or planet later on. This perspective may also help us to better address the problem weakness of the will presents us with.

Let us now consider by way of example how we address practical issues arising from biases in general. Consider the perceptual bias to take the relative size of objects into account when judging their distance: we tend to judge smaller objects to be farther away than larger ones (cf. Plato, Protagoras 356C–D). For instance, the far end of the Potemkin stairs appears further away from us than it actually is because the architects narrowed the steps (Figure 7.a). The reverse is also true, i.e. we tend to take the distance of objects into account when judging their size, a bias exploited by optical illusions like the Ponzo figure (Figure 7.b).

This bias can lead to practical problems. For instance, many rear mirrors in vehicles are convex rather than planar to provide a greater range of vision. These mirrors 'shrink' images. They show more of their surroundings but at a smaller size. What we see in the mirror appears smaller to us than it actually is. That the objects appear smaller in the mirror is not a cognitive bias but merely an optical effect. However, this effect in combination with our bias to judge the distance of objects by their sizes may lead to problems. A driver looking at a vehicle behind their own in the mirror sees that vehicle as smaller than it actually is. But because of their perceptual bias to judge the distance of that vehicle by its size, the

⁴ In Section 2.2.


Figure 7 (a) The Potemkin stairs in Odessa, Ukraine, are 142 metres long but appear much longer (left). The architects achieved this visual effect by narrowing the stairway as it ascends: the top step is 12.5 metres (41 feet) wide while the bottom step is 21.7 metres (70.8 feet) wide, as can be seen from an aerial perspective (right). This illusion exploits the visual bias to take size into account when processing the distance of an object: steps at the rear appear further away than they actually are because they are smaller than those up front. (b) Ponzo illusion. The upper dotted line appears longer than the lower one although they have the same length. The 'railway lines' create an impression of distance so that the upper line seems farther away than the lower one. The illusion arises due to our visual bias to take distance into account when processing the size of an object, which is in turn why the upper line seems longer than the lower one. Image sources: https://upload.wikimedia.org/wikipedia/commons/5/59/Pot%C4%9Bmkinovy_schody.jpg, Google Earth.

driver may think that the vehicle is further away than it actually is. That is, a driver may mistakenly judge that there is a greater distance to the car behind them than there is. This could be problematic. For example, the driver may decide to suddenly come to a stop, thinking that the car behind them will have sufficient leeway to brake or swerve. This assumption, though, could be wrong, and the other car might not have enough space to avoid a collision.

We could address this problem in various ways. For example, we could try and mobilize cognitive resources like our memory or imagination. The driver in our example could remember that objects in a convex mirror are smaller than in a flat mirror, and that smaller objects appear to be further away from us. They could visualize how close the vehicle behind them actually is. This strategy could also work in other contexts. For instance, when a tourist is asked to estimate the length of the Potemkin stairs, they might take into account the optical trick used by the architects to make the stairs appear longer, and they might accordingly discount their initial guess. In other words, one strategy to deal with biases is to deploy cognitive resources to counteract them.

Another strategy is to adjust the environment. For example, to address the problem caused by convex rear mirrors, some jurisdictions including the United


States, India, Canada, and South Korea require manufacturers to engrave a safety warning on the mirror, like 'objects in (the) mirror are closer than they appear'.⁵

As the example of our perception of size and distance illustrates, we exploit our perceptual biases as in the case of the Potemkin stairs, or address them by trying to counteract their effect as in the safety warning on rear mirrors. Either way, we take the bias as a given and then try to work around it. We make a cognitive corrective effort, or we construct the environment in which we act on the basis of our biased perception in the way that serves our purpose. I believe that this generalizes to all kinds of biases, and I shall illustrate the point with another example from the corporate domain.

The marketing industry exploits our cognitive biases in order to get us to make decisions that serve the financial interests of companies. Subscriptions offered by the magazine The Economist to US customers in the 2000s illustrate how advertising can exploit a bias known as 'the anchoring effect'. This is the bias to take a piece of information (the 'anchor') into account in decision-making even if it may be unrelated or irrelevant (Tversky and Kahneman 1974). In the case at hand, The Economist offered three subscriptions: an online-only subscription for $59, a print-only subscription for $125, and both for $125. In a survey, most participants chose the third option (Ariely 2009).⁶ In a second survey where only the first and the third options were available, far fewer participants chose the third option. The print-only option served as an anchor. Presumably, customers took it into account when considering the print-and-online subscription, which looked much better compared to the print-only option. In turn, this may have increased the likelihood that customers would choose the third option, and thus revenue for The Economist.

The advertisement used by The Economist exemplifies how we sometimes exploit cognitive biases. However, we also often try to mitigate or counteract them. For instance, in many countries like the United States or the United Kingdom it has become uncommon to include a photograph in a job application. This is due to anti-discrimination and labour laws that require companies to demonstrate that hiring is free from profiling based on appearance or race. These laws aim to counteract implicit biases like the 'attractiveness' or 'beauty bias', a tendency to prefer good-looking candidates in hiring procedures (see Dion, Berscheid, and Walster 1972 for a seminal study and Langlois et al. 2000 for a review).

As these examples illustrate, we have developed measures to exploit or circumvent cognitive and perceptual biases in different domains. Let us now see how we can apply these strategies to a bias that determines weak-willed action.

⁵ Lending itself to metaphorical interpretation, the phrase has widely featured in artwork like the movie Jurassic Park (1993) or titles of songs, albums, and novels (Weber 1995). ⁶ Jeong et al. (2021) replicated this effect using somewhat different choice options, finding a smaller but statistically significant difference between the two conditions (p = 0.004).


8.2 Adjusting the Environment

Just like other biases, we can exploit or counteract the uncertainty bias that determines delay discounting, and we can do so by employing cognitive resources or by adjusting the environment. As the title of this section indicates, it focuses on the latter. This is because I think that adjusting the environment is more efficient than cognitively addressing the bias that determines weak-willed discounting.

To some degree, it is possible to employ cognitive resources to counteract biases. In some circumstances, this is the only strategy available. For example, children in the marshmallow experiment who distracted themselves from the temptation were better able to delay gratification (Karniol and Miller 1983; Mischel and Ebbesen 1970; Mischel, Ebbesen, and Raskoff Zeiss 1972; Mischel and Moore 1973). It may be possible to acquire relevant cognitive skills and to train willpower at least to some extent (Cubillo et al. 2021; Schunk et al. 2022). However, this strategy is far less successful than adjusting the environment. For example, its success rate is in the single digits for quitting smoking or vaping (Werch and Owen 2002; West and O'Neal 2004). Agents who succeed in their pursuit of long-term goals, such as studying or exercise, are actually worse at resisting temptations in the moment (Hofmann et al. 2012; Imhoff, Schmidt, and Gerstenberg 2014). Instead, they derive their success from organizing their environment in a way that is conducive to their needs (De Ridder et al. 2012; Fujita 2011; Galla and Duckworth 2015).

Therefore, the remainder of this section focuses on an approach called 'nudging'. Nudging addresses a wide range of biases, not just those we are concerned with. It is difficult to define but roughly refers to designing choice architecture in such a way that it influences choices but without changing the choice options themselves (Thaler and Sunstein [1999] 2008).⁷ For example, one 'nudge' is to deliberately set the desired option as the default, exploiting the so-called 'default bias'. The default bias is people's tendency to choose whatever the default option is. If the intended option is the default, it tends to be chosen more often. For example, in countries where the default option for organ donation was donating (an 'opt-out' setup), the organ donation rate was above 85% in 2003 (Johnson and Goldstein 2003). In countries where not being an organ donor was the default option ('opt-in'), the donation rate was below 30%. Switching from an opt-in to an opt-out system is a nudge that increases organ donations (Abadie and Gay 2006; Gimbel et al. 2003; Rodríguez-Arias, Wright, and Paredes 2010).

⁷ Some authors have argued that nudging, biases it exploits, or behaviour influenced by it may be rational (Fisher 2020; Levy 2019). Whether this is true or not depends in turn on how we understand 'rational', a question that goes beyond the scope of our discussion here. See Chapter 7 for a more detailed discussion.


The default bias can also be exploited to target delay discounting more specifically. For example, enrolling employees automatically in a retirement savings programme facilitates their choosing later and larger over smaller and sooner financial benefits (Benartzi and Thaler 2007, 2013 but see Beshears et al. 2010). In other words, participants save money for retirement more easily with this setup instead of spending it earlier on. As in the example of organ donations, participants' choice options are not limited in such a scheme because they can always opt out of automatic retirement saving.

Other nudges change the physical environment in which agents act. Again, the marketing industry is an example in point. Many grocery shops, for instance, place small and readily available treats near the till where customers have to wait for their turn to pay. Foregoing the financial and health costs of those treats is often more valuable than the pleasure they provide, yet the customer does not have to bear these costs immediately. That they tend to give these delayed costs too little weight thus leads to purchasing decisions that increase the revenue of the shops.

Conversely, restructuring the physical environment can also mitigate some weak-willed behaviour (Duckworth, Gendler, and Gross 2016; Duckworth, Milkman, and Laibson 2018). Consider a typical choice between a smaller, sooner and a later, larger reward. In these examples, the agent has to make a decision about whether or not to opt for the smaller but immediately available reward when the value of the alternative reward is not particularly salient: the single marshmallow or the dessert is in front of the agent's eyes, the two marshmallows or the health benefits are nowhere to be seen. A simple and highly effective nudge has therefore been to literally remove the near but small option from the agent's field of vision. For example, increasing the distance of an unhealthy food item decreases the probability and amount of consumption (Maas et al. 2012; cf. Arno and Thomas 2016 for a review and meta-analysis).

Overall, many nudges are effective and for that reason popular with policymakers (Benartzi, Beshears, et al. 2017; Halpern 2015). However, nudging has been criticized for supposedly undermining the autonomy or liberty of those being nudged (Dworkin 2020; Noggle 2020). Nudging seems to exemplify the diktat of a nanny state that manipulates and incapacitates its citizens. It smells of paternalist hypocrisy because it seems to falsely assume that the nudgers know the interests of those being nudged better than they themselves do. But even more crucially, the objection goes, nudging tries to justify illegitimate means with beneficial outcomes. Regardless of whether nudging yields results that actually are in the interest of those being nudged, it does so by violating their autonomy.

I do not find this objection very convincing, for at least three reasons. First, note that nudging is not a strategy that is exclusive to governments. In principle, anyone can nudge anyone and, as the examples above illustrate, nudging is already widely used by companies and industries. Nudging ourselves or asking others to nudge us can be a way to enhance rather than diminish our agency by providing us with


more means to guide our decisions and actions. Recall Odysseus⁸ who asked his fellow travellers to tie him to the ship's mast before sailing past the sirens so that he could at the same time enjoy their tempting songs and prevent himself from hurrying into their deadly traps (Elster [1979] 2013). By setting up his physical environment in the right way, Odysseus was free to both listen to the beautiful singing and to escape the sirens unharmed. Thus, nudging can actually increase the liberty and autonomy of those being nudged rather than undermine them.

Second, any agent offering a choice to other agents, be that a government, a seller, a teacher, or a party host, has to set up the choice architecture somehow. How should choice architecture be designed? Presumably, much opposition against nudging stems from the worry that nudging is not in the interest of those being nudged. However, what should presumably guide the design of choice architecture is precisely the interests of those who are to choose. To take an example used earlier, suppose a government has to decide on a default option for organ donation: donating or not donating. Choosing an opt-in system might be against the interest of the people it governs; in most surveys, the majority of participants approve of organ donation (Johnson and Goldstein 2003; McKenzie, Liersch, and Finkelstein 2006). Therefore, it would be in the interest of those being nudged, and thus arguably required of the government, to set up an opt-out system for organ donations. At the very least, it seems to be against the known wishes of most people to set up an opt-in system. Therefore, it seems that far from being problematic, nudging may in fact be required on at least some occasions.

Third and relatedly, someone opposed to nudging might, at this point, suggest choosing the default option randomly. For instance, a government deciding about regulation of organ donations could flip a fair coin, and if heads comes up, set up an opt-in system but otherwise an opt-out system. This procedure seems to be free from any arbitrary or manipulative intention. However, as studies have shown, people are opposed to this kind of random treatment in many contexts (Heck et al. 2020; Meyer, Heck, et al. 2019a,b; Mislavsky, Dietvorst, and Simonsohn 2019). For example, one study described two possible ways in which hospital safety precautions could be displayed: on doctors' badges or on a poster on the wall; it was not known which display would be more successful at increasing patients' survival rates (Meyer, Heck, et al. 2019a). Participants rated how appropriate they found each of the two options as well as processes that randomly assigned participants to either of them. A far greater percentage of survey respondents disapproved of the procedures that included random assignment.

⁸ Cf. Section 3.4.


The aversion to being randomly assigned to one of several conditions is further illustrated by the backlash against randomized controlled trials that studied the influence of Facebook's newsfeed on emotions and voting (Bond et al. 2012; Kramer, Guillory, and Hancock 2014). As their name indicates, randomized controlled trials randomly assign participants to different treatments. When creating a profile, Facebook users consent to an algorithm determining the content of their newsfeed. Controversy arose when that content was instead randomized for scientific studies (Goel 2014; Meyer 2014; Verma 2014). Notably, criticism concerned consent to a randomized treatment only and not consent to any other treatment that did not involve randomization. Similar debates arose over tests of a software design used by the education service provider Pearson Education, of a matching algorithm employed by the dating website OkCupid, and of babies' medical treatments by physicians (Meyer, Heck, et al. 2019a; Rosenbaum 2016).

Treating people randomly thus seems to be regarded as worse than nudging them. For instance, Facebook users do not seem to mind having newsfeed content imposed on them unless that content is selected randomly. Choosing the default option randomly therefore does not seem to be a viable alternative to nudging. If there are issues of consent over random newsfeeds, then surely there are issues of consent over a government that randomly assigns its citizens to be organ donors by default or not.

In sum, adjusting the environment can change behaviour by exploiting or mitigating biases. More specifically, so-called 'nudging' changes the choice architecture without changing the choice options themselves. Concerns about autonomy and liberty that have been raised against nudging can be rebutted; nudging is, therefore, a promising way of addressing biases.

8.3 Addressing Weak-Willed Delay Discounting

Let us return to weak-willed delay discounting more specifically, which is determined by an uncertainty bias.⁹ Beyond cognitive control strategies and adjustments of the environment in general, which we considered in the previous section, research into delay discounting offers at least three more specific ways in which we can exploit or avoid our biases. These will be our topic in the current section.

Research into delay discounting has identified three aspects that determine how agents value delayed rewards: the relative amounts of the different benefits on offer; their temporal delay and the uncertainty that the delay involves; and the agent's individual sensitivity to uncertainty. Let us take them in turn.

⁹ As argued in Chapter 6.


8.3.1 Changing the Relative Amounts of Benefits

To begin, the size of the possible benefits on offer crucially determines the choice a person will make. Even if an agent discounts delayed benefits, their relative size still influences the decision: a benefit ten times the size of an alternative is still larger when it is discounted by half but a benefit just slightly larger seems inferior when discounted by the same rate. A straightforward way to address time-biased discounting, then, is to change the relative amounts of the benefits on offer (Duckworth, Gendler, and Gross 2016). In other words, both the agents themselves and others aiming to influence them may adjust the incentives by increasing or decreasing the value of an option.

A less favourable benefit can be made even less favourable (Giné, Karlan, and Zinman 2010; Royer, Stehr, and Sydnor 2015). For example, participants in one study were offered cash-back on healthy food purchases (Schwartz et al. 2014). In addition, they could opt in to lose the cash-back again if they failed to increase their purchases of healthy options by 5% over the next six months. By signing up for this challenge, participants committed themselves to getting penalized if they failed to buy healthy foods. That is, they made not buying healthy foods even less appealing by adding a cost to it. Participants who took that challenge bought more healthy food than those who did not.

On a societal or even global level, regulators may change incentives. For example, they can increase the costs of alcohol, sugar, or tobacco with a tax or by shifting subsidies away from sugar- and fat-producing industries and towards vegetable and fruit producers. Either way, they may increase the costs of unhealthier options and thus make them less valuable, which lowers consumption (Chaloupka, Powell, and Warner 2019; Elder et al. 2010; Falbe et al. 2016; Ross 2004; US National Cancer Institute and World Health Organization 2016; Wagenaar, Salois, and Komro 2009). Adjusting relevant costs and benefits in relation to each other can also incentivize individuals and corporations to act in a more climate-friendly way. For instance, increasing one's carbon footprint can be made more costly by taxes on carbon emissions (Gardiner et al. 2010; Mintz-Woo 2022).

Adjusting the relative amounts of benefits is a basic measure that can impact the actions of individuals or collective agents in general and is not restricted to delayed benefits. Other measures are not as widely applicable. Among those are adjustments of delays, to which we shall turn in the next section.
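A minimal sketch of how adding a penalty can flip a discounted comparison, in the spirit of the cash-back study above; the hyperbolic discount function, the amounts, the delay, and the penalty are illustrative assumptions.

```python
def value(amount, delay, k=1.0):
    # Hyperbolic discounting, as in this chapter's other sketches.
    return amount / (1 + k * delay)

treat_now = 5.0       # immediate pleasure of the unhealthy option
health_later = 10.0   # delayed health benefit of abstaining
delay = 4.0

# Without a penalty, the discounted treat beats the delayed benefit:
print(value(treat_now, 0) > value(health_later, delay))            # True

# Committing to a penalty (e.g. forfeiting cash-back) changes the amounts:
penalty = 3.5
print(value(treat_now - penalty, 0) > value(health_later, delay))  # False
```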

8.3.2 Adjusting the Delay

Another way to address time-biased discounting is to change the timing of the decision, of the realization of the rewards, or the delay between them. These three aspects are inter-related; for instance, changing only the time of the decision


automatically changes the delay between the decision and the reward realization. Changing the point in time when one of the rewards materializes also changes the delay between that reward and its rival.

Let us begin with the timing of the decision. Choosing the later but larger reward requires the decision to be made at a time when its discounted value is larger than the discounted value of the smaller reward, such as before the indifference point.1⁰ For instance, agents like a dieter may benefit from deciding in the morning whether to have dessert after dinner, i.e. at a time when the delay is greater, rather than during dinner when the delay is relatively short (van de Ven, Gilovich, and Zeelenberg 2010). Another illustrative example is the Christmas savings account, which was popular before the era of credit cards (Thaler and Sunstein [1999] 2008, p. 51). These are accounts customers open to deposit a fixed amount of money every week. These funds cannot be withdrawn until the end of the year for Christmas shopping. Christmas clubs usually pay little or no interest. Therefore, a customer might do as well or even better by setting aside money for Christmas shopping on their own. However, the savings account may protect the user from their own bias to spend money on immediately available but smaller benefits during the year, which would undermine saving for Christmas.

Conversely, deciding about the timing of rewards that have negative value, i.e. unpleasant events like surgeries or tasks, from a greater temporal distance may preclude putting them off when they are immediate. In other words, fixing the date and time of an ordeal when it is in the distant future may prevent postponing or procrastinating when it is immediate (Heath and Anderson 2010). For example, a randomized controlled trial in Kenya studied farmers facing the decision of whether and when they should make a costly investment in fertilizer, which pays off later on because of an improved harvest (Duflo, Kremer, and Robinson 2011). Incentivizing farmers to make the costly investment earlier in the season was more successful than incentivizing them (even more) later in the season. Deciding to commit to a chore earlier on thus seems more advantageous than postponing the decision until it may be too late to reap its benefits.

The literature on precommitment or commitment devices provides further information about how the timing of a decision affects its outcome. Precommitment literally commits the agent to one of the choice options before ('pre') the actual time of decision.11 Human and non-human animals have been found to use this strategy successfully by e.g. eliminating sub-optimal options ahead of time (Ainslie 1974; Wertenbroch 1998). For example, if participants in a controlled experiment were able to eliminate a smaller but sooner reward before a delay period, they were

1⁰ Cf. Figure 2, Section 5.2. 11 We have encountered examples of precommitment in Section 3.4 (Elster [1979] 2013; Mele 2012).


[Figure 8: two panels, (a) and (b), plotting the discounted values V(LL) and V(SS) against time; plot image not reproduced.]

Figure 8 Discounted values of a later, larger (LL) and a smaller, sooner (SS) reward with two different times of realization tS and tL. tS is earlier in (a) than in (b), and the time span between tS and tL is greater in (a) than in (b). Accordingly, only in (a) is there an indifference point where the discount curves cross, and some point in time tR when the agent prefers SS over LL. At that time, the discounted value of SS is greater than that of LL: E(SS)tR > E(LL)tR. In (b), the agent prefers LL over SS at all times.

For example, participants in a controlled experiment who were able to eliminate a smaller but sooner reward before a delay period were more likely to choose a later but larger alternative than participants who could either reverse their initial choice or not precommit to it at all (Crockett et al. 2013).

Changing the time of realization of a reward, or the delay between it and its alternative, is a further approach that may determine relative preference and choice. For example, delaying the smaller reward further, and thereby narrowing the time period between its realization and that of its alternative, may avoid indifference points and thus preference reversals altogether. Figure 8 illustrates this: in scenario (a), the agent temporarily prefers the smaller reward over the larger one; in scenario (b), the smaller reward's time of realization is later and the agent thus never reverses their preference. Similar results can obtain if the realization of the larger reward is changed to an earlier point in time.

Here is a case that illustrates how this approach can be put to use. Imagine that you are booking a table for dinner with friends when you realize that, the same evening, a popular sports match is scheduled for 8 pm. You know that you and your guests will be tempted to watch the game and either be late for dinner or skip it entirely. Most likely, everyone plans to attend the dinner at the time they read the invitation. But on the evening itself, they will probably switch on their TV and get sucked into watching the game. Suppose you cannot change the date of the dinner but you can book a table for either 8.30 pm or 9 pm. Which one should you choose? The timing may not be decisive for the devoted friends or ardent sports fans, for whom the difference in value between the two options is so great that delay discounting does not affect it. However, you may sway some of the more indecisive lot with an earlier dinner. You may push the realization of the later reward (dinner) close enough to that of the earlier one (the game) that its relative value becomes greater than that of the alternative. Thereby, at least some of your guests may opt for the dinner rather than the TV.
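The mechanism that Figure 8 illustrates can be reproduced numerically. The following sketch assumes a hyperbolic discount function; the reward values, discount rate, and realization times are illustrative assumptions.

```python
import numpy as np

def discounted_value(v, k, t_reward, t_now):
    """Hyperbolic discounting: E = V / (1 + k * delay), with delay >= 0."""
    delay = np.maximum(t_reward - t_now, 0.0)
    return v / (1.0 + k * delay)

V_SS, V_LL = 1.0, 2.0   # smaller, sooner vs larger, later reward
k = 1.0                 # illustrative discount rate
t_L = 10.0              # realization time of LL

for t_S in (6.0, 9.5):  # scenario (a): early SS; scenario (b): SS close to LL
    times = np.linspace(0.0, t_S, 200)
    prefers_SS = discounted_value(V_SS, k, t_S, times) > discounted_value(V_LL, k, t_L, times)
    print(f"t_S = {t_S}: agent ever prefers SS before its realization? {prefers_SS.any()}")
    # t_S = 6.0 -> True (an indifference point exists); t_S = 9.5 -> False
```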


Often it is not possible to adjust delays or the timing of decisions or rewards in ways like these. In most cases, we can shift self-imposed deadlines at will or consume a treat the moment it tempts us. The problem here is that, when the time of action comes, delay and uncertainty are disproportionately smaller for what we previously, and later on, tend to regard as the suboptimal option. For example, although a dieter might think in the morning and the next day that declining dessert after dinner would be best for their health, at the time when they would have to decline it, the dessert is much more appealing than some remote health benefit. To address issues like these, strategies are available to reduce uncertainty in the moment of temptation.

8.3.3 Reducing Uncertainty

Reducing uncertainty tends to prevent agents from deciding anew on the spur of the moment. This, in turn, facilitates sticking to and carrying out a decision or resolution made in the past. For instance, psychologists have developed a cognitive device called 'implementation intentions' (Gollwitzer 1999). This strategy requires that an agent can decide in advance how to act at a later point in time. For example, a dieter may decide in the morning to decline dessert in the evening. As they foresee that there will be a later occasion to decide anew and be lured by temptation, they may also form a specific intention to resist. Such an implementation intention takes the form of a conditional like 'when I am offered dessert, I politely decline'. In other words, the agent devises a plan to implement the decision made in advance. Such planning reduces uncertainty at the time of action because it tends to prevent the agent from revising their initial decision. A large body of research has shown that implementation intentions may indeed help agents to circumvent time biases to some extent (Gollwitzer and Brandstätter 1997; Gollwitzer and Sheeran 2006; Verplanken and Faes 1999). For example, they support people in casting their vote at elections (Nickerson and Rogers 2010) or getting a flu shot (Milkman et al. 2011). Disadvantaged children have successfully used them to improve their academic performance (Duckworth, Kirby, et al. 2013).

Habits also reduce uncertainty. Habits are semi- or fully automatic actions typically triggered by a cue (Wood and Rünger 2016). For example, most agents have a habit of buckling their seat belt when they get into a car. We form habits like this by repeatedly executing a behaviour, like buckling the seat belt, when cued to do so by the environment, like the interior of a car. Psychologically, habits are formed through classical or operant conditioning. In classical conditioning, an unrelated stimulus, like a sound, is repeatedly paired with a reward, such as a snack. Eventually, the organism associates the stimulus with the reward. For example, Pavlov famously trained his dogs to associate food with the sound of a bell (Barsalou 2014; Pavlov 1927; Rescorla 1988).


Operant or instrumental conditioning connects a stimulus, a voluntary action, and a reward. For example, a dog trainer might give a verbal command to signal that the dog ought to perform a specific action, e.g. sit down. If the dog executes the behaviour correctly, the trainer may reward it with a treat (positive reinforcement). If the dog fails, the trainer might punish it. Over time, the dog learns to associate the command with a certain course of action and will execute it automatically when cued.

Both classical and operant conditioning can be used to create habits that circumvent time biases (Galla and Duckworth 2015; Lally et al. 2010; Neal, Wood, and Drolet 2013). For example, an agent might wish to develop a habit of exercising in the morning to improve their fitness and well-being. However, they usually find themselves tempted to head straight to the breakfast table, with the prospect of better health and fitness lingering only vaguely in their mind. To establish a habit of exercising, they might place their exercise gear in front of the door to the room where they tend to have breakfast. The gear serves as a stimulus that cues exercising. If the agent successfully executes the intended behaviour, they pick up a fresh croissant from the corner shop. This treat serves as a reward for exercising. Repeatedly completing the cycle of cue, action, and reward facilitates exercising over time, thus establishing a habit.

Reducing uncertainty thus influences delay discounting and in turn may prevent weak-willed action. While this approach is promising especially in cases where we can change our environment according to our needs, an approach targeting our individual sensitivity towards uncertainty may be even more widely applicable across different situations. We turn to this strategy next.

8.3.4 Changing Individual Sensitivity towards Uncertainty

The extent to which an agent discounts benefits with their delay depends not only on their relative amounts, delay, or uncertainty but also on the agent's individual sensitivity to uncertainty.12 More specifically, it depends on their sensitivity to the uncertainty inherently involved in temporal delay. Therefore, one way in which delay discounting can be changed is by changing the agent's sensitivity towards uncertainty, i.e. how easily and profoundly their choices and actions are affected by uncertainty.

Changing this sensitivity may be difficult, even impossible, and it may at best happen over a longer period of time. Growing up, agents presumably learn to expect that the uncertainty of their environment falls within a certain range, even though it may fluctuate to some degree (Kidd, Palmeri, and Aslin 2013; McGuire and Kable 2012).

12 Cf. Chapter 6.


If children are provided with stable surroundings that involve little uncertainty, they may grow more tolerant of uncertainty, and consequently discount delayed benefits less steeply. The more an adolescent views their family as organized and cohesive, for example, the less likely they are six years later to engage in unhealthy behaviour like smoking, substance abuse, or violence (Fisher and Feldman 1998; cf. Repetti, Taylor, and Seeman 2002). Conversely, insecurity in a family, such as frequent residential relocations or separation from a parent, is associated with emotional instability in children, independently of their socioeconomic status (Adam 2004).

On a societal level, policy measures that target social and economic insecurity may indirectly improve people's individual sensitivity to uncertainty. For one thing, obesity rates tend to be lower and personal savings tend to be higher when financial security is greater (Loewenstein 2018). Conversely, poverty promotes short-sighted and risk-averse decision-making (Bernheim, Ray, and Yeltekin 2015; Haushofer and Fehr 2014). Welfare states mitigating social and economic uncertainty may soften discounting, and thus ease problems stemming from it.

8.4 Conclusion

How can we deal with weak-willed delay discounting, which is determined by a bias against uncertainty? In this chapter, I have argued that although attempts to resist immediate temptations may at times be successful, other strategies are more promising, as our approach to biases in other domains shows. Specifically, by adjusting the environment of our actions in a way that serves our long-term goals, we increase our chances of reaching them more efficiently. These strategies can be employed on an individual as well as on a societal level. Within a discounting framework, we can pursue the following three options. First, we may adjust the relative amounts of the benefits on offer. Second, we may adjust the delay, and thus the uncertainty it involves, by, say, shifting the point in time at which a decision is made. Third, we may change the sensitivity of individuals towards uncertainty.


9 Conclusion

We conclude this monograph by, first, summing up the main findings of the work it presented and, second, identifying and discussing questions we had to leave open and which might provide avenues for future research.

For over two millennia, philosophers have characterized weakness of will in different ways. Recently, some have built on a tradition in economics, psychiatry, and the behavioural and brain sciences that accounts for the phenomenon in terms of time or delay discounting. Delay discounting is, typically, valuing a future benefit less with increased temporal distance. On this view, weakness of will is understood as an (overly) steep discounting of a future benefit. For example, a weak-willed child in the marshmallow experiment initially begins to wait for the experimenter to return with a large reward but then eats a readily available but lesser treat instead. This preference reversal occurs because the child discounts the larger reward too much compared to the smaller one.

We have examined philosophical accounts of weakness of will and delay discounting models in detail.1 As it has turned out, delay discounting models allow but do not require that a weak-willed agent reverses their preferences. Relatedly, preference reversals may be caused by exponential delay discounting but need not occur in hyperbolic discounting, and they are neither necessary nor sufficient for weakness of will. Lastly, some but by far not all cases of weakness of will are due to preference reversals and delay discounting. Conversely, not all preference reversals and cases of delay discounting are also weak-willed. Therefore Part III has focused on those phenomena that may be described as 'weak-willed delay discounting', i.e. that are examples of both weakness of will and delay discounting.

Weak-willed delay discounting may largely be due not to the delay itself but to the uncertainty that the delay involves. When deciding about a delayed option, we automatically take into account the uncertainty about when, if ever, it will materialize. Individual agents differ with respect to their idiosyncratic sensitivity towards this uncertainty. In the marshmallow experiment, for example, a child waiting for the experimenter to return with a large treat is uncertain about when it will materialize, if at all. Probably, the child has some initial expectations.

1 In Parts I and II, respectively.



As they await the return of the experimenter, they may have to revise these expectations. If hopes for an early return of the experimenter are continually dashed, the child may eventually decide to opt for the secure option and consume the smaller treat.

This understanding of weak-willed delay discounting suggests that it is grounded in a cognitive bias against uncertainty. A cognitive bias is a tendency that influences our actions, often without our awareness. This suggests three strategies for addressing problematic delay discounting on an individual or societal level: we may adjust the sizes of delayed benefits, we may tweak their delay or uncertainty, and in the longer term we may change how susceptible our behaviour is to delay or uncertainty. Whatever option we choose, organizing our physical and social environment to fit our purpose can increase our chances of success.

The results presented in this monograph have raised at least two questions that could not be answered within the scope of our research. Doing so may be a task for future work.

First, although weak-willed delay discounting seems to account for a large portion of the cases that philosophers would refer to as 'weakness of the will', it does not cover all of them. There is conceptual space for some examples of weakness of will that are not instances of weak-willed delay discounting. This monograph has developed a suggestion that can account for a greater share of cases than the delay discounting models found in the philosophical literature so far.2 However, this project is not yet finished. Future work may benefit from advances in interdisciplinary research to develop an even more powerful philosophical account of weakness of the will.

Second, weak-willed delay discounting is commonly regarded as problematic because it seems irrational. However, it remains unclear why and under what conditions delay discounting and, relatedly, biases about time and uncertainty are irrational.3 Answering those questions is impossible as long as we lack a clear definition of rationality. Yet there is no consensus in philosophical research on rationality. Future work addressing this issue promises to advance accounts of weak-willed delay discounting, and the philosophy of weakness of the will more generally.

2 Cf. Chapter 6.

3 Cf. Chapter 7.


Appendices

A Models of Weak-Willed Discounting

Weak-willed delay discounting can be modelled in three ways: as an overly steep discounting of a delayed reward, as the choice of a smaller, sooner over a larger, later reward, or as a preference reversal in this direction.1 This section shows that the third description implies the second one, i.e. a reversal of preferences between a smaller, sooner and a larger, later reward implies that the larger reward is discounted more steeply than the smaller one. That is, the discounted value E of one reward A at one point in time or with some delay d1 is smaller than that of another reward B, yet the discounted value of A at another point in time or with some other delay d2 is larger than that of B:2

E(d1, V(A)) < E(d1, V(B)) ↔ fA(d1) × V(A) < fB(d1) × V(B)   (i)
E(d2, V(A)) > E(d2, V(B)) ↔ fA(d2) × V(A) > fB(d2) × V(B)   (ii)

V is the (un-discounted) value of a reward and f is a discount function. These equations can only be true if f takes different forms for A and B in at least one instance. Here is why. Either A is less valuable than B, or B is less valuable than A, or they are of the same value. This gives us the following mutually exclusive and jointly exhaustive options:

(iii.a) V(A) < V(B)
(iii.b) V(A) > V(B)
(iii.c) V(A) = V(B)

Consider each in turn.

(a) If V(A) < V(B):

V(A) < V(B) ↔ V(A) × p = V(B), p > 1   (iv.a)
fA(d2) × V(A) > fB(d2) × V(A) × p      (iv.a) in (ii)
fA(d2) > fB(d2) × p
fA(d2) > fB(d2)

(b) If V(A) > V(B):

V(A) > V(B) ↔ V(B) × p = V(A), p > 1   (iv.b)
fA(d1) × V(B) × p < fB(d1) × V(B)      (iv.b) in (i)
fA(d1) × p < fB(d1)
fA(d1) < fB(d1)

(c) If V(A) = V(B):

fA(d1) < fB(d1)   (iii.c) in (i)
fA(d2) > fB(d2)   (iii.c) in (ii)

In short, within the framework of delay discounting theory, a preference reversal between two rewards of unequal size implies that the value of the larger of the two rewards is discounted more steeply than that of the smaller one.

1 Cf. Section 5.3.  2 Cf. equation 3, Section 5.1.
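The implication just derived can be checked with a minimal numerical sketch. The discount factors below are illustrative assumptions chosen to produce a reversal; they are not taken from the text.

```python
# Case (b) of Appendix A: V(A) > V(B) with a preference reversal between
# delays d1 and d2 forces the discount factors for A and B apart.
V_A, V_B = 2.0, 1.0            # un-discounted values, V(A) > V(B)

f_A = {"d1": 0.3, "d2": 0.9}   # assumed discount factors for A
f_B = {"d1": 0.8, "d2": 0.95}  # assumed discount factors for B

assert f_A["d1"] * V_A < f_B["d1"] * V_B   # (i): B preferred at d1
assert f_A["d2"] * V_A > f_B["d2"] * V_B   # (ii): A preferred at d2
assert f_A["d1"] < f_B["d1"]               # the larger reward A is discounted more steeply
print("reversal consistent only with fA != fB")
```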

B Hyperbolic Delay Discounting

This appendix describes some further details of the hyperbolic delay discount model. The first section shows that the discount rate k cannot be the same for both the later, larger and the smaller, sooner reward if synchronic preference reversals are possible. The subsequent section further specifies the relative size of the two values for k. The last section specifies the relative size of k for the case of diachronic preference reversals.

Hyperbolic discount models specify the discount function f as a hyperbolic function. The graph of such a function is a hyperbola. f typically takes the form

f(d) = 1 / (1 + k × d), d ≥ 0   (22)

with a constant k and delay d. Accordingly, the expected value is, for a reward with value V,

E(d) = f(d) × V = V / (1 + k × d)   (23)

Quasi-hyperbolic discount models are also popular in the literature (Ainslie 2012, p. 10). Their discount function is (Laibson 1997): f(d) = β × d^δ, with delay d and two constant terms β and δ. Mathematically, the hyperbolic model can be seen as a specific transformation3 of a quasi-hyperbolic model with β = 1 and δ = −1.

3 Start with the quasi-hyperbolic discount function as stated in equation 14, and assume that β = 1 and that δ = −1: f(d) = β × d^δ = 1 × d^(−1) = 1/d. Let us now define d as a positive affine transformation of D: d = k × D + 1, where k is a constant term. Replacing d with D above yields f(D) = 1/(k × D + 1). This is just the hyperbolic discount function as stated in equation 22.



Synchronic Preference Reversals Require Two Different Values for k

If synchronic preference reversals are possible, then the agent cannot discount the relevant rewards with one and the same discount rate k. Reductio ad absurdum shows why. Assume that (a) all delayed rewards are discounted with the same rate k, and (b) synchronic preference reversals are possible. That is, it is possible for an agent to have, at one and the same time, a preference for A over B when both rewards are delayed with a delay d1, and also a preference for B over A when both rewards are delayed with d2 (d1 < d2). The un-discounted values of A and B are V(A) and V(B), respectively. Call their values discounted with a delay d 'E(A)d' and 'E(B)d', respectively. Given (b), call 'd1' a delay for which E(A) > E(B) and 'd2' a delay for which E(A) < E(B). Assume that 0 < d1 < d2, that is, E(A)d1 and E(B)d2 are not identical with the un-discounted values V(A) and V(B), respectively. Then:

E(A)d1 > E(B)d1

This is, given equations 3 and 12, equivalent to

V(A) / (1 + k × d1) > V(B) / (1 + k × d1)

Thus, V(A) must be larger than V(B):

V(A) > V(B)

Consider now the expected values of A and B at d2. By assumption, we have

E(A)d2 < E(B)d2

This is, again given equations 3 and 12, equivalent to

V(A) / (1 + k × d2) < V(B) / (1 + k × d2)

This inequality can only be true if V(A) < V(B). So, from our assumptions, we have to conclude that the un-discounted value of A has to be both smaller and larger than the un-discounted value of B. This is absurd. At least one of the assumptions has to be wrong. Because we have much better and more solid evidence for the existence of synchronic preference reversals, assumption (b) should not be abandoned. So we have to reject (a), the assumption that k is the same for discounting the values of both A and B.

Synchronic Preference Reversals: Size of k

We can further specify the relative size of k for synchronic delay discounting. Let A and B be two rewards with discount rates kA and kB, respectively. Consider two unequal positive delays, d1 and d2, with d1 < d2. Assume there is a delay di (d1 < di < d2) for which the agent is indifferent between A and B.


We thus have:

E(A)di = E(B)di
⇔ V(A) / (1 + kA × di) = V(B) / (1 + kB × di)

We know that 0 < d1 < di < d2. We can also safely assume that the discounted values of A and B are smaller than their un-discounted ones. Then the discount rates kA and kB need to be positive. So all denominators are positive. If we then multiply each side of the equation by (1 + kA × di)(1 + kB × di), we have:

(1 + kB × di) × V(A) = (1 + kA × di) × V(B)

Because we know that, at d1 < di, E(A) > E(B), it seems safe to assume that V(A) > V(B). This is because, when there is no delay, i.e. d0 = 0, E(A) = V(A) and E(B) = V(B). Also, d0 < d1 < di. So, if the equation stated above is true, and if V(A) > V(B), it has to be true that:

1 + kB × di < 1 + kA × di

(For instance, given V(A) > V(B), you can assume that V(A) = V(B) × g with g > 1. Replacing V(A) with V(B) × g makes it clear that it has to be the case that 1 + kB × di < 1 + kA × di.)

So we can derive (because di > 0):

1 + kB × di < 1 + kA × di
⇔ kB × di < kA × di
⇔ kB < kA

Hence, the discount rate kA for the larger reward A is greater than the discount rate kB for the smaller reward B.
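A minimal sketch of this result, with illustrative values for V(A), V(B), kA, and kB (kA > kB, as just derived):

```python
# Hyperbolic discounting with two rates: a synchronic preference reversal
# between a larger reward A and a smaller reward B requires k_A > k_B.
def E(v, k, d):
    """Discounted value under hyperbolic discounting (equation 23)."""
    return v / (1.0 + k * d)

V_A, V_B = 2.0, 1.0    # V(A) > V(B)
k_A, k_B = 1.0, 0.1    # assumed discount rates with k_A > k_B

d1, d2 = 0.5, 10.0     # a short and a long delay
print(E(V_A, k_A, d1) > E(V_B, k_B, d1))  # True: A preferred at d1
print(E(V_A, k_A, d2) < E(V_B, k_B, d2))  # True: B preferred at d2
```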

Diachronic Preference Reversals: Size of k

This section shows that hyperbolic discounting can describe preference reversals on the assumption that there is just one discount rate k with which an individual discounts all delayed rewards. Consider a binary choice between two rewards A and B with un-discounted values V(A) and V(B) and discounted values at some point in time t (t > 0) E(A)t and E(B)t, respectively. If it is possible that a preference reversal occurs, then there can be some point in time ti at which the agent is indifferent between A and B. Consider some point in time t1 prior to ti, and a second point in time t2 after ti. Let us make the conditional assumptions that, if ti exists, then, at t1, the agent prefers A over B, and, at t2, the agent prefers B over A. Assume that 0 < t1 < ti < t2. If such preference reversals can occur, the following equation will have a solution:


E(A)ti = E(B)ti

That is, the discounted values of A and B at ti are the same. Given equations 3 and 12, this is equivalent to

V(A) / (1 − (ti − tA) × k) = V(B) / (1 − (ti − tB) × k)   (24)

with a constant k. If this equation has a solution, then the agent prefers A over B before ti. This implies that, at t = 0, V(A) > V(B). But then equation 24 can only be true if:

1 / (1 − (ti − tA) × k) < 1 / (1 − (ti − tB) × k)   (25)
⇔ 1 / (1 − (ti − tA)) < 1 / (1 − (ti − tB))   (26)

Because 0 < ti < tA and 0 < ti < tB, it has to be the case that (ti − tA) < 0 and (ti − tB) < 0. Therefore, 1 − (ti − tA) > 0 and 1 − (ti − tB) > 0. We can thus multiply inequality 26 by (1 − (ti − tA))(1 − (ti − tB)):

1 / (1 − (ti − tA)) < 1 / (1 − (ti − tB))
⇔ 1 − (ti − tB) < 1 − (ti − tA)
⇔ −(ti − tB) < −(ti − tA)
⇔ ti − tB > ti − tA
⇔ −tB > −tA
⇔ tB < tA

It follows that the smaller reward B is expected earlier than the larger reward A. Only if this condition is fulfilled is it possible to discount both rewards with the same discount rate k. To conclude, the hyperbolic discounting model can describe a diachronic preference reversal between two rewards of different sizes using just one discount rate k, provided that the smaller reward is expected earlier than the larger reward.
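A minimal numerical sketch, assuming one shared discount rate k and illustrative reward values and realization times with tB < tA:

```python
# Diachronic reversal under hyperbolic discounting with a single rate k:
# possible when the smaller reward B is realized earlier than A.
def E(v, k, t_reward, t_now):
    """Discounted value with delay t_reward - t_now (equation 23)."""
    return v / (1.0 + k * (t_reward - t_now))

V_A, V_B = 2.0, 1.0     # larger, later vs smaller, sooner
t_A, t_B = 10.0, 6.0    # realization times, t_B < t_A
k = 1.0                 # one shared discount rate

t1, t2 = 0.0, 5.9       # before and after the indifference point (here t_i = 3)
print(E(V_A, k, t_A, t1) > E(V_B, k, t_B, t1))  # True: A preferred early on
print(E(V_A, k, t_A, t2) < E(V_B, k, t_B, t2))  # True: B preferred close to t_B
```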

C Exponential Delay Discounting

This section proves, in two subsections, that exponential models can describe synchronic and diachronic preference reversals, respectively.

Synchronic Preference Reversals

The exponential discount model can account for synchronic preference reversals. Consider a binary choice between two rewards of different values, A and B, with V(A) > V(B). Assume the agent discounts the values of the rewards with discount rates rA and rB (rA ≠ rB), respectively. If synchronic preference reversals are possible, then there is some delay di (di > 0) for which the agent is indifferent between the two rewards. This is the case if the following equation has a solution (cf. equations 3 and 7):

E(A)di = E(B)di
⇔ e^(−rA × di) × V(A) = e^(−rB × di) × V(B)

Because V(A) > V(B), we can stipulate that:

V(A) = p × V(B), p > 1

Replacing V(A) in the equation above, we obtain:

e^(−rA × di) × p × V(B) = e^(−rB × di) × V(B)
⇔ p × e^(−rA × di) = e^(−rB × di)
⇔ ln(p × e^(−rA × di)) = ln(e^(−rB × di))
⇔ ln(p) − rA × di = −rB × di
⇔ ln(p) = rA × di − rB × di = di × (rA − rB)
⇔ ln(p) / di = rA − rB

Because we know that p > 1, we know that ln(p) > 0. We also know that di > 0. So we know that ln(p)/di > 0. Therefore, we can conclude that there is a solution if rA > rB > 0. That is, an exponential discount model can describe synchronic preference reversals if the positive discount rate for the larger reward is bigger than the positive discount rate for the smaller reward.
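A minimal numerical check, with illustrative rates and values (the indifference delay di = ln(p)/(rA − rB) is computed from the derivation above):

```python
import math

# Exponential discounting with two rates: a synchronic reversal occurs
# when the larger reward's rate exceeds the smaller reward's rate.
def E(v, r, d):
    """Discounted value under exponential discounting: e^(-r*d) * V."""
    return math.exp(-r * d) * v

V_A, V_B = 2.0, 1.0    # V(A) > V(B), so p = 2
r_A, r_B = 0.5, 0.1    # r_A > r_B > 0

d_i = math.log(V_A / V_B) / (r_A - r_B)  # indifference delay, ln(p)/(rA - rB)
print(E(V_A, r_A, 0.5 * d_i) > E(V_B, r_B, 0.5 * d_i))  # True: A preferred at short delays
print(E(V_A, r_A, 2.0 * d_i) < E(V_B, r_B, 2.0 * d_i))  # True: B preferred at long delays
```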

Diachronic Preference Reversals

This section shows that the exponential discount model can account for diachronic preference reversals. Recall that the hyperbolic discount model does so on the assumption that the agent makes a choice between a larger, later reward A and a sooner, smaller reward B. We shall make the same assumptions for the exponential model. That is, V(A) > V(B) and tA > tB > 0, the respective times of realization for A and B. For diachronic reversals, there must be a point in time ti > 0 at which the agent is indifferent between the two options:

E(A)ti = E(B)ti

We can state this, given equations 3 and 8, as:

e^(rA × (ti − tA)) × V(A) = e^(rB × (ti − tB)) × V(B)

Because V(A) > V(B), we can stipulate that

V(A) = p × V(B), p > 1

and because tA > tB:

tA = q × tB, q > 1

Replacing V(A) and tA accordingly, we have:

e^(rA × (ti − q × tB)) × p × V(B) = e^(rB × (ti − tB)) × V(B)
⇔ e^(rA × (ti − q × tB)) × p = e^(rB × (ti − tB))
⇔ ln(e^(rA × (ti − q × tB)) × p) = ln(e^(rB × (ti − tB)))
⇔ ln(p) + rA × (ti − q × tB) = rB × (ti − tB)
⇔ ln(p) = rB × (ti − tB) − rA × (ti − q × tB)

We know that p > 1, therefore ln(p) > 0. So it has to be the case that:

rB × (ti − tB) > rA × (ti − q × tB)
⇔ rB < rA × (ti − q × tB) / (ti − tB)

As we know that tB > ti > 0, we can infer that (ti − tB) < 0. Similarly, as q × tB > ti, it is the case that (ti − q × tB) < 0. As q × tB > tB, we can infer that (ti − q × tB) < (ti − tB) < 0. Thus (ti − q × tB) / (ti − tB) > 1. So we can conclude that rB ≠ rA. In other words, diachronic preference reversals are possible on the assumption that the discount rate r is not the same for both rewards.
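A minimal numerical sketch with illustrative values satisfying these assumptions (V(A) > V(B), tA > tB > ti, and rA ≠ rB):

```python
import math

# Diachronic reversal under exponential discounting with two rates
# r_A != r_B, evaluated at times before the smaller reward's realization.
def E(v, r, t_reward, t_now):
    """Discounted value e^(r*(t_now - t_reward)) * V, for t_now <= t_reward."""
    return math.exp(r * (t_now - t_reward)) * v

V_A, V_B = 2.0, 1.0    # V(A) > V(B)
t_A, t_B = 10.0, 2.0   # t_A > t_B > 0
r_A, r_B = 0.1, 0.5    # illustrative rates, r_A != r_B (here t_i ~ 1.73)

print(E(V_A, r_A, t_A, 0.0) > E(V_B, r_B, t_B, 0.0))  # True: A preferred at t = 0
print(E(V_A, r_A, t_A, 1.9) < E(V_B, r_B, t_B, 1.9))  # True: B preferred near t_B
```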

D Sozou’s Model

Sozou (1998) suggests that agents take an uncertain constant hazard rate into account when discounting the value of future rewards. On this view, the discount function is a survival function s(t). s(t) specifies the probability that no hazard occurs until t.⁴ So, the value E(t) of a reward obtained at t is E(t) = s(t) × V.⁵

s(t) is the probability of ‘surviving’ from now (t0) until t, i.e. throughout the entire interval Δt = t − t0. There are two ways to specify this survival. First, we calculate the probability of surviving until t and multiply it by the probability that no hazard occurs in the subsequent time interval until t + dt. Sozou uses a hazard function h(t) to specify the probability that a reward is lost for good. This probability is conditional on the hazard’s not having occurred before t. The risk of losing the reward for good between a point in time t and some small increment of time dt is thus:

Δh = h(t + Δt) − h(t)   (27)

⁴ For details of this rationale, cf. Section 6.1. ⁵ Cf. equation 16 in Section 6.1.


for any difference Δh between h(t) and h(t + Δt). For very small Δt, when t + Δt ‘approaches’ t, one has:

Δh = h′(t) × Δt   (28)

This is commonly expressed as:

dh = h′(t) dt   (29)

‘dh’ and ‘dt’ indicate infinitesimally small intervals.

The probability of surviving for another period of time t plus a small increment of time dt, i.e. s(t + dt), is equivalent to surviving until t and no hazard occurring between t and t + dt. This, in turn, is just 1 (certainty) minus the probability of the hazard occurring between t and t + dt: (1 − h(t)dt). So we can calculate s(t + dt) by multiplying s(t) and (1 − h(t)dt):

s(t + dt) = s(t)(1 − h(t)dt) = s(t) − s(t)h(t)dt   (30)

Second, the probability of surviving during some future interval is the probability of surviving until t + Δt minus the probability of surviving until t. That is, we calculate the difference between s(t) and s(t + dt):

s(t) − s(t + dt) = s(t) − (s(t) + ds) = −ds   (31)

Replacing s(t + dt) in 31 with 30 and transforming the result yields:

−ds = s(t) − s(t + dt)
    = s(t) − s(t) + s(t)h(t)dt
    = s(t)h(t)dt
⇔ −ds/dt = s(t)h(t)
⇔ h(t) = −(ds/dt) × (1/s(t))   (32)

Assume that the hazard rate is, by definition, a constant r for all t:

h(t) = r   (33)

Then, given 32:

h(t) = r = −(ds/dt) × (1/s(t))   (34)
⇔ ds/s(t) = −r dt
⇔ ∫ (1/s(t)) ds + C = ∫ −r dt + C
⇔ ln|s(t)| = −rt
⇔ |s(t)| = e^(−rt)   (35)

For positive delays t > 0, s(t) will be positive as well. Thus:

s(t) = e^(−rt)
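This exponential survival function can be checked by simulation. The hazard rate, horizon, and step size below are illustrative assumptions.

```python
import numpy as np

# Under a constant hazard rate r, the fraction of trials in which no hazard
# strikes before t should approach s(t) = e^(-r*t) (equation 35).
rng = np.random.default_rng(0)
r, t, dt = 0.2, 5.0, 0.001
n_steps = int(t / dt)   # discretize time into small steps
p = r * dt              # per-step hazard probability

# The step on which the first hazard strikes is geometrically distributed.
first_strike = rng.geometric(p, size=1_000_000)
survival_fraction = (first_strike > n_steps).mean()

print(survival_fraction)   # simulated survival, ~0.368
print(np.exp(-r * t))      # analytic value e^(-1) ~ 0.368
```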

E Dasgupta and Maskin’s Model

Dasgupta and Maskin (2005)’s model specifies the expected value E as a function of the un-discounted value V, the anticipated time of realization T, and the ‘present’ point in time τ: E(V, T, τ). As explained in Section 6.2:

E = a(T) × b(T) × V + a(t) × b(t) × V = (a(T)b(T) + a(t)b(t)) × V   (36)

a is the probability of receiving the reward, b the probability of no hazard occurring, and V the un-discounted value of the reward. Both a and b are in turn functions of T, the anticipated time of realization, and t, the time of early or late realization (t ≠ T).

Focus on a first. The model assumes that the probability of receiving the reward at some point in time t is distributed according to a probability density function Q(t).⁶ By integrating over the probability density function Q(t), we can calculate the probability of receiving a reward. The probability p that the reward realization takes place during an interval Δt = (t1, t2) is thus:

p(Δt) = p(t1, t2) = ∫_{t1}^{t2} Q(t) dt

Because the probability that the reward is realized at some precise point in time τ approaches 0, Dasgupta and Maskin stipulate an exception for τ = T, the time at which we anticipate receiving the reward. The model postulates a so-called probability atom at T, i.e.:

lim_{t→0} ∫_{T−t}^{T+t} Q(x) dx > 0

There is one further complication. Q(t) specifies the probability density for reward realization at t. But we seek the probability density of reward realization at t conditional on the reward not having been realized before t. Call this conditional probability ‘Q̂’. To determine Q̂, we divide Q(t) by the probability of early realization. Because there is a probability atom at t = T, we consider two alternative cases: t < T and t > T.

Focus on t < T first. Assume we start waiting at t0 = 0. Then the probability of receiving the reward before t is:

∫_{0}^{t} Q(x) dx

⁶ Cf. Section 6.2.


Because the model assumes that the reward will be realized with certainty:

∫_{0}^{∞} Q(x) dx = 1   (37)

Accordingly:

1 − ∫_{0}^{t} Q(x) dx   (38)

is the probability that we do not receive the reward before t.

Now consider the second case, t > T. Again, we know that:

∫_{0}^{∞} Q(x) dx = ∫_{0}^{t} Q(x) dx + ∫_{t}^{∞} Q(x) dx = 1   (39)

So, we can express equation 38 as:

∫_{t}^{∞} Q(x) dx   (40)

Combining equations 38 and 40, we can now formulate an equation for Q̂(t):

Q̂(t) = Q(t) / (1 − ∫_{0}^{t} Q(x) dx)   if t < T
Q̂(t) = Q(t) / ∫_{t}^{∞} Q(x) dx         if t > T   (41)

Accordingly, the probability of receiving the reward at some point in time τ is

1 = ∫_{τ}^{∞} Q̂(t) dt + (1 − ∫_{τ}^{∞} Q̂(t) dt)   (42)

Equation 42 gives us a, the probability of receiving the reward, conditional on not having received it yet, that is, before τ.

Let us now turn to b, the probability of no hazard occurring until reward realization. As explained in Appendix D, if there is a constant hazard rate r per unit of time t, the probability of not losing the reward during some interval t is s(t) = e^(−rt). Accordingly, the probability of not losing the reward until T is

e^(−rT)   (43)

and the probability of not losing the reward until some point in time t ≠ T is

e^(−rt)   (44)

We can now combine a and b to determine equation 36. The first summand is a(T)b(T). As stated in equation 43, b(T) = e^(−rT). As stated in equation 42, a(T) = 1 − ∫_{τ}^{∞} Q̂(t) dt. The second summand is a(t)b(t). As stated in equation 44, b(t) = e^(−rt). As stated in equation 42, a(t) = ∫_{τ}^{∞} Q̂(t) dt. Taken together, we can thus express E as:


E(τ, V, T) = (a(T)b(T) + a(t)b(t)) × V
           = ((1 − ∫_{τ}^{∞} Q̂(t) dt) × e^(−rT) + ∫_{τ}^{∞} Q̂(t) dt × e^(−rt)) × V
           = ((1 − ∫_{τ}^{∞} Q̂(t) dt) × e^(−rT) + ∫_{τ}^{∞} Q̂(t) e^(−rt) dt) × V
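A minimal numerical sketch of this expected value. The conditional density Q̂ below, a probability atom at T plus a uniform density over a late window, and all parameter values are illustrative assumptions, not part of Dasgupta and Maskin’s paper.

```python
import numpy as np

# Sketch of equation 36: an atom of probability at the anticipated time T,
# plus a density over later realization times, both discounted by e^(-r*t).
r, V, T = 0.1, 1.0, 5.0

atom = 0.7                       # assumed probability of realization exactly at T
late_from, late_to = 5.0, 15.0   # assumed window of late realization
t = np.linspace(late_from, late_to, 10_000)
q_hat = (1 - atom) / (late_to - late_from)  # uniform conditional density over the window

# E = a(T)*b(T)*V + (Riemann sum approximating the integral of q_hat * e^(-r*t) * V)
dt = t[1] - t[0]
E = atom * np.exp(-r * T) * V + np.sum(q_hat * np.exp(-r * t)) * dt * V
print(E)  # roughly 0.54 with these assumptions
```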

F Discounting Models and Marshmallow Cases

This section proves by reductio ad absurdum that classic discounting theory cannot account for a choice of a later but larger reward over an immediate but smaller one which is then reversed, as in marshmallow cases.⁷

Classic discounting theory assumes that, the more delayed a reward, the smaller its discounted value E. That is, for two delays d1 and d2, if d1 > d2 then E(d1) < E(d2). As E = f(d) × V, with the un-discounted value of the reward V, it follows that:

E(d1) < E(d2)   (45)
f(d1) × V < f(d2) × V   (46)
f(d1) < f(d2)   (47)

In marshmallow cases, the agent initially prefers, say, two marshmallows with delay d1 over one single marshmallow with an un-discounted value V(M). The expected value of the two marshmallows E(2M) is thus larger than V(M):

E(2M, d1) > V(M)   (48)

But after some time, when the delay has decreased to d2 (d2 < d1), the agent reverses their choice:

E(2M, d2) < V(M)   (49)

Given equations 48 and 49, it must be the case that:

E(2M, d2) < E(2M, d1)   (50)

Replacing E in equation 50 yields:

f(d2) × V(2M) < f(d1) × V(2M)
f(d2) < f(d1)

This contradicts 47.

⁷ Cf. Section 5.6.
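A small numerical illustration of why the reductio holds. It assumes a hyperbolic f for concreteness, but any discount function that decreases with delay behaves the same way.

```python
# With f decreasing in delay, the discounted value of the delayed two
# marshmallows can only rise as the delay shrinks, so it can never drop
# below V(M) once it exceeded it: the marshmallow reversal is ruled out.
def f(d, k=1.0):
    """Hyperbolic discount factor, decreasing in delay d."""
    return 1.0 / (1.0 + k * d)

V_M, V_2M = 1.0, 2.0
d1, d2 = 0.8, 0.1      # the delay shrinks as the child waits

E_d1 = f(d1) * V_2M    # ~1.11 > V(M): the child starts waiting
E_d2 = f(d2) * V_2M    # ~1.82 > V(M): still larger, no reversal
print(E_d1 > V_M, E_d2 > V_M, E_d2 > E_d1)  # True True True
```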


Glossary

Agent. In philosophy, someone or something that acts, typically a human person.

Akrasia. Literally translates as ‘without’ (‘a-’) ‘power’ or ‘strength’ (‘kratos’) from ancient Greek. Akrasia is thus a lack of control or self-governance. For Aristotle, it is a state of character. In contemporary philosophy akrasia is commonly understood as doing something while believing that something else is better and possible. Sometimes ‘akrasia’ is used as a synonym of ‘weakness of (the) will’.

Behavioural economics. A sub-discipline of economics that seeks a descriptive understanding of action using empirical or experimental methods from the behavioural sciences.

Bias, cognitive. A categorization, association, or tendency that inclines us to act in one way rather than in another, especially in ways considered problematic. A cognitive bias can influence actions without the agent or an observer being aware of it, and it may be difficult if not impossible to control; thus biases are sometimes said to be ‘implicit’.

Delay discounting. Changing the value of something with its delay, typically negatively: the more delayed something is, the less valuable it is. Delay discounting theories often specify a discounted value of something as a function of its un-discounted value and some discount factor or function.

Diachronic. Literally ‘through’ or ‘over’ (‘dia’) ‘time’ (‘chronos’). For example, diachronic weakness of will is temporally extended over a period of time.

Discount curve. Graph of a discounted-utility function.

Discount factor. Factor, i.e. number or quantity, by which a value or utility is changed or, typically, reduced. The discounted or expected value E is a product of the un-discounted value V multiplied by the discount factor f: E = f × V.

Discount function. Mathematical relation that assigns to an input d a discount factor f: f = f(d). For example, in time or delay discounting, f varies with time or delay d. Exponential, hyperbolic, and quasi-hyperbolic discount functions are common. A discounted or expected value E is a product of f and the un-discounted value V and thus, in turn, a function of d: E = f × V = f(d) × V.

Dual-process models. These models claim that the mind, psychology, or brain is divided into two systems or classes of systems. One of them (‘system 1’) is fast, automatic, and effortless, the other (‘system 2’) slow, controlled, and effortful.

Enkrateia. According to Aristotle, enkrateia is a state of character that contrasts with akrasia. Like the akratic agent, the enkratic agent is plagued by temptation, yet unlike them, they overcome it and succeed in doing what they judge right.


Exponential discounting. Discounting with an exponential discount function f of the form f = e^(−rd), with delay d, discount rate r, and Euler number e.

Homo economicus or economic man. A human agent deliberating and acting in accordance with their self-interest; they are often characterized as maximizing utility or being perfectly rational.

Hyperbolic discounting. Discounting with a hyperbolic discount function f of the form f = 1 / (1 + k × d), with delay d and discount rate k.

Inverse akrasia. In inverse akrasia, an akratic action is in some relevant sense superior to the action required by the standard it violates. A typical example is Huckleberry Finn’s not turning a runaway slave in to the police although he believes that he must do so.

Knobe effect or side-effect effect. The finding that the valence of an action’s side-effect affects whether the action itself is judged as intentional (Knobe 2003). The effect has since been found to generalize to other domains as well.

Magnitude effects in discounting occur when the discount rate varies with the size or amount of a reward, other things being equal. For example, it has been found empirically that agents discount smaller rewards more steeply than larger ones.

Marshmallow experiment. A study on delay of gratification first conducted at Stanford in the 1970s. In a typical case, a child is offered a choice between a smaller, immediate reward (like one marshmallow) or a larger, delayed reward (like two marshmallows). An agent initially choosing to wait for the delayed reward but then taking the smaller one anyway is often regarded as a paradigm example of weakness of will.

Nudging. Roughly, designing the choice architecture or environment in such a way that it influences choices but without changing the choice options themselves. Nudging typically exploits cognitive biases like the default bias.

Paradox. In philosophy, a self-contradictory claim or a set of jointly inconsistent claims.

Precommitment. A commitment, i.e. binding or assigning of oneself, to an option before (‘pre’) the actual time of decision. Precommitment may be a device or strategy for preventing weakness of will.

Preference reversal occurs if one of two rewards is preferred on one occasion, for example at one point in time or with some delay, and the other is preferred on another occasion, such as at another point in time or with some other delay.

Pure discounting is the discounting of value itself, not of whatever has value, such as money or a commodity.

Rationality. In philosophy, rationality is a property or state of conforming to norms or standards of coherence or reasonableness; it contrasts with irrationality.

Self-control. Roughly, a disposition, ability, or capacity to direct, determine, or regulate oneself. It can be more or less stable over time, i.e. like a trait or like a mood. On this broad understanding, it need not contrast with weakness of will; weak-willed actions may be self-controlled. On a narrower understanding, self-control does contrast with weakness of will, viz. if it is regarded as necessary to successfully resolve a conflict.


Strict stationarity. A strictly stationary discount function is independent of the time when the evaluation is made.

Syllogism. An item of reasoning from (typically) two propositions or premises to a third, its conclusion. Syllogisms have been used in philosophy since antiquity. A valid syllogism need not be a logically valid argument.

Synchronic. Literally ‘together in time’, i.e. simultaneous or instantaneous (‘syn’: ‘together’; ‘chronos’: ‘time’). In philosophy, ‘synchronic’ concerns instants or points in time. For example, in a synchronic case of weakness of will, the agent is weak-willed at one particular instant in time.

Time bias. An agent is time biased if they prefer benefits or harms depending purely on when they experience them. Examples include future and near biases.

Uncertainty bias. An agent is uncertainty biased if they prefer benefits or harms depending purely on their uncertainty or ambiguity. Empirically, it has been found that humans are uncertainty averse; they prefer certain over uncertain options, other things being equal.

Weakness of will. Roughly, a certain failure by the agent’s own lights that involves a conflict, is puzzling, and is regarded as a defect. For example, acting against one’s better judgement or over-readily giving in to a temptation may be weakness of will. This book is largely devoted to the question of how this rough sketch should be specified in greater detail.


Bibliography

Abadie, A. and S. Gay (2006). ‘The impact of presumed consent legislation on cadaveric organ donation: a cross-country study’. Journal of Health Economics 25 (4):599–620.
Adam, E. (2004). ‘Beyond quality: parental and residential stability and children’s adjustment’. Current Directions in Psychological Science 13 (5):210–13.
Ahmed, A. (2014). Evidence, Decision and Causality. Cambridge: Cambridge University Press.
Ahmed, A. (2017). ‘Exploiting cyclic preference’. Mind 126 (504):975–1022.
Ainslie, G. (1974). ‘Impulse control in pigeons’. Journal of the Experimental Analysis of Behavior 21 (3):485–9.
Ainslie, G. (1975). ‘Specious reward: a behavioral theory of impulsiveness and impulse control’. Psychological Bulletin 82 (4):463.
Ainslie, G. (1982). ‘A behavioral economic approach to the defense mechanisms: Freud’s energy theory revisited’. Social Science Information 21 (6):735–79.
Ainslie, G. (1992). Picoeconomics. Cambridge: Cambridge University Press.
Ainslie, G. (2001). Breakdown of Will. Cambridge: Cambridge University Press.
Ainslie, G. (2005). ‘Précis of Breakdown of Will’. Behavioural and Brain Sciences 28 (5):635–73.
Ainslie, G. (2012). ‘Pure hyperbolic discount curves predict “eyes open” self-control’. Theory and Decision 73 (1):3–34.
Albahari, M. (2014). ‘Alief or belief? A contextual approach to belief ascription’. Philosophical Studies 167 (3):701–20.
Allais, M. (1953). ‘Le comportement de l’homme rationnel devant le risque: critique des postulats et axiomes de l’école américaine’. Econometrica:503–46.
Alvarez, M. (2017). ‘Reasons for action: justification, motivation, explanation’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Winter 2017. Metaphysics Research Lab, Stanford University.
American Psychiatric Association (2013). Diagnostic and Statistical Manual of Mental Disorders. 5th ed. Arlington (VA): American Psychiatric Association.
Anand, P. (1993). Foundations of Rational Choice under Risk. Oxford: Oxford University Press.
Anand, P. (2009). ‘The rationality of intransitive preference: foundations for the modern view’. In Handbook of Rational and Social Choice. Ed. P. Anand, P. Pattanaik, and C. Puppe. New York: Oxford University Press.
Andreou, C. (2016). ‘Cashing out the money-pump argument’. Philosophical Studies (6):1–5.
Andreou, C. (2020). ‘Dynamic choice’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Winter 2020. Metaphysics Research Lab, Stanford University.
Anscombe, E. (1957). Intention. Cambridge (MA): Harvard University Press.
Aquinas, T. ([1265–73] 1912). Summa theologiae. Ed. Fathers of the English Dominican Province. London: Burns Oates and Washbourne.
Ariely, D. (2009). Predictably Irrational. New York: HarperCollins.
Aristotle ([n. d.] 1894). Ethica Nicomachea. Ed. I. Bywater. Oxford: Oxford University Press.


Aristotle ([n. d.] 1898). Aristotelis Parva naturalia. Ed. W. Biehl. Leipzig: Teubner.
Aristotle ([n. d.] 1956). De Anima. Ed. W. Ross. Oxford: Clarendon.
Arno, A. and S. Thomas (2016). ‘The efficacy of nudge theory strategies in influencing adult dietary behaviour: a systematic review and meta-analysis’. BMC Public Health 16 (1):1–11.
Arpaly, N. (2000). ‘On acting rationally against one’s best judgment’. Ethics 110 (3):488–513.
Arpaly, N. and T. Schroeder (1999). ‘Praise, blame and the whole self’. Philosophical Studies 93 (2):161–88.
Attie, M. and J. Knobe (2017). ‘Replication of study 3 by May, J. & Holton, R. (Philosophical Studies, 2012)’. https://osf.io/s37h6/.
Audi, R. (1979). ‘Weakness of will and practical judgment’. Noûs 13 (2):173–96.
Audi, R. (1990). ‘Weakness of will and rational action’. Australasian Journal of Philosophy 68 (3):270–81.
Austin, J. (1956). ‘A plea for excuses’. In Austin’s Philosophical Papers, 3rd ed. (1979). Ed. J. Urmson and G. Warnock. Oxford: Oxford University Press, pp. 175–204.
Bandura, A. (1991). ‘Social cognitive theory of self-regulation’. Organizational Behavior and Human Decision Processes 50 (2):248–87.
Barabasi, A.-L. (2005). ‘The origin of bursts and heavy tails in human dynamics’. Nature 435 (7039):207–11.
Barsalou, L. (2014). Cognitive Psychology: An Overview for Cognitive Scientists. New York and London: Psychology Press.
Beall, J., M. Glanzberg, and D. Ripley (2020). ‘Liar paradox’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Fall 2020. Metaphysics Research Lab, Stanford University.
Becker, G. (1976). The Economic Approach to Human Behavior. Chicago: University of Chicago Press.
Beebe, J. (2013). ‘Weakness of will, reasonability, and compulsion’. Synthese 190 (18):4077–93.
Beier, K. (2010). Selbsttäuschung. Berlin: De Gruyter.
Bell, D. (1982). ‘Regret in decision making under uncertainty’. Operations Research 30 (5):961–81.
Bénabou, R. and J. Tirole (2004). ‘Willpower and personal rules’. Journal of Political Economy 112 (4):848–86.
Benartzi, S., J. Beshears, et al. (2017). ‘Should governments invest more in nudging?’ Psychological Science 28 (8):1041–55.
Benartzi, S. and R. Thaler (2007). ‘Heuristics and biases in retirement savings behavior’. Journal of Economic Perspectives 21 (3):81–104.
Benartzi, S. and R. Thaler (2013). ‘Behavioral economics and the retirement savings crisis’. Science 339 (6124):1152–3.
Bennett, J. (1974). ‘The conscience of Huckleberry Finn’. Philosophy 49 (188):123–34.
Berkeley, G. ([1709–44] 1948–57). The Works of George Berkeley, Bishop of Cloyne. Ed. A. Luce and T. Jessop. London: Thomas Nelson and Sons.
Bernheim, D., D. Ray, and Ş. Yeltekin (2015). ‘Poverty and self-control’. Econometrica 83 (5):1877–1911.
Berns, G. S., D. Laibson, and G. Loewenstein (2007). ‘Intertemporal choice—toward an integrative framework’. Trends in Cognitive Sciences 11 (11):482–8.
Bernstein, S. (2015). ‘The metaphysics of omissions’. Philosophy Compass 10 (3):208–18.
Beshears, J. et al. (2010). The Limitations of Defaults. Tech. rep. National Bureau of Economic Research.
Bias (2023). In Oxford Dictionary of English. Oxford: Oxford University Press.


Bickel, W., L. Athamneh, et al. (2019). ‘Excessive discounting of delayed reinforcers as a trans-disease process: update on the state of the science’. Current Opinion in Psychology 30:59–64.
Bickel, W., M. Koffarnus, et al. (2014). ‘The behavioral and neuroeconomic process of temporal discounting: a candidate behavioral marker of addiction’. Neuropharmacology 76:518–27.
Bird, A. and E. Tobin (2022). ‘Natural kinds’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Spring 2022. Metaphysics Research Lab, Stanford University.
Blake-Turner, C. (2022). ‘Acting and believing on the basis of reasons’. Philosophy Compass 17 (1):e12797.
Block, N. and R. Stalnaker (1999). ‘Conceptual analysis, dualism, and the explanatory gap’. Philosophical Review 108 (1):1–46.
Bond, R. et al. (2012). ‘A 61-million-person experiment in social influence and political mobilization’. Nature 489 (7415):295–8.
Bostock, D. (2000). Aristotle’s Ethics. Oxford: Oxford University Press.
Bratman, M. (1979). ‘Practical reasoning and weakness of the will’. Noûs 13 (2):153–71.
Bratman, M. (1999a). Faces of Intention. Cambridge: Cambridge University Press.
Bratman, M. ([1987] 1999b). Intention, Plans, and Practical Reason. Stanford: CSLI Publications.
Bratman, M. (2014). ‘Temptation and the agent’s standpoint’. Inquiry 57 (3):293–310.
Bratman, M. (2018). Planning, Time, and Self-Governance. New York: Oxford University Press.
Brogaard, B. (2021). ‘Implicit biases in visually guided action’. Synthese 198 (17):3943–67.
Broome, J. (1999). Ethics Out of Economics. Cambridge: Cambridge University Press.
Broome, J. (2001). ‘Are intentions reasons? And how should we cope with incommensurable values?’ In Practical Rationality and Preference: Essays for David Gauthier. Ed. C. Morris and A. Ripstein. Cambridge: Cambridge University Press, pp. 98–120.
Broome, J. (2012). Climate Matters. New York: W. W. Norton.
Broome, J. (2013). Rationality through Reasoning. Oxford: Blackwell.
Brownstein, M. (2017). ‘Implicit bias’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Spring 2017. Metaphysics Research Lab, Stanford University.
Brownstein, M., A. Madva, and B. Gawronski (2020). ‘Understanding implicit bias: putting the criticism into perspective’. Pacific Philosophical Quarterly 101 (2):276–307.
Brozzo, C. (2017). ‘Motor intentions: how intentions and motor representations come together’. Mind and Language 32 (2):231–56.
Buchak, L. (2013). Risk and Rationality. New York: Oxford University Press.
Buehler, D. (2022). ‘Agential capacities: a capacity to guide’. Philosophical Studies 179 (1):21–47.
Builes, D. (2020). ‘Time-slice rationality and self-locating belief’. Philosophical Studies 177 (10):3033–49.
Buss, S. (1997). ‘Weakness of will’. Pacific Philosophical Quarterly 78 (1):13–44.
Buss, S. (1999). ‘What practical reasoning must be if we act for our own reasons’. Australasian Journal of Philosophy 77 (4):399–421.
Bykvist, K. (2006). ‘Prudence for changing selves’. Utilitas 18 (3):264–83.
Camerer, C. (1995). ‘Individual decision-making’. In Handbook of Experimental Economics. Ed. J. Kagel and A. Roth. Princeton: Princeton University Press, pp. 587–703.
Candolin, U. (1998). ‘Reproduction under predation risk and the trade-off between current and future reproduction in the threespine stickleback’. Proceedings of the Royal Society of London B: Biological Sciences 265 (1402):1171–5.


Caplin, A. and A. Schotter (2008). The Foundations of Positive and Normative Economics: A Handbook. New York: Oxford University Press.
Cappelen, H. (2018). Fixing Language: An Essay on Conceptual Engineering. Oxford: Oxford University Press.
Carson, T. (2006). ‘The definition of lying’. Noûs 40 (2):284–306.
Carson, T. (2010). Lying and Deception: Theory and Practice. Oxford: Oxford University Press.
Carver, C. and M. Scheier (1990). ‘Origins and functions of positive and negative affect: a control-process view’. Psychological Review 97 (1):19.
Casey, B. et al. (2011). ‘Behavioral and neural correlates of delay of gratification 40 years later’. Proceedings of the National Academy of Sciences 108 (36):14998–15003.
Chakraborty, A., Y. Halevy, and K. Saito (2020). ‘The relation between behavior under risk and over time’. American Economic Review: Insights 2 (1):1–16.
Chalmers, D. (2012). Constructing the World. New York: Oxford University Press.
Chalmers, D. and F. Jackson (2001). ‘Conceptual analysis and reductive explanation’. Philosophical Review 110 (3):315–61.
Chaloupka, F., L. Powell, and K. Warner (2019). ‘The use of excise taxes to reduce tobacco, alcohol, and sugary beverage consumption’. Annual Review of Public Health 40 (1):187–201.
Charles, D. (1984). Aristotle’s Philosophy of Action. New York: Cornell University Press.
Chin, M. et al. (2020). ‘Bias in the air: a nationwide exploration of teachers’ implicit racial attitudes, aggregate bias, and student outcomes’. Educational Researcher 49 (8):566–78.
Chisholm, R. and T. Feehan (1977). ‘The intent to deceive’. Journal of Philosophy 74 (3):143–59.
Christensen, D. (1991). ‘Clever bookies and coherent beliefs’. Philosophical Review 100 (2):229–47.
Chung, S.-H. and R. Herrnstein (1967). ‘Choice and delay of reinforcement’. Journal of the Experimental Analysis of Behavior 10 (1):67–74.
Churchland, P. (2007). ‘The necessary-and-sufficient boondoggle’. American Journal of Bioethics 7 (1):54–5.
Clarke, R. (2014). Omissions: Agency, Metaphysics, and Responsibility. Oxford: Oxford University Press.
Cohon, R. (2008). Hume’s Morality. Oxford: Oxford University Press.
Cooper, J. (1975). Reason and Human Good in Aristotle. Cambridge (MA): Harvard University Press.
Cordner, C. (1985). ‘Jackson on weakness of will’. Mind 94 (374):273–80.
Crisp, R. (2008). ‘Well-Being’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Winter 2008.
Crockett, M. et al. (2013). ‘Restricting temptations: neural mechanisms of precommitment’. Neuron 79 (401):391–401.
Cubillo, A. et al. (2021). ‘Intra-individual variability in task performance after cognitive training is associated with long-term outcomes in children’. Developmental Science:e13252.
Currie, G. and A. Ichino (2012). ‘Aliefs don’t exist, though some of their relatives do’. Analysis 72 (4):788–98.
Dahl, N. (1984). Practical Reason, Aristotle, and Weakness of the Will. Minneapolis: University of Minnesota Press.
Dar, R., F. Stronguin, et al. (2005). ‘Craving to smoke in orthodox Jewish smokers who abstain on the Sabbath: a comparison to a baseline and a forced abstinence workday’. Psychopharmacology 183:294–9.


Dar, R., N. Rosen-Korakin, et al. (2010). ‘The craving to smoke in flight attendants: relations with smoking deprivation, anticipation of smoking, and actual smoking’. Journal of Abnormal Psychology 119 (1):248–53.
Dasgupta, P. (2005). ‘What do economists analyze and why: values or facts?’ Economics and Philosophy 21 (2):221–78.
Dasgupta, P. and E. Maskin (2005). ‘Uncertainty and hyperbolic discounting’. American Economic Review 95 (4):1290–9.
Davidson, D. (1963). ‘Actions, reasons, and causes’. Journal of Philosophy 60 (23):685–700.
Davidson, D. (1980a). Essays on Actions and Events. Oxford: Oxford University Press.
Davidson, D. ([1973] 1980b). ‘Freedom to act’. In Essays on Actions and Events. New York: Oxford University Press, pp. 59–74.
Davidson, D. ([1970] 1980c). ‘How is weakness of the will possible?’ In Essays on Actions and Events. Oxford: Oxford University Press, pp. 21–42.
Davidson, D. ([1978] 1980d). ‘Intending’. In Essays on Actions and Events. Oxford: Oxford University Press, pp. 83–102.
Davidson, D. ([1982] 2004). ‘Paradoxes of irrationality’. In Problems of Rationality. Oxford: Clarendon Press, pp. 169–87.
Davidson, D., J. McKinsey, and P. Suppes (1955). ‘Outlines of a formal theory of value, I’. Philosophy of Science 22 (2):140–60.
Davidson, M. (2015). ‘Climate change and the ethics of discounting’. Wiley Interdisciplinary Reviews: Climate Change 6 (4):401–12.
Davison, M. and D. McCarthy ([1988] 2016). The Matching Law: A Research Review. London: Routledge.
De Ridder, D. et al. (2012). ‘Taking stock of self-control: a meta-analysis of how trait self-control relates to a wide range of behaviors’. Personality and Social Psychology Review 16 (1):76–99.
De Sousa, R. (1974). ‘The good and the true’. Mind 83:534–51.
DeMiguel, V., L. Garlappi, and R. Uppal (2009). ‘Optimal versus naive diversification: How inefficient is the 1/N portfolio strategy?’ Review of Financial Studies 22:1915–53.
Descartes, R. (1637). Discours de la methode pour bien conduire sa raison, & chercher la verité dans les sciences: plus la dioptrique, les meteores, et la geometrie, qui sont des essais de cete methode. Leiden: Jan Maire.
Descartes, R. (1641). Meditationes de prima philosophia, in qua Dei existentia et animae immortalitas demonstrantur. Paris: Michel Soly.
Dion, K., E. Berscheid, and E. Walster (1972). ‘What is beautiful is good’. Journal of Personality and Social Psychology 24 (3):285.
Döring, S. and B. Eker (2017). ‘Rationality, time and normativity: on Hedden’s time-slice rationality’. Analysis 77 (3):571–85.
Dorsey, D. (2019). ‘A near-term bias reconsidered’. Philosophy and Phenomenological Research 99 (2):461–77.
Dougherty, T. (2011). ‘On whether to prefer pain to pass’. Ethics 121 (3):521–37.
Dougherty, T. (2014). ‘A deluxe money pump’. Thought 3 (1):21–9.
Dougherty, T. (2015). ‘Future-bias and practical reason’. Philosophers’ Imprint 15:1–16.
Duckworth, A., T. Gendler, and J. Gross (2016). ‘Situational strategies for self-control’. Perspectives on Psychological Science 11 (1):35–55.
Duckworth, A., T. Kirby, et al. (2013). ‘From fantasy to action: Mental Contrasting with Implementation Intentions (MCII) improves academic performance in children’. Social Psychological and Personality Science 4 (6):745–53.
Duckworth, A., K. Milkman, and D. Laibson (2018). ‘Beyond willpower: strategies for reducing failures of self-control’. Psychological Science in the Public Interest 19 (3):102–29.

Duckworth, A., E. Tsukayama, and T. Kirby (2013). ‘Is it really self-control? Examining the predictive power of the delay of gratification task’. Personality and Social Psychology Bulletin 39 (7):843–55.
Duflo, E., M. Kremer, and J. Robinson (2011). ‘Nudging farmers to use fertilizer: theory and experimental evidence from Kenya’. American Economic Review 101 (6):2350–90.
Dworkin, G. (2020). ‘Paternalism’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Fall 2020. Metaphysics Research Lab, Stanford University.
Edgington, D. (1997). ‘Vagueness by degrees’. In Vagueness: A Reader. Ed. R. Keefe and P. Smith. Cambridge (MA): MIT Press.
Egan, A. (2008). ‘Seeing and believing: perception, belief formation and the divided mind’. Philosophical Studies 140 (1):47–63.
Elder, R. et al. (2010). ‘The effectiveness of tax policy interventions for reducing excessive alcohol consumption and related harms’. American Journal of Preventive Medicine 38 (2):217–29.
Ellsberg, D. (1961). ‘Risk, ambiguity, and the Savage axioms’. Quarterly Journal of Economics:643–69.
Elster, J. (1985). ‘Weakness of will and the free-rider problem’. Economics and Philosophy 1 (2):231–65.
Elster, J. ([1979] 2013). Ulysses and the Sirens. Cambridge: Cambridge University Press.
Epper, T., H. Fehr-Duda, and A. Bruhin (2011). ‘Viewing the future through a warped lens: why uncertainty generates hyperbolic discounting’. Journal of Risk and Uncertainty 43 (3):169–203.
Ernst, G. (2020). ‘Two kinds of rationality’. In The Ethics of Belief and Beyond. Ed. G. Ernst and S. Schmidt. Abingdon: Routledge, pp. 177–90.
Estle, S. et al. (2006). ‘Differential effects of amount on temporal and probability discounting of gains and losses’. Memory and Cognition 34 (4):914–28.
Falbe, J. et al. (2016). ‘Impact of the Berkeley excise tax on sugar-sweetened beverage consumption’. American Journal of Public Health 106 (10):1865–71.
Fallis, D. (2009). ‘What is lying’. Journal of Philosophy 106 (1):29–56.
Farmer, J. and J. Geanakoplos (2009). Hyperbolic Discounting Is Rational: Valuing the Far Future with Uncertain Discount Rates. Cowles Foundation Discussion Paper Series 1719. Cowles Foundation.
Fehr-Duda, H. and T. Epper (2012). ‘Probability and risk: foundations and economic implications of probability-dependent risk preferences’. Annual Review of Economics 4 (1):567–93.
Figner, B. et al. (2010). ‘Lateral prefrontal cortex and self-control in intertemporal choice’. Nature Neuroscience 13 (5):538.
Fink, J. (2013). ‘Editorial’. Organon F 20 (4):422–4.
Fisher, L. and S. Feldman (1998). ‘Familial antecedents of young adult health risk behavior: a longitudinal study’. Journal of Family Psychology 12 (1):66.
Fisher, S. (2020). ‘Rationalising framing effects: at least one task for empirically informed philosophy’. Crítica, Revista Hispanoamericana de Filosofía 52 (156):5–30.
Flanagan, O. (2013). ‘Identity and addiction: what alcoholic memoirs teach’. In The Oxford Handbook of Philosophy and Psychiatry. Ed. W. Fulford et al. Oxford: Oxford University Press, pp. 865–88.
Frankfurt, H. (1969). ‘Alternate possibilities and moral responsibility’. Journal of Philosophy 66 (23):829–39.
Frankfurt, H. (1971). ‘Freedom of the will and the concept of a person’. Journal of Philosophy 68 (1):5–20.

Frederick, S., G. Loewenstein, and T. O’Donoghue (2002). ‘Time discounting and time preference: a critical review’. Journal of Economic Literature 40 (2):351–401.
Fujita, K. (2011). ‘On conceptualizing self-control as more than the effortful inhibition of impulses’. Personality and Social Psychology Review 15 (4):352–66.
Galla, B. and A. Duckworth (2015). ‘More than resisting temptation: beneficial habits mediate the relationship between self-control and positive life outcomes’. Journal of Personality and Social Psychology 109 (3):508.
Gallop, D. (1964). ‘The Socratic paradox in the Protagoras’. Phronesis 9 (2):117–29.
Gardiner, S. et al. (2010). Climate Ethics. Essential Readings. New York: Oxford University Press.
Gendler, T. (2008a). ‘Alief and belief’. Journal of Philosophy 105 (10):634–63.
Gendler, T. (2008b). ‘Alief in action (and reaction)’. Mind and Language 23 (5):552–85.
Gendler, T. (2011). ‘On the epistemic costs of implicit bias’. Philosophical Studies 156 (1):33–63.
Gendler, T. (2012). ‘Between reason and reflex: response to commentators’. Analysis 72 (4):799–811.
Gettier, E. (1963). ‘Is justified true belief knowledge?’ Analysis 23:121–3.
Gibbard, A. (1990). Wise Choices, Apt Feelings: A Theory of Normative Judgment. Cambridge (MA): Harvard University Press.
Gibbard, A. (1999). ‘Morality as consistency in living: Korsgaard’s Kantian lectures’. Ethics 110 (1):140–64.
Gibbard, A. and W. Harper (1978). ‘Counterfactuals and two kinds of expected utility’. In Foundations and Applications of Decision Theory. Ed. A. Hooker, J. Leach, and E. McClennen. Dordrecht: Reidel, pp. 125–62.
Gigerenzer, G. (2008a). Rationality for Mortals: How People Cope with Uncertainty. New York: Oxford University Press.
Gigerenzer, G. (2008b). ‘Why heuristics work’. Perspectives on Psychological Science 3 (1):20–9.
Gigerenzer, G., P. Todd, and the ABC Research Group (1999). Simple Heuristics That Make Us Smart. New York: Oxford University Press.
Gimbel, R. et al. (2003). ‘Presumed consent and other predictors of cadaveric organ donation in Europe’. Progress in Transplantation 13 (1):17–23.
Giné, X., D. Karlan, and J. Zinman (2010). ‘Put your money where your butt is: a commitment contract for smoking cessation’. American Economic Journal: Applied Economics 2 (4):213–35.
Goel, V. (2014, June 29). ‘Facebook tinkers with users’ emotions in news feed experiment, stirring outcry’. New York Times.
Gollwitzer, P. (1999). ‘Implementation intentions’. American Psychologist 54:493–503.
Gollwitzer, P. and V. Brandstätter (1997). ‘Implementation intentions and effective goal pursuit’. Journal of Personality and Social Psychology 73 (1):186.
Gollwitzer, P. and P. Sheeran (2006). ‘Implementation intentions and goal achievement: a meta-analysis of effects and processes’. Advances in Experimental Social Psychology 38:69–119.
Gorman, A. (2022). ‘What is the difference between weakness of will and compulsion?’ Journal of the American Philosophical Association:1–16.
Greaves, H. (2017). ‘Discounting for public policy: a survey’. Economics and Philosophy 33 (3):391–439.
Green, A., D. Carney, et al. (2007). ‘Implicit bias among physicians and its prediction of thrombolysis decisions for black and white patients’. Journal of General Internal Medicine 22 (9):1231–8.

Green, L. and J. Myerson (1993). ‘Alternative frameworks for the analysis of self control’. Behavior and Philosophy 21 (2):37–47.
Green, L. and J. Myerson (2004). ‘A discounting framework for choice with delayed and probabilistic rewards’. Psychological Bulletin 130 (5):769–92.
Green, L., J. Myerson, and E. McFadden (1997). ‘Rate of temporal discounting decreases with amount of reward’. Memory and Cognition 25 (5):715–23.
Green, L., J. Myerson, L. Oliveira, et al. (2013). ‘Delay discounting of monetary rewards over a wide range of amounts’. Journal of the Experimental Analysis of Behavior 100 (3):269–81.
Green, L., J. Myerson, and P. Ostaszewski (1999). ‘Amount of reward has opposite effects on the discounting of delayed and probabilistic outcomes’. Journal of Experimental Psychology: Learning, Memory, and Cognition 25:418–27.
Greene, P. and M. Sullivan (2015). ‘Against time bias’. Ethics 125 (4):947–70.
Greenwald, A. et al. (2009). ‘Understanding and using the Implicit Association Test: meta-analysis of predictive validity’. Journal of Personality and Social Psychology 97 (1):17.
Griffin, J. (2010). ‘Ought’ Implies ‘Can’. The Lindley Lecture. University of Kansas.
Haas, J. (2018). ‘An empirical solution to the puzzle of weakness of will’. Synthese (12):1–21.
Hahn, S. ([2013] 2017). Rationalität. 2nd ed. Münster: Mentis.
Hájek, A. (2012). ‘Interpretations of probability’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Winter 2012.
Haji, I. (2002). Deontic Morality and Control. Cambridge: Cambridge University Press.
Halevy, Y. (2008). ‘Strotz meets Allais: diminishing impatience and the certainty effect’. American Economic Review 98 (3):1145–62.
Halpern, D. (2015). Inside the Nudge Unit: How Small Changes Can Make a Big Difference. New York: Random House.
Hampton, J. (2000). ‘Concepts and prototypes’. Mind & Language 15 (2–3):299–307.
Hardie, F. (1968). Aristotle’s Ethical Theory. Oxford: Oxford University Press.
Hare, R. (1952). The Language of Morals. Oxford: Clarendon Press.
Hare, R. (1963). Freedom and Reason. Oxford: Clarendon Press.
Hare, R. (1981). Moral Thinking: Its Levels, Method, and Point. New York: Oxford University Press.
Hare, R. (1992). ‘Weakness of will’. In Encyclopedia of Ethics. Ed. L. Becker and C. Becker. Vol. 2. New York and London: Garland, pp. 1304–7.
Hare, R. (1998). ‘Prescriptivism’. In Routledge Encyclopedia of Philosophy. London: Taylor and Francis.
Hare, R. ([1996] 1999a). ‘Internalism and externalism in ethics’. In Objective Prescriptions and Other Essays. New York: Oxford University Press, pp. 96–108.
Hare, R. (1999b). Objective Prescriptions and Other Essays. New York: Oxford University Press.
Haslanger, S. (2000). ‘Gender and race: (what) are they? (What) do we want them to be?’ Noûs 34 (1):31–55.
Haslanger, S. (2012). Resisting Reality: Social Construction and Social Critique. New York: Oxford University Press.
Haushofer, J. and E. Fehr (2014). ‘On the psychology of poverty’. Science 344 (6186):862–7.
Heath, J. and J. Anderson (2010). ‘Procrastination and the extended will’. In The Thief of Time: Philosophical Essays on Procrastination. Ed. C. Andreou and M. White. New York: Oxford University Press, pp. 233–52.
Heck, P. et al. (2020). ‘Objecting to experiments even while approving of the policies or treatments they compare’. Proceedings of the National Academy of Sciences 117 (32):18948–50.

Hedden, B. (2013). ‘Incoherence without exploitability’. Noûs 47 (3):482–95.
Hedden, B. (2015a). Reasons without Persons: Rationality, Identity, and Time. Oxford: Oxford University Press.
Hedden, B. (2015b). ‘Time-slice rationality’. Mind 124 (494):449–91.
Hedden, B. (2016). ‘Mental processes and synchronicity’. Mind 125 (499):873–88.
Hehman, E., J. Flake, and J. Calanchini (2018). ‘Disproportionate use of lethal force in policing is associated with regional racial biases of residents’. Social Psychological and Personality Science 9 (4):393–401.
Heinzelmann, N. (2022). ‘Rationality is not coherence’. Philosophical Quarterly.
Hempel, C. (1965). ‘Aspects of scientific explanation’. In Aspects of Scientific Explanation and Other Essays in the Philosophy of Science. Ed. C. Hempel. New York and London: Free Press and Collier-Macmillan, pp. 331–496.
Henning, T. (2018). From a Rational Point of View. Oxford: Oxford University Press.
Herfeld, C. (2022). ‘Revisiting the criticisms of rational choice theories’. Philosophy Compass 17 (1):e12774.
Herrnstein, R. (1961). ‘Relative and absolute strength of response as a function of frequency of reinforcement’. Journal of the Experimental Analysis of Behavior 4 (3):267–72.
Hertwig, R. and I. Erev (2009). ‘The description-experience gap in risky choice’. Trends in Cognitive Sciences 13 (12):517–23.
Hobbes, T. ([1655] 1839). The English Works of Thomas Hobbes. De corpore. Trans. W. Molesworth. Vol. 3. London: Bohn.
Hodgson, G. (2012). From Pleasure Machines to Moral Communities: An Evolutionary Economics without Homo Economicus. Chicago: University of Chicago Press.
Hofmann, W. et al. (2012). ‘Everyday temptations: an experience sampling study of desire, conflict, and self-control’. Journal of Personality and Social Psychology 102 (6):1318.
Holroyd, J. and D. Kelly (2016). ‘Implicit bias, character, and control’. In From Personality to Virtue. Ed. J. Webber and A. Masala. Oxford: Oxford University Press.
Holroyd, J., R. Scaife, and T. Stafford (2017). ‘What is implicit bias?’ Philosophy Compass 12 (e12437).
Holton, R. (1999). ‘Intention and weakness of will’. Journal of Philosophy 96 (5):241–62.
Holton, R. (2003). ‘How is strength of will possible?’ In Weakness of Will and Practical Irrationality. Ed. S. Stroud and C. Tappolet. Oxford: Oxford University Press.
Holton, R. (2009). Willing, Wanting, Waiting. Oxford: Clarendon Press.
Hong, C. S. and P. Wakker (1996). ‘The comonotonic sure-thing principle’. Journal of Risk and Uncertainty 12 (1):5–27.
Houston, A. and J. McNamara (1986). ‘The influence of mortality on the behaviour that maximizes reproductive success in a patchy environment’. Oikos 47 (3):267–74.
Houston, A., A. Kacelnik, and J. McNamara (1982). ‘Some learning rules for acquiring information’. Functional Ontogeny 1:140–91.
Hume, D. ([1740] 2000). A Treatise of Human Nature. Ed. D. Norton and M. Norton. Oxford: Oxford University Press.
Imhoff, R., A. Schmidt, and F. Gerstenberg (2014). ‘Exploring the interplay of trait self-control and ego depletion: empirical evidence for ironic effects’. European Journal of Personality 28 (5):413–24.
Irwin, T. (1977). Plato’s Moral Theory: The Early and Middle Dialogues. Oxford: Oxford University Press.
Irwin, T. (1999). Aristotle: Nicomachean Ethics. 2nd ed. Indianapolis: Hackett.
Iwasa, Y., Y. Suzuki, and H. Matsuda (1984). ‘Theory of oviposition strategy of parasitoids. I. Effect of mortality and limited egg number’. Theoretical Population Biology 26 (2):205–27.

Jackson, F. (1984). ‘Weakness of will’. Mind 93 (369):1–18.
Jackson, F. (1998). From Metaphysics to Ethics. New York: Oxford University Press.
James, W. (1890). The Principles of Psychology. London: Dover.
Jeffrey, R. (1983). The Logic of Decision. 2nd ed. Chicago: University of Chicago Press.
Jeong, Y. et al. (2021). ‘Impacts of visualizations on decoy effects’. International Journal of Environmental Research and Public Health 18 (23):12674.
Johnson, E. and D. Goldstein (2003). ‘Do defaults save lives?’ Science 302 (5649):1338–9.
Kahneman, D. ([2011] 2012). Thinking, Fast and Slow. London: Penguin.
Kahneman, D. and S. Frederick (2002). ‘Representativeness revisited’. In Heuristics and Biases. Ed. T. Gilovich, D. Griffin, and D. Kahneman. Cambridge: Cambridge University Press, pp. 51–2.
Kahneman, D. and A. Tversky (1979). ‘Prospect theory: an analysis of decision under risk’. Econometrica 47 (2):263–92.
Kalis, A. et al. (2008). ‘Weakness of will, akrasia, and the neuropsychiatry of decision making: an interdisciplinary perspective’. Cognitive, Affective, and Behavioral Neuroscience 8 (4):402–17.
Kant, I. (1900–). Gesammelte Schriften. Berlin: Akademie der Wissenschaften.
Karniol, R. and D. Miller (1983). ‘Why not wait? A cognitive model of self-imposed delay termination’. Journal of Personality and Social Psychology 45 (4):935–42.
Kauppinen, A. (2018). ‘Agency, experience, and future bias’. Thought 7 (4):237–45.
Kennett, J. and M. Smith (1994). ‘Philosophy and commonsense: the case of weakness of will’. In Philosophy in Mind. Ed. J. O’Leary-Hawthorne and M. Michael. Dordrecht: Kluwer, pp. 141–57.
Kennett, J. and M. Smith (1996). ‘Frog and toad lose control’. Analysis 56 (2):63–73.
Kenny, A. (1966). ‘The practical syllogism and incontinence’. Phronesis 11 (2):163–84.
Kidd, C., H. Palmeri, and R. Aslin (2013). ‘Rational snacking: young children’s decision-making on the marshmallow task is moderated by beliefs about environmental reliability’. Cognition 126 (1):109–14.
Kiesewetter, B. (2017). The Normativity of Rationality. Oxford: Oxford University Press.
Killeen, P. (1972). ‘The matching law’. Journal of the Experimental Analysis of Behavior 17 (3):489–95.
King, A. (2017). ‘“Ought implies can”: not so pragmatic after all’. Philosophy and Phenomenological Research 95 (3):637–61.
Kirby, K. (1997). ‘Bidding on the future: evidence against normative discounting of delayed rewards’. Journal of Experimental Psychology: General 126 (1):54–70.
Kirby, K., N. Petry, and W. Bickel (1999). ‘Heroin addicts have higher discount rates for delayed rewards than non-drug-using controls’. Journal of Experimental Psychology 128 (1):78–87.
Knauff, M. and W. Spohn, eds. (2021). The Handbook of Rationality. Cambridge (MA): MIT Press.
Knobe, J. (2003a). ‘Intentional action and side effects in ordinary language’. Analysis 63 (279):190–4.
Knobe, J. (2003b). ‘Intentional action in folk psychology: an experimental investigation’. Philosophical Psychology 16 (2):309–25.
Knobe, J. and S. Nichols (2013). Experimental Philosophy. Vol. 2. New York: Oxford University Press.
Kolodny, N. and J. Brunero (2018). ‘Instrumental rationality’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Winter 2018. Metaphysics Research Lab, Stanford University.
Korsgaard, C. (1996). The Sources of Normativity. Cambridge: Cambridge University Press.

Kramer, A., J. Guillory, and J. Hancock (2014). ‘Experimental evidence of massive-scale emotional contagion through social networks’. Proceedings of the National Academy of Sciences 111 (24):8788–90.
Kriegel, U. (2012). ‘Moral motivation, moral phenomenology, and the alief/belief distinction’. Australasian Journal of Philosophy 90 (3):469–86.
Kurdi, B. et al. (2019). ‘Relationship between the Implicit Association Test and intergroup behavior: a meta-analysis’. American Psychologist 74 (5):569.
Kurzban, R. et al. (2013). ‘An opportunity cost model of subjective effort and task performance’. Behavioral and Brain Sciences 36 (6):661–79.
Kwong, J. (2012). ‘Resisting aliefs: Gendler on belief-discordant behaviors’. Philosophical Psychology 25 (1):77–91.
Kyburg, H. (1961). Probability and the Logic of Rational Belief. Middletown: Wesleyan University Press.
Lackey, J. (2013). ‘Lies and deception: an unhappy divorce’. Analysis 73 (2):236–48.
Laibson, D. (1997). ‘Golden eggs and hyperbolic discounting’. Quarterly Journal of Economics 112 (2):443–77.
Lakoff, G. (1987). Women, Fire and Dangerous Things. What Categories Reveal about the Mind. Chicago: University of Chicago Press.
Lally, P. et al. (2010). ‘How are habits formed: modelling habit formation in the real world’. European Journal of Social Psychology 40 (6):998–1009.
Lamb, W. (1967). Plato in Twelve Volumes. Cambridge (MA): Harvard University Press.
Langlois, J. et al. (2000). ‘Maxims or myths of beauty? A meta-analytic and theoretical review’. Psychological Bulletin 126 (3):390.
Leitner, J. et al. (2016). ‘Racial bias is associated with ingroup death rate for Blacks and Whites: insights from Project Implicit’. Social Science & Medicine 170:220–7.
Leslie, S.-J. (2017). ‘The original sin of cognition: fear, prejudice, and generalization’. Journal of Philosophy 114 (8):393–421.
Levy, N. (2006). ‘Autonomy and addiction’. Canadian Journal of Philosophy 36 (3):427–47.
Levy, N. (2011). ‘Resisting “weakness of the will”’. Philosophy and Phenomenological Research 82 (1):134–55.
Levy, N. (2014). ‘Addiction as a disorder of belief’. Biology and Philosophy 29 (3):337–53.
Levy, N. (2015). ‘Neither fish nor fowl: implicit attitudes as patchy endorsements’. Noûs 49 (4):800–23.
Levy, N. (2017a). ‘Am I a racist? Implicit bias and the ascription of racism’. Philosophical Quarterly 67 (268):534–51.
Levy, N. (2017b). ‘Implicit bias and moral responsibility: probing the data’. Philosophy and Phenomenological Research 94 (1):3–26.
Levy, N. (2019). ‘Nudge, nudge, wink, wink: nudging is giving reasons’. Ergo 6:281–302.
Lewis, C. (1946). An Analysis of Knowledge and Valuation. La Salle, Illinois: Open Court.
Lewis, D. (1981). ‘Causal decision theory’. Australasian Journal of Philosophy 59 (1):5–30.
Lin, H. (2022). ‘Bayesian epistemology’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta and U. Nodelman. Fall 2022. Metaphysics Research Lab, Stanford University.
Loewenstein, G. (1996). ‘Out of control: visceral influences on behavior’. Organizational Behavior and Human Decision Processes 65 (3):272–92.
Loewenstein, G. (1999). ‘A visceral account of addiction’. In Getting Hooked: Rationality and Addiction. Ed. J. Elster and O. Skog. Cambridge: Cambridge University Press.
Loewenstein, G. (2018). ‘Self-control and its discontents: a commentary on Duckworth, Milkman, and Laibson’. Psychological Science in the Public Interest 19 (3):95–101.
Lord, E. (2017). ‘What you’re rationally required to do and what you ought to do’. Mind 126 (504):1109–54.

Lord, E. (2018). The Importance of Being Rational. Oxford: Oxford University Press.
Lorenz, H. (2006). The Brute Within: Appetitive Desire in Plato and Aristotle. Oxford: Clarendon Press.
Maas, J. et al. (2012). ‘Do distant foods decrease intake? The effect of food accessibility on consumption’. Psychology & Health 27 (sup2):59–73.
Machery, E. (2016). ‘De-Freuding implicit attitudes’. In Implicit Bias and Philosophy. Ed. M. Brownstein and J. Saul. Vol. 1. Oxford: Oxford University Press, pp. 104–29.
Madva, A. (2016). ‘Why implicit attitudes are (probably) not beliefs’. Synthese 193 (8):2659–84.
Madva, A. and M. Brownstein (2018). ‘Stereotypes, prejudice, and the taxonomy of the implicit social mind’. Noûs 52 (3):611–44.
Mahon, J. (2016). ‘The definition of lying and deception’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Winter 2016. Metaphysics Research Lab, Stanford University.
Mahtani, A. (2015). ‘Dutch books, coherence, and logical consistency’. Noûs 49 (3):522–37.
Makinson, D. (1965). ‘The paradox of the preface’. Analysis 25 (6):205.
Malebranche, N. (1674). De la recherche de la verité. Paris: Pralard.
Mandelbaum, E. (2013). ‘Against alief’. Philosophical Studies 165 (1):197–211.
Mandelbaum, E. (2016). ‘Attitude, inference, association: on the propositional structure of implicit bias’. Noûs 50 (3):629–58.
Margolis, E. and S. Laurence (2019). ‘Concepts’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Summer 2019. Metaphysics Research Lab, Stanford University.
Markovits, J. (2014). Moral Reason. Oxford: Oxford University Press.
May, J. and R. Holton (2012). ‘What in the world is weakness of will?’ Philosophical Studies 157 (3):341–60.
Mazur, J. (1987). ‘An adjusting procedure for studying delayed reinforcement’. In Quantitative Analyses of Behavior. Ed. M. Commons et al. Vol. V. New York: Psychology Press, pp. 55–73.
McClure, S., K. Ericson, et al. (2007). ‘Time discounting for primary rewards’. Journal of Neuroscience 27 (21):5796–804.
McClure, S., D. Laibson, et al. (2004). ‘Separate neural systems value immediate and delayed monetary rewards’. Science 306 (5695):503–7.
McConnell, A. and J. Leibold (2001). ‘Relations among the implicit association test, discriminatory behavior, and explicit measures of racial attitudes’. Journal of Experimental Social Psychology 37 (5):435–42.
McGuire, J. and J. Kable (2012). ‘Decision makers calibrate behavioral persistence on the basis of time-interval experience’. Cognition 124 (2):216–26.
McGuire, J. and J. Kable (2015). ‘Medial prefrontal cortical activity reflects dynamic re-evaluation during voluntary persistence’. Nature Neuroscience 18 (5):760–6.
McIntyre, A. (1990). ‘Is akratic action always irrational?’ In Identity, Character, and Morality. Ed. A. Rorty and O. Flanagan. Cambridge (MA): MIT Press, pp. 379–400.
McIntyre, J. (2006). ‘Strength of mind: prospects and problems for a Humean account’. Synthese 152 (3):393–401.
McKenzie, C., M. Liersch, and S. Finkelstein (2006). ‘Recommendations implicit in policy defaults’. Psychological Science 17 (5):414–20.
McNaughton, D. and P. Rawling (2004). ‘Duty, rationality, and practical reasons’. In The Oxford Handbook of Rationality. Ed. A. Mele and P. Rawling. New York: Oxford University Press, pp. 110–31.
Meacham, C. and J. Weisberg (2011). ‘Representation theorems and the foundations of decision theory’. Australasian Journal of Philosophy 89 (4):641–63.
Mele, A. (1987). Irrationality. New York: Oxford University Press.

Mele, A. (1992). ‘Akrasia, self-control, and second-order desires’. Noûs 26 (3):281–302.
Mele, A. ([1995] 2003). Autonomous Agents: From Self-Control to Autonomy. Oxford/New York: Oxford University Press.
Mele, A. (2010). ‘Weakness of will and akrasia’. Philosophical Studies 150:391–404.
Mele, A. (2012). Backsliding. Oxford: Oxford University Press.
Mele, A. (2022). ‘Weakness of will’. In The Oxford Handbook of Moral Psychology. Ed. M. Vargas and J. Doris. New York: Oxford University Press.
Metcalfe, J. and W. Mischel (1999). ‘A hot/cool-system analysis of delay of gratification: dynamics of willpower’. Psychological Review 106 (1):3–19.
Meyer, M., P. Heck, et al. (2019a). ‘Objecting to experiments that compare two unobjectionable policies or treatments’. Proceedings of the National Academy of Sciences 116 (22):10723–8.
Meyer, M., P. Heck, et al. (2019b). ‘Reply to Mislavsky et al.: sometimes people really are averse to experiments’. Proceedings of the National Academy of Sciences 116 (48):23885–6.
Meyer, R. (2014, June 28). ‘Everything we know about Facebook’s secret mood manipulation experiment’. The Atlantic.
Milkman, K. et al. (2011). ‘Using implementation intentions prompts to enhance influenza vaccination rates’. Proceedings of the National Academy of Sciences 108 (26):10415–20.
Mintz-Woo, K. (2022). ‘Carbon pricing ethics’. Philosophy Compass 17 (1):e12803.
Mischel, W. (1973). ‘Toward a cognitive social learning reconceptualization of personality’. Psychological Review 80 (4):252–83.
Mischel, W. (2014). The Marshmallow Test: Understanding Self-Control and How to Master It. New York: Random House.
Mischel, W. and E. Ebbesen (1970). ‘Attention in delay of gratification’. Journal of Personality and Social Psychology 16 (2):329–37.
Mischel, W., E. Ebbesen, and A. Raskoff Zeiss (1972). ‘Cognitive and attentional mechanisms in delay of gratification’. Journal of Personality and Social Psychology 21 (2):204–18.
Mischel, W. and B. Moore (1973). ‘Effects of attention to symbolically presented rewards on self-control’. Journal of Personality and Social Psychology 28 (2):172–9.
Mischel, W., Y. Shoda, and M. Rodriguez (1989). ‘Delay of gratification in children’. Science 244 (4907):933–8.
Mislavsky, R., B. J. Dietvorst, and U. Simonsohn (2019). ‘The minimum mean paradox: a mechanical explanation for apparent experiment aversion’. Proceedings of the National Academy of Sciences 116 (48):23883–4.
Moffitt, T. et al. (2011). ‘A gradient of childhood self-control predicts health, wealth, and public safety’. Proceedings of the National Academy of Sciences 108 (7):2693–8.
Montaigne, M. de ([1580] 2007). Essais. Ed. J. Balsamo, C. Magnien-Simonin, and M. Magnien. Paris: Gallimard.
Moore, A. (2019). ‘Hedonism’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Winter 2019. Metaphysics Research Lab, Stanford University.
Morgenstern, O. (1979). ‘Some reflections on utility’. In Expected Utility and the Allais Paradox. Ed. M. Allais and O. Hagen. Dordrecht: Reidel, pp. 175–83.
Moss, J. (2008). ‘Appearances and calculations: Plato’s division of the soul’. Oxford Studies in Ancient Philosophy 34:35–68.
Moss, J. (2009). ‘Akrasia and perceptual illusion’. Archiv für Geschichte der Philosophie 91 (2):119–56.
Moss, J. (2012). Aristotle on the Apparent Good: Perception, Phantasia, Thought, and Desire. Oxford: Oxford University Press.
Moss, J. (2021). Plato’s Epistemology: Being and Seeming. New York: Oxford University Press.

Moss, S. (2015). ‘Time-slice epistemology and action under indeterminacy’. In Oxford Studies in Epistemology. Ed. T. Gendler and J. Hawthorne. Vol. 5. Oxford: Oxford University Press, pp. 172–94.
Myerson, J. et al. (2003). ‘Discounting delayed and probabilistic rewards: processes and traits’. Journal of Economic Psychology 24:619–35.
Nagel, J. (2012). ‘Gendler on alief’. Analysis 72 (4):774–88.
Nagel, T. (1986). The View from Nowhere. Oxford: Oxford University Press.
Neal, D., W. Wood, and A. Drolet (2013). ‘How do people adhere to goals when willpower is low? The profits (and pitfalls) of strong habits’. Journal of Personality and Social Psychology 104 (6):959.
Neumann, J. von and O. Morgenstern ([1944] 1953). Theory of Games and Economic Behavior. 3rd ed. Princeton: Princeton University Press.
New English Bible (1970). Cambridge and Oxford: Cambridge and Oxford University Presses.
Nickerson, D. and T. Rogers (2010). ‘Do you have a voting plan? Implementation intentions, voter turnout, and organic plan making’. Psychological Science 21 (2):194–9.
Noda, Y. et al. (2020). ‘Neural correlates of delay discount alterations in addiction and psychiatric disorders: a systematic review of magnetic resonance imaging studies’. Progress in Neuro-Psychopharmacology and Biological Psychiatry 99:109822.
Noggle, R. (2020). ‘The ethics of manipulation’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Summer 2020. Metaphysics Research Lab, Stanford University.
Norcross, A. (1996). ‘Rationality and the sure-thing principle’. Australasian Journal of Philosophy 74 (2):324–7.
O’Donoghue, T. and M. Rabin (1999). ‘Doing it now or later’. American Economic Review 89 (1):103–24.
O’Neill, O. (2004). ‘Kant: rationality as practical reason’. In The Oxford Handbook of Rationality. Ed. A. Mele and P. Rawling. New York: Oxford University Press, pp. 93–109.
Ovidius Naso, P. ([8] 1914). Metamorphoses. Ed. H. Magnus. Berlin: Weidmann.
Papineau, D. (2021). ‘Naturalism’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Summer 2021. Metaphysics Research Lab, Stanford University.
Parfit, D. ([1984] 1987). Reasons and Persons. Oxford: Oxford University Press.
Parfit, D. (1997). ‘Reasons and motivation’. Proceedings of the Aristotelian Society, Supplementary Volume 71 (1):99–130.
Parsons, K. P. (1973). ‘Three concepts of clusters’. Philosophy and Phenomenological Research 33 (4):514–23.
Pascal, B. ([1670] 1991). Pensées. Ed. P. Sellier. Paris: Bordas.
Paul, L. (2014). Transformative Experience. New York: Oxford University Press.
Pavlov, I. (1927). Conditioned Reflexes. Trans. G. Anrep. Oxford: Oxford University Press.
Payne, K. and J. Hannay (2021). ‘Implicit bias reflects systemic racism’. Trends in Cognitive Sciences 25 (11):927–36.
Peacocke, A. (2021). ‘Mental action’. Philosophy Compass 16 (6):e12741.
Peijnenburg, J. (2005). ‘Shaping your past selves’. Behavioral and Brain Sciences 28 (5):657–8.
Penner, T. (1990). ‘Plato and Davidson: parts of the soul and weakness of will’. Canadian Journal of Philosophy 20:35–74.
Penner, T. (1996). ‘Knowledge vs true belief in the Socratic psychology of action’. Apeiron 29 (3):199–230.
Penner, T. (1997). ‘Socrates on the strength of knowledge: Protagoras 351B–357E’. Archiv für Geschichte der Philosophie 79 (2):117–49.

Pettigrew, R. (2019). Choosing for Changing Selves. Oxford: Oxford University Press.
Pickard, H. (2021). ‘Addiction and the self’. Noûs 55:737–61.
Plato ([n. d.] 1997). Plato: Complete Works. Ed. J. Cooper. Indianapolis: Hackett.
Plunkett, D. and H. Cappelen (2020). ‘A guided tour of conceptual engineering and conceptual ethics’. In Conceptual Engineering and Conceptual Ethics. Ed. H. Cappelen, D. Plunkett, and A. Burgess. Oxford: Oxford University Press, pp. 1–26.
Podgorski, A. (2016). ‘A reply to the synchronist’. Mind 125 (499):859–81.
Prelec, D. and G. Loewenstein (1991). ‘Decision making over time and under uncertainty: a common approach’. Management Science 37 (7):770–86.
Price, A. (2019). ‘Richard Mervyn Hare’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Summer 2019. Metaphysics Research Lab, Stanford University.
Putnam, H. ([1975] 1979). ‘The meaning of “meaning”’. In Mind, Language, and Reality. Philosophical Papers. Ed. H. Putnam. Vol. 2. Cambridge: Cambridge University Press.
Rabinowicz, W. (2000). ‘Money pump with foresight’. In Imperceptible Harms and Benefits. Ed. M. Almeida. Dordrecht: Kluwer, pp. 123–54.
Raineri, A. and H. Rachlin (1993). ‘The effect of temporal constraints on the value of money and other commodities’. Journal of Behavioral Decision Making 6 (2):77–94.
Ramsey, F. (1928). ‘A mathematical theory of saving’. Economic Journal 38 (152):543–59.
Ramsey, F. ([1926] 1931). ‘Truth and probability’. In The Foundations of Mathematics and Other Logical Essays. Ed. R. Braithwaite. London: Kegan, Paul, Trench, Trubner & Co., pp. 156–98.
Rationality (2023). In Oxford Dictionary of English. Oxford: Oxford University Press.
Rawls, J. ([1971] 1999). A Theory of Justice. Cambridge (MA): Harvard University Press.
Raz, J. (2010). ‘The guise of the good’. In Desire, Practical Reason, and the Good. Ed. S. Tenenbaum. New York: Oxford University Press, pp. 111–37.
Repetti, R., S. Taylor, and T. Seeman (2002). ‘Risky families: family social environments and the mental and physical health of offspring’. Psychological Bulletin 128 (2):330.
Rescorla, R. (1988). ‘Pavlovian conditioning: it’s not what you think it is’. American Psychologist 43 (3):151.
Reynolds, B. et al. (2003). ‘Delay and probability discounting as related to different stages of adolescent smoking and non-smoking’. Behavioural Processes 64 (3):333–44.
Richards, J. et al. (1999). ‘Delay or probability discounting in a model of impulsive behavior: effect of alcohol’. Journal of the Experimental Analysis of Behavior 71 (2):121–43.
Robinson, R. (1969). Essays in Greek Philosophy. Oxford: Clarendon Press.
Rodríguez-Arias, D., L. Wright, and D. Paredes (2010). ‘Success factors and ethical challenges of the Spanish Model of organ donation’. The Lancet 376 (9746):1109–12.
Rorty, A. (1970). ‘Plato and Aristotle on belief, habit, and “akrasia”’. American Philosophical Quarterly 7 (1):50–61.
Rorty, A. (1972). ‘Belief and self-deception’. Inquiry 15 (1–4):387–410.
Rorty, A. (1980). ‘Where does the akratic break take place?’ Australasian Journal of Philosophy 58 (4):333–46.
Rosati, C. (2016). ‘Moral motivation’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Fall 2016.
Rosch, E. (1978). ‘Principles of categorization’. In Cognition and Categorization. Ed. E. Rosch and B. Lloyd. Hillsdale: Lawrence Erlbaum, pp. 27–48.
Rosch, E. and C. Mervis (1975). ‘Family resemblances: studies in the internal structure of categories’. Cognitive Psychology 7 (4):573–605.
Rosenbaum, L. (2016). ‘Leaping without looking—duty hours, autonomy, and the risks of research and practice’. New England Journal of Medicine 374 (8):701–3.
Ross, H. (2004). The Economics of Tobacco and Tobacco Control in the European Union. Tech. rep. Brussels: The ASPECT Consortium, European Commission.

Ross, H. and C. Plug (2002). The Mystery of the Moon Illusion. New York: Oxford University Press.
Rotter, J. (1954). Social Learning and Clinical Psychology. Upper Saddle River: Prentice Hall.
Royer, H., M. Stehr, and J. Sydnor (2015). ‘Incentives, commitments, and habit formation in exercise: evidence from a field experiment with workers at a Fortune-500 company’. American Economic Journal: Applied Economics 7 (3):51–84.
Rutschmann, R. and A. Wiegmann (2017). ‘No need for an intention to deceive? Challenging the traditional definition of lying’. Philosophical Psychology 30 (4):438–57.
Rysiew, P. (2008). ‘Rationality disputes. Psychology and epistemology’. Philosophy Compass 3 (6):1153–76.
Rysiew, P. (2021). ‘Naturalism in epistemology’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Fall 2021. Metaphysics Research Lab, Stanford University.
Samuelson, P. (1937). ‘A note on measurement of utility’. Review of Economic Studies 4 (2):155–61.
Samuelson, P. and W. Nordhaus (2010). Economics. 19th ed. Boston: McGraw-Hill Irwin.
Santas, G. (1964). ‘The Socratic paradoxes’. Philosophical Review 73 (2):147–64.
Santas, G. (1969). ‘Aristotle on practical inference, the explanation of action, and akrasia’. Phronesis 14 (2):162–89.
Saul, J. (2013). ‘Implicit bias, stereotype threat, and women in philosophy’. In Women in Philosophy: What Needs to Change? Ed. F. Jenkins and K. Hutchinson. New York: Oxford University Press, pp. 39–60.
Savage, L. (1954). The Foundations of Statistics. New York: Wiley.
Scanlon, T. (1998). What We Owe to Each Other. Cambridge (MA): Belknap Press.
Schälicke, J. (2004). ‘Willensschwäche und Selbsttäuschung’. Deutsche Zeitschrift für Philosophie 3:361–79.
Schelling, T. (1980). The Intimate Contest for Self-Command. Cambridge (MA): Harvard Institute of Economic Research.
Schelling, T. (1984). Choice and Consequence. Cambridge (MA): Harvard University Press.
Schultz, W. (2015). ‘Neuronal reward and decision signals: from theories to data’. Physiological Reviews 95 (3):853–91.
Schunk, D. et al. (2022). ‘Teaching self-regulation’. Nature Human Behaviour 6 (9):1680–90.
Schwartz, J. et al. (2014). ‘Healthier by precommitment’. Psychological Science 25 (2):538–46.
Schwitzgebel, E. (2010). ‘Acting contrary to our professed beliefs, or the gulf between occurrent judgment and dispositional belief’. Pacific Philosophical Quarterly 91 (4):531–53.
Schwitzgebel, E. (2011). ‘Belief’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Winter 2011.
Seanor, D., N. Fotion, and R. Hare, eds. (1988). Hare and Critics: Essays on Moral Thinking. Oxford: Oxford University Press.
Sen, A. (1977). ‘Rational fools: a critique of the behavioral foundations of economic theory’. Philosophy and Public Affairs 6 (4):317–44.
Shefrin, H. and R. Thaler (1980). Rules and Discretion in a Two-Self Model of Intertemporal Choice. Graduate School of Business and Public Administration, Cornell University.
Shepherd, J. (2021). The Shape of Agency: Control, Action, Skill, Knowledge. Oxford: Oxford University Press.
Shoda, Y., W. Mischel, and P. Peake (1990). ‘Predicting adolescent cognitive and self-regulatory competencies from preschool delay of gratification: identifying diagnostic conditions’. Developmental Psychology 26 (6):978.
Shull, R., D. Spear, and A. Bryson (1981). ‘Delay or rate of food delivery as a determiner of response rate’. Journal of the Experimental Analysis of Behavior 35 (2):129–43.

Simon, H. (1982–97). Models of Bounded Rationality. Vol. 3. Cambridge (MA): MIT Press.
Singer, P. (2002). ‘R. M. Hare’s achievements in moral philosophy’. Utilitas 14 (3):309–17.
Sklar, A. and K. Fujita (2020). ‘Self-control as a coordination problem’. In Surrounding Self-Control. Ed. A. Mele. New York: Oxford University Press, p. 65.
Smith, M. (1994). The Moral Problem. Oxford: Blackwell.
Smith, M. (1996). ‘Normative reasons and full rationality: reply to Swanton’. Analysis 56:160–8.
Smith, M. (1997). ‘In defense of The Moral Problem: a reply to Brink, Copp, and Sayre-McCord’. Ethics 108:84–119.
Smith, M. (2003). ‘Rational capacities, or: how to distinguish recklessness, weakness, and compulsion’. In Weakness of Will and Practical Irrationality. Ed. S. Stroud and C. Tappolet. Oxford: Oxford University Press, pp. 17–38.
Snedegar, J. (2017). ‘Time-slice rationality and filling in plans’. Analysis 77 (3):595–607.
Sousa, P. and C. Mauro (2015). ‘The evaluative nature of the folk concepts of weakness and strength of will’. Philosophical Psychology 28 (4):487–509.
Southwood, N. (2016). ‘“The thing to do” implies “can”’. Noûs 50 (1):61–72.
Sozou, P. (1998). ‘On hyperbolic discounting and uncertain hazard rates’. Proceedings of the Royal Society London 265:2015–20.
Spitzley, T. (1992). Handeln wider besseres Wissen: eine Diskussion klassischer Positionen. Berlin: De Gruyter.
Spitzley, T. (2009). ‘Self-knowledge and rationality’. Erkenntnis 71 (1):73–88.
Sripada, C. (2014). ‘How is willpower possible? The puzzle of synchronic self-control and the divided mind’. Noûs 48 (1):41–74.
Sripada, C. (2021). ‘The atoms of self-control’. Noûs 55 (4):800–24.
Staffel, J. (2019). Unsettled Thoughts: A Theory of Degrees of Rationality. New York: Oxford University Press.
Stanovich, K. and R. West (2000). ‘Individual differences in reasoning: implications for the rationality debate’. Behavioral and Brain Sciences 23:645–65.
Stocker, M. (1979). ‘Desiring the bad: an essay in moral psychology’. Journal of Philosophy 76 (12):738–53.
Stokke, A. (2013). ‘Lying, deceiving, and misleading’. Philosophy Compass 8 (4):348–59.
Strabbing, J. (2016). ‘Attributability, weakness of will, and the importance of just having the capacity’. Philosophical Studies 173 (2):289–307.
Street, S. (2009). ‘In defense of future Tuesday indifference: ideally coherent eccentrics and the contingency of what matters’. Philosophical Issues 19 (1):273–98.
Streumer, B. (2007). ‘Reasons and impossibility’. Philosophical Studies 136 (3):351–84.
Strotz, R. (1955). ‘Myopia and inconsistency in dynamic utility maximization’. Review of Economic Studies 23 (3):165–80.
Stroud, S. (2003). ‘Weakness of will and practical judgement’. In Weakness of Will and Practical Irrationality. Ed. S. Stroud and C. Tappolet. Oxford: Oxford University Press, pp. 121–46.
Stroud, S. (2014). ‘Weakness of will’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Spring 2014.
Stroud, S. and L. Svirsky (2021). ‘Weakness of will’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Winter 2021. Metaphysics Research Lab, Stanford University.
Stroud, S. and C. Tappolet, eds. (2003). Weakness of Will and Practical Irrationality. Oxford: Oxford University Press.
Sullivan, M. (2018). Time Biases: A Theory of Rational Planning and Personal Persistence. Oxford: Oxford University Press.

Sullivan-Bissett, E. (2014). ‘Implicit bias, confabulation, and epistemic innocence’. Consciousness and Cognition 33:548–60.
Sylvan, K. (2021). ‘Respect and the reality of apparent reasons’. Philosophical Studies.
Taylor, C. C. W. (2009). Protagoras. 3rd ed. Oxford: Oxford University Press.
Tenenbaum, S. (1999). ‘The judgment of a weak will’. Philosophy and Phenomenological Research 59 (4):875–911.
Thaler, R. (1981). ‘Some empirical evidence on dynamic inconsistency’. Economics Letters 8:201–7.
Thaler, R. and H. Shefrin (1981). ‘An economic theory of self-control’. Journal of Political Economy 89 (2):392–406.
Thaler, R. and C. Sunstein ([1999] 2008). Nudge. London: Penguin.
Todd, P., G. Gigerenzer, and the ABC Research Group, eds. (2012). Ecological Rationality. New York: Oxford University Press.
Tversky, A. and D. Kahneman (1971). ‘Belief in the law of small numbers’. Psychological Bulletin 76 (2):105.
Tversky, A. and D. Kahneman (1974). ‘Judgment under uncertainty: heuristics and biases’. Science 185 (4157):1124–31.
Ullmann-Margalit, E. (2006). ‘Big decisions: opting, converting, drifting’. Royal Institute of Philosophy Supplement 58:157–72.
Urmson, J. (1988). Aristotle’s Ethics. Oxford: Blackwell.
US National Cancer Institute and World Health Organization (2016). The Economics of Tobacco and Tobacco Control. Tech. rep. 16-CA-8029A. Bethesda and Geneva.
van de Ven, N., T. Gilovich, and M. Zeelenberg (2010). ‘Delay, doubt, and decision: how delaying a choice reduces the appeal of (descriptively) normative options’. Psychological Science 21 (4):568–73.
Velleman, D. (1992). ‘The guise of the good’. Noûs 26 (1):3–26.
Verma, I. (2014). ‘Editorial expression of concern: experimental evidence of massive-scale emotional contagion through social networks’. Proceedings of the National Academy of Sciences 111 (29):10779.
Verplanken, B. and S. Faes (1999). ‘Good intentions, bad habits, and effects of forming implementation intentions on healthy eating’. European Journal of Social Psychology 29 (5–6):591–604.
Vineberg, S. (2016). ‘Dutch book arguments’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Spring 2016.
Vlastos, G. (1969). ‘Socrates on acrasia’. Phoenix 23:71–88.
von Wright, H. (1971). Explanation and Understanding. New York: Cornell University Press.
Vranas, P. (2007). ‘I ought, therefore I can’. Philosophical Studies 136 (2):167–216.
Wade, N. ([1998] 1999). A Natural History of Vision. 2nd ed. Cambridge (MA): MIT Press.
Wagenaar, A., M. Salois, and K. Komro (2009). ‘Effects of beverage alcohol price and tax levels on drinking: a meta-analysis of 1003 estimates from 112 studies’. Addiction 104 (2):179–90.
Walsh, J. (1960). Aristotle’s Conception of Moral Weakness. New York: Columbia University Press.
Warren, J. (2014). The Pleasures of Reason in Plato, Aristotle, and the Hellenistic Hedonists. Cambridge: Cambridge University Press.
Watson, G. (1977). ‘Skepticism about weakness of will’. Philosophical Review 86 (3):316–39.

Watts, T., G. Duncan, and H. Quan (2018). ‘Revisiting the marshmallow test: a conceptual replication investigating links between early delay of gratification and later outcomes’. Psychological Science 29 (7):1159–77.
Watzl, S. (2022). ‘Self-control, attention, and how to live without special motivational powers’. In Mental Action and the Conscious Mind. Ed. M. Brent and L. Miracchi. London: Routledge, pp. 272–300.
Weber, K. (1995). Objects in Mirror Are Closer Than They Appear. New York: Crown.
Wedgwood, R. (2013a). ‘Akrasia and uncertainty’. Organon F 20 (4):483–505.
Wedgwood, R. (2013b). ‘Rational “ought” implies “can”’. Philosophical Issues 23 (1):70–92.
Wehofsits, A. (2020). ‘Passions: Kant’s psychology of self-deception’. Inquiry:1–25.
Werch, C. and D. Owen (2002). ‘Iatrogenic effects of alcohol and drug prevention programs’. Journal of Studies on Alcohol 63 (5):581–90.
Wertenbroch, K. (1998). ‘Consumption self-control by rationing purchase quantities of virtue and vice’. Marketing Science 17 (4):317–37.
West, S. and K. O’Neal (2004). ‘Project DARE outcome effectiveness revisited’. American Journal of Public Health 94 (6):1027–9.
Wiggins, D. (1979). ‘Weakness of will, commensurability, and the objects of deliberation and desire’. Proceedings of the Aristotelian Society 79:251–77.
Williams, B. (1965). ‘Ethical consistency’. Proceedings of the Aristotelian Society, Supplementary Volume 39:103–38.
Williams, B. (1970). ‘The self and the future’. Philosophical Review 79:161–80.
Williams, B. (1981). Moral Luck: Philosophical Papers 1973–1980. Cambridge: Cambridge University Press.
Williams, B. (2002). ‘Truth and truthfulness: an essay in genealogy’. Philosophy 78 (305):411–14.
Williams, B. ([1985] 2006). Ethics and the Limits of Philosophy. Abingdon: Routledge.
Williamson, T. (2007). The Philosophy of Philosophy. Hoboken: Wiley-Blackwell.
Wilson, G., S. Shpall, and J. Piñeros Glasscock (2016). ‘Action’. In The Stanford Encyclopedia of Philosophy. Ed. E. Zalta. Winter 2016. Metaphysics Research Lab, Stanford University.
Wittgenstein, L. ([1953] 2009). Philosophical Investigations. Trans. E. Anscombe, P. Hacker, and J. Schulte. Chichester: Wiley.
Wolf, U. ([1985] 1999). ‘Zum Problem der Willensschwäche’. In Motive, Gründe, Zwecke. Ed. S. Gosepath. Frankfurt am Main: Fischer, pp. 232–45.
Wood, W. and D. Rünger (2016). ‘Psychology of habit’. Annual Review of Psychology 67.
Woods, M. (1990). ‘Aristotle on akrasia’. In Studi sull’etica di Aristotele. Ed. A. Alberti. Naples: Bibliopolis, pp. 227–61.
Wrenn, C. (2022). ‘Naturalistic epistemology’. In The Internet Encyclopedia of Philosophy.
Zimmerman, M. (1996). The Concept of Moral Obligation. Cambridge: Cambridge University Press.

Index

Note: Page numbers printed in bold refer to an entry in the Glossary.

action 12–3, see also intention
addiction 29
age bias 118, see also bias
agent 10, 173
Ainslie, George 1–2, 53, 55, 83–5, 90–3
akrasia 2–3, 16, 45, 59, 122, 126, 132, 173, see also Holton, Richard
  Ancient accounts of akrasia 31–6, 38–9, see also Aristotle, Plato, Socrates
  inverse akrasia 24, 26, 174
algorithmic bias 118, see also bias
alief 118, see also bias
anchoring effect 147, see also bias
Aquinas, Thomas 2, 23, 35, 47, 68, 126
arationality 38, 119, 126, see also rationality
Aristotle 1–2, 5, 10, 16, 24, 31–3, 120, 126
  account of akrasia 35–9, 122, see also akrasia
  Aristotle and Hare 2, 42, 44, see also Hare, Richard
  Davidson on Aristotle 2, 45, 47, 51, see also Davidson, Donald
Arpaly, Nomy 132–3
attractiveness bias 147, see also bias
Austin, John 27
behavioural economics 61, 173
belief 14, 118
bias 1–2, 5–6, 101, 117–26, 133, 144–9, 151–3, 157
  algorithmic bias 118
  attractiveness bias 147
  cognitive bias 2, 5–6, 103–5, 118–26, 144–5, 159, 173
  confirmation bias 121
  default bias 148–9
  egocentric bias 121
  future bias 137
  moon illusion 120–1
  near bias 137, 140
  perceptual bias 120, 145, 147
  racial bias 118–9
  risk bias 5, 106, 115, 141, 157
  statistical bias 118
  time bias 137, 140, 155–6, 159, 175
  uncertainty bias 140–3, 148, 151, 159, 175
blame 6, 67, 145
blameworthy 23, 44, 118
Bratman, Michael 2–3, 51–3, 55–7, 96
Canberra plan 10
causalism 50, 55
climate change 62, 144–5, 152
cluster concept 59
cognitive bias 2, 5–6, 103–5, 118–26, 144–5, 159, 173, see also bias
coherent 3–6, 21, 63, 72
  rationality as coherence 126–9, 131–3, 135–6, 140, 143, see also rationality
compulsion 27–9, 43, 57, 67, 70
conceptual analysis 9–10, 97
conceptual engineering 10, 97
conditioning 155–6
confirmation bias 121, see also bias
consent 151
control 28, 67, 119, 151
  biases 104, 118, 121–2, 173
  dual-process model 21, 114
  self-control 19–20, 51, 53–4, 61, 174
  weakness of will as loss of control 19, 27, 31, 38, 61, see also akrasia
Dasgupta, Partha 1, 108–12
Davidson, Donald 1–3, 16, 18, 26, 36, 53, 55–9, 84
  account of weakness of will 9, 14, 45–52, 95, 131–2
  divided mind 21, 35
  irrational weakness of will 5, 23, 126, 131–2
default bias 148–9, see also bias
Descartes, René 120, 123
diachronic 20, 70, 173, see also synchronic
  diachronic delay discounting 77, 87, 89, 91–3
  diachronic norms of rationality 130–1, see also rationality
  diachronic preference reversal 79–84, 93, 95–6, 135–6, see also preference
  diachronic weakness of will 11–2, 16, 24–5, 34–5, 41, 72–3, 101
discount curve 76–9, 81, 90–2, 100, 109, 154, 173
discount factor 75–6, 82, 87, 99, 105, 107, 173
discount function 76–7, 79–80, 85, 87–93, 100, 105–7, 136, 173
discount rate 29, 55, 86–9, 91–3, 100, 107
DU model 86
dual-process model 21, 61, 114, 173
economic 6, 105, 112, 125, 134, 144
  economics 1, 3–4, 61–2, 76, 83, 158
  economic framework of agency 3–4, 27, 63–5, 68–9, 73–4, 77–9, 85, 100, see also homo economicus
  socioeconomic 117, 125, 157
egalitarian 138
egocentric bias 121, see also bias
Ellsberg paradox 141, see also paradox
enkratic 35–7, 39, 51, 173, see also akrasia
experimental philosophy 10
exponential discounting 4, 85–8, 107–8, 136, 174
  compared to hyperbolic discounting 74, 90–3, 100, 105, 136, 158
external 18, 39, 57, 121, 129, see also internalism
future bias 137, see also bias
gender bias 118, see also bias
guise of the good 17, 46–7
habit 22, 119, 123, 155–6
Hare, Richard 5, 17, 23, 26–7, 46, 68, 79, 126, see also internalism
  account of weakness of will 2, 27, 39–45
hedonism 32–3
Holton, Richard 3, 5, 10, 14, 25, 51, 56–60
homo economicus 63–4, 174
Hume, David 50, 121
  Humean theory of motivation 50
hyperbolic discounting 4, 74, 85–6, 88–90, 115, 174
  compared to exponential discounting 74, 90–3, 100, 105, 136, 158
iff 45
implicit bias 118–9, 147, see also bias
impulsivity 61, 90, 115n17
indifference 64, 71, 78, 81, 112, 136, 138
  indifference point 111–2, 115, 153–4
intention 2–3, 9, 11, 25, 36, 60, 64, 150
  biased action 120, 123, see also bias
  Bratman on intention 51–2, 53, 55–6
  Davidson on intention 49–52, 53
  Holton on intention 56–7
  implementation intention 155–6
  intentional action 13–4, 27–8, 49, 59, see also compulsion
  Mele on intention and intentional action 52–5, 56
  rationality 127, 129, 131–2, 136, see also rationality
  weak-willed action as intentional 17, 45–7, 53, 77, 131, see also guise of the good
internalism 18, 39, 41, 44, 46–7, 49, 129
inverse akrasia 24, 26, 132, 174, see also akrasia
irrationality, see rationality
Jeffrey, Richard 29, 71–2
judgement 14–6
Knobe effect 59, 174
knowledge 1–2, 9, 23, 25–6, 31–9, 45, 121–2
liar paradox 21–2, see also paradox
lottery paradox 21–2, see also paradox
magnitude effect 92, 174
marshmallow case 4, 97–101, 107–8, 111–7, 124–5, 148–9, 158, 174
Maskin, Eric 1, 108–12
matching law 88
Medea 2, 27, 42, 67
Mele, Alfred 1–3, 10, 46, 51–6, 67, 83–5
Mischel, Walter 1, 54, 85
money pump 139
moon illusion 120–1, see also perceptual bias
moral weakness 2, 5, 17, 23, 42–5, 126, see also Hare, Richard
near bias 137, 140, see also bias
normative economics 61–2
nudging 148–51, 174
Odysseus 54, 150
optical illusion 122, 145–6, see also perceptual bias
organ donation 148–51
‘ought’ implies ‘can’ 42
paradox 18–9, 21–3, 84–5, 174
Pavlov, Ivan 155
perceptual bias 120, 145, 147, see also bias
Plato 2, 25, 31–4, 38–9, 45, 51, 54, 121–2, see also Socrates, guise of the good
positive economics 61
precommitment 6, 153–4, 174
preface paradox 21–2, see also paradox
preference 3, 15, 18, 46, 54, 63, 87–8, 138–9, see also preference reversal
  bias 121, 124, see also bias
  economic framework of agency 3, 63–6, 69–74, 123
  (ir)rational preferences 130, 133, 137, see also preference reversal, rationality
preference reversal 4, 74, 77–9, 100–1, 154, 158, 174, see also preference
  exponential and hyperbolic discounting 90–3, see also exponential discounting, hyperbolic discounting
  incoherent preferences 133, 135–6, 140, 142–3, see also rationality
  in marshmallow cases 97–101, 107–8, 111–2, see also marshmallow case
  weakness of will 63, 69, 73–4, 77, 79–85, 93–7, 101
prescriptivism 39–44, see also Hare, Richard
probability density 109–10
probability discounting 4–5, 105–6
procrastination 54, 98, 124, 144–5
propositional attitude 14
pure discounting 75n2, 133–4, 174
quasi-hyperbolic 86, 89–90
racial bias 118–9, see also bias
rationality 1, 15, 21, 38, 52, 57, 126–43, 174
  biases 119, 125, 140–3
  discounting 90, 106, see also exponential discounting, hyperbolic discounting
  economic framework of agency 4, 61, 64, 66, 68–9, 85, see also homo economicus
  paradox of irrationality 84–5, see also paradox
  weakness of will as irrational 2, 5–6, 23, 49, 51, 103, 145, 159
reasoning 2, 34, 36–7, 47–52, 119, 127, 129, see also syllogism
reasons 34, 49–50, 56, 85, 94–5, 123
reasons-responsiveness 6, 126–8, 130–3, 136–40, 143, see also rationality
reinforcement 88, 156
responsibility 29, 118
reward 2, 5–6, 53–6, 124, 130, 149, 151–6, 158
  economic framework of agency 65, 72, see also homo economicus
  discounting of rewards 27, 61, 75–84, 86–93, 97–100, 105–17
risk 2, 5–6, 105–6, 115, 140–1, 143–4, 157, see also bias
Rorty, Amelie 60
Samuelson, Paul 63–4, 66, 69, 76, 86–7
satisficing 46, 129
Schelling, Thomas 15–6, 63, 65, 68
self-control 19–20, 51–4, 61, 114, 174, see also control
self-deception 24, 40, 60
side-effect effect 59, 174
smoking 29, 67, 96–7, 144, 148, 157
Socrates 2, 16, 23, 25–6, 46, 54, 73, 121–2, see also Aristotle, internalism, Plato
  account of weakness of will 31–6, 38–9
sorites paradox 21–2, see also paradox
Sozou, Peter 1, 105–10
Sripada, Chandra 20, 35
statistical bias 118, see also bias
strict stationarity 76–7, 175
stubborn 57
sure thing principle 142n24
syllogism 36–9, 47, 50, 175
synchronic 11–2, 19–20, 38, 70, 72, 84, 130, 175, see also diachronic, rationality
  synchronic delay discounting 87, 89, 91
  synchronic preference reversal 77–9, 83–4, 92, 95–6, 101, 135–6
Tenenbaum, Sergio 123
thick concept 68
time bias 137, 140, 152, 155–6, 175, see also bias
time slice 21, 70, 130, 137–40
transformative experience 138
Ulysses 54, 150
uncertainty 2, 5–6, 41, 55, 103–5, 119, 123–5, 155–8
uncertainty bias 103, 126, 133, 140–3, 148, 151, 159, 175, see also bias
uncertainty and delay discounting 108–9, 112–7, 134
utility 3, 64–6, 78, 81, 86, 115, 133–4, see also value
  expected utility 66, 68, 129, 140–1, 143
value 1, 24, 29, 94–5, 138–9, 144, 149, 152, see also utility
  discounted value 4–5, 55, 61, 74–92, 98–100, 105–14, 133–4, 153–4
  economic framework of agency 3, 63–6, 69–73, 123, see also homo economicus
value-judgement 40, 46
vice 35, 39
virtue 24, 35–6, 39
visual illusion 5–6, 120–3, 145–6, see also perceptual bias
Watson, Gary 65
welfare 61–2, 65–6, 157
well-being 61, 134, 136–7, 140, 143, 156