Ethics of Digital Well-Being: A Multidisciplinary Approach [1st ed.] 9783030505844, 9783030505851

This book brings together international experts from a wide variety of disciplines, in order to understand the impact that digital technologies have on human well-being.


English Pages XII, 265 [271] Year 2020


Table of contents:
Front Matter ....Pages i-xii
The Ethics of Digital Well-Being: A Multidisciplinary Perspective (Christopher Burr, Luciano Floridi)....Pages 1-29
Supporting Human Autonomy in AI Systems: A Framework for Ethical Enquiry (Rafael A. Calvo, Dorian Peters, Karina Vold, Richard M. Ryan)....Pages 31-54
Corporatised Identities ≠ Digital Identities: Algorithmic Filtering on Social Media and the Commercialisation of Presentations of Self (Charlie Harry Smith)....Pages 55-80
Digital Well-Being and Manipulation Online (Michael Klenk)....Pages 81-100
What Contribution Can Philosophy Provide to Studies of Digital Well-Being? (Michele Loi)....Pages 101-118
Cultivating Digital Well-Being and the Rise of Self-Care Apps (Matthew J. Dennis)....Pages 119-137
Emotions and Digital Well-Being: The Rationalistic Bias of Social Media Design in Online Deliberations (Lavinia Marin, Sabine Roeser)....Pages 139-150
Ethical Challenges and Guiding Principles in Facilitating Personal Digital Reflection (Andrew Gibson, Jill Willis)....Pages 151-173
Big Data and Wellbeing: An Economic Perspective (Clement Bellet, Paul Frijters)....Pages 175-206
The Implications of Embodied Artificial Intelligence in Mental Healthcare for Digital Wellbeing (Amelia Fiske, Peter Henningsen, Alena Buyx)....Pages 207-219
Causal Network Accounts of Ill-Being: Depression & Digital Well-Being (Nick Byrd)....Pages 221-245
Malware as the Causal Basis of Disease (Michael Thornton)....Pages 247-261
Correction to: Corporatised Identities ≠ Digital Identities: Algorithmic Filtering on Social Media and the Commercialisation of Presentations of Self (Charlie Harry Smith)....Pages C1-C1
Back Matter ....Pages 263-265


Philosophical Studies Series

Christopher Burr · Luciano Floridi
Editors

Ethics of Digital Well-Being A Multidisciplinary Approach

Philosophical Studies Series Volume 140

Editor-in-Chief
Mariarosaria Taddeo, Oxford Internet Institute, Digital Ethics Lab, University of Oxford, Oxford, UK

Executive Editorial Board
Patrick Allo, Vrije Universiteit Brussel, Brussel, Belgium
Massimo Durante, Università degli Studi di Torino, Torino, Italy
Phyllis Illari, University College London, London, UK
Shannon Vallor, Santa Clara University, Santa Clara, CA, USA

Board of Consulting Editors
Lynne Baker, Department of Philosophy, University of Massachusetts, Amherst, USA
Stewart Cohen, Arizona State University, Tempe, AZ, USA
Radu Bogdan, Dept. Philosophy, Tulane University, New Orleans, LA, USA
Marian David, Karl-Franzens-Universität, Graz, Austria
John Fischer, University of California, Riverside, Riverside, CA, USA
Keith Lehrer, University of Arizona, Tucson, AZ, USA
Denise Meyerson, Macquarie University, Sydney, NSW, Australia
Francois Recanati, Ecole Normale Supérieure, Institut Jean Nicod, Paris, France
Mark Sainsbury, University of Texas at Austin, Austin, TX, USA
Barry Smith, State University of New York at Buffalo, Buffalo, NY, USA
Nicholas Smith, Department of Philosophy, Lewis and Clark College, Portland, OR, USA
Linda Zagzebski, Department of Philosophy, University of Oklahoma, Norman, OK, USA

Philosophical Studies aims to provide a forum for the best current research in contemporary philosophy broadly conceived, its methodologies, and applications. Since Wilfrid Sellars and Keith Lehrer founded the series in 1974, the book series has welcomed a wide variety of different approaches, and every effort is made to maintain this pluralism, not for its own sake, but in order to represent the many fruitful and illuminating ways of addressing philosophical questions and investigating related applications and disciplines. The book series is interested in classical topics of all branches of philosophy including, but not limited to:

• Ethics
• Epistemology
• Logic
• Philosophy of language
• Philosophy of logic
• Philosophy of mind
• Philosophy of religion
• Philosophy of science

Special attention is paid to studies that focus on:

• the interplay of empirical and philosophical viewpoints
• the implications and consequences of conceptual phenomena for research as well as for society
• philosophies of specific sciences, such as philosophy of biology, philosophy of chemistry, philosophy of computer science, philosophy of information, philosophy of neuroscience, philosophy of physics, or philosophy of technology; and
• contributions to the formal (logical, set-theoretical, mathematical, information-theoretical, decision-theoretical, etc.) methodology of sciences.

Likewise, the applications of conceptual and methodological investigations to applied sciences as well as social and technological phenomena are strongly encouraged. Philosophical Studies welcomes historically informed research, but privileges philosophical theories and the discussion of contemporary issues rather than purely scholarly investigations into the history of ideas or authors.

Besides monographs, Philosophical Studies publishes thematically unified anthologies, selected papers from relevant conferences, and edited volumes with a well-defined topical focus inside the aim and scope of the book series. The contributions in the volumes are expected to be focused and structurally organized in accordance with the central theme(s), and are tied together by an editorial introduction. Volumes are completed by extensive bibliographies. The series discourages the submission of manuscripts that contain reprints of previously published material and/or manuscripts that are below 160 pages/88,000 words.

For inquiries and submission of proposals authors can contact the editor-in-chief Mariarosaria Taddeo via: [email protected]

More information about this series at http://www.springer.com/series/6459

Christopher Burr  •  Luciano Floridi Editors

Ethics of Digital Well-Being A Multidisciplinary Approach

Editors Christopher Burr Oxford Internet Institute University of Oxford Oxford, UK

Luciano Floridi Oxford Internet Institute University of Oxford Oxford, UK

ISSN 0921-8599  ISSN 2542-8349 (electronic)
Philosophical Studies Series
ISBN 978-3-030-50584-4  ISBN 978-3-030-50585-1 (eBook)
https://doi.org/10.1007/978-3-030-50585-1

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020

Chapter 2 is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). For further details see licence information in the chapter.

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Acknowledgements

We would like to begin by acknowledging the hard work of all the contributors to this edited collection. It is obvious that without their support and effort this edited collection would not exist and would not contain such excellent and thoughtful chapters.

Second, we would like to acknowledge the entire community at the Digital Ethics Lab (DELab) and the Oxford Internet Institute (OII). This collection was conceived and primarily organised among this supportive community, which is truly one of the best places to conduct the inherently interdisciplinary research that is required to develop such a collection. Christopher would also like to acknowledge his colleagues at the Alan Turing Institute for their support towards the end of the process of publishing this collection.

In addition, we would like to thank all of the participants of a workshop held on 19th July 2019, as well as those at Exeter College, University of Oxford, who hosted the event. Many of the contributions in this collection were initially presented at this workshop, and we're confident that all the contributors who attended will agree that the feedback from the guests and participants was invaluable in revising and formulating the ideas that are presented in this collection.

We would also like to specifically thank Danuta Farah for all her help in organising the workshop and the collection, and for keeping the DELab running so smoothly; Mariarosaria Taddeo, both as the current series editor and also as a colleague who offered insightful feedback and intellectual encouragement during the time this collection was developed; and the whole team at Springer Nature for supporting the publication of this edited collection.

Finally, we would like to acknowledge Microsoft Research for funding the research project associated with this edited collection and for supporting our research unconditionally.

Oxford, UK
Christopher Burr
Luciano Floridi


Contents

1. The Ethics of Digital Well-Being: A Multidisciplinary Perspective (Christopher Burr and Luciano Floridi) .... 1
2. Supporting Human Autonomy in AI Systems: A Framework for Ethical Enquiry (Rafael A. Calvo, Dorian Peters, Karina Vold, and Richard M. Ryan) .... 31
3. Corporatised Identities ≠ Digital Identities: Algorithmic Filtering on Social Media and the Commercialisation of Presentations of Self (Charlie Harry Smith) .... 55
4. Digital Well-Being and Manipulation Online (Michael Klenk) .... 81
5. What Contribution Can Philosophy Provide to Studies of Digital Well-Being? (Michele Loi) .... 101
6. Cultivating Digital Well-Being and the Rise of Self-Care Apps (Matthew J. Dennis) .... 119
7. Emotions and Digital Well-Being: The Rationalistic Bias of Social Media Design in Online Deliberations (Lavinia Marin and Sabine Roeser) .... 139
8. Ethical Challenges and Guiding Principles in Facilitating Personal Digital Reflection (Andrew Gibson and Jill Willis) .... 151
9. Big Data and Wellbeing: An Economic Perspective (Clement Bellet and Paul Frijters) .... 175


10. The Implications of Embodied Artificial Intelligence in Mental Healthcare for Digital Wellbeing (Amelia Fiske, Peter Henningsen, and Alena Buyx) .... 207
11. Causal Network Accounts of Ill-Being: Depression & Digital Well-Being (Nick Byrd) .... 221
12. Malware as the Causal Basis of Disease (Michael Thornton) .... 247
Index .... 263

Contributors

Clement Bellet, Erasmus School of Economics, Erasmus University Rotterdam, Rotterdam, The Netherlands
Christopher Burr, Oxford Internet Institute, University of Oxford, Oxford, UK; The Alan Turing Institute, London, UK
Alena Buyx, Institute for History and Ethics of Medicine, Technical University of Munich School of Medicine, Technical University of Munich, Munich, Germany
Nick Byrd, Stevens Institute of Technology, Hoboken, NJ, USA
Rafael A. Calvo, Dyson School of Design Engineering, Imperial College London, London, UK; Leverhulme Centre for the Future of Intelligence, Cambridge, UK
Matthew J. Dennis, Department of Values, Technology, and Innovation, TU Delft, Delft, The Netherlands
Amelia Fiske, Institute for History and Ethics of Medicine, Technical University of Munich School of Medicine, Technical University of Munich, Munich, Germany
Luciano Floridi, Oxford Internet Institute, University of Oxford, Oxford, UK; The Alan Turing Institute, London, UK
Paul Frijters, London School of Economics, London, UK
Andrew Gibson, Science and Engineering Faculty, Queensland University of Technology (QUT), Brisbane, QLD, Australia
Peter Henningsen, Department of Psychosomatic Medicine and Psychotherapy, Klinikum rechts der Isar at Technical University of Munich, Munich, Germany
Michael Klenk, Delft University of Technology, Delft, The Netherlands
Michele Loi, Digital Ethics Lab, Digital Society Initiative and Center for Biomedical Ethics and the History of Medicine, University of Zurich, Zürich, Switzerland


Lavinia Marin, Ethics and Philosophy of Technology Section, Department of VTI, Faculty of TPM, TU Delft, Delft, The Netherlands
Dorian Peters, Leverhulme Centre for the Future of Intelligence, Cambridge, UK; Design Lab, University of Sydney, Sydney, NSW, Australia
Sabine Roeser, Ethics and Philosophy of Technology Section, Department of VTI, Faculty of TPM, TU Delft, Delft, The Netherlands
Richard M. Ryan, Institute for Positive Psychology and Education, Australian Catholic University, North Sydney, NSW, Australia
Charlie Harry Smith, Oxford Internet Institute, University of Oxford, Oxford, UK
Michael Thornton, Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK
Karina Vold, Leverhulme Centre for the Future of Intelligence, Cambridge, UK; Alan Turing Institute, London, UK
Jill Willis, Faculty of Education, Queensland University of Technology (QUT), Brisbane, QLD, Australia

About the Editors

Christopher Burr is a Philosopher of Cognitive Science and Artificial Intelligence. He is a Senior Research Associate at the Alan Turing Institute and a Research Associate at the Digital Ethics Lab/Oxford Internet Institute, University of Oxford. His current research explores philosophical and ethical issues related to data-driven technologies, including the opportunities and risks that such technologies have for mental health and well-being. A primary goal of this research is to develop frameworks and guidance to support the governance, responsible innovation and sustainable use of data-driven technology within a digital society. To support this goal, he has worked with a number of public sector bodies and organisations, including NHSx; the UK Government's Department for Health and Social Care; the Department for Digital, Culture, Media and Sport; the Centre for Data Ethics and Innovation; and the Ministry of Justice. He has held previous posts at the University of Bristol, where he explored the ethical and epistemological impact of big data and artificial intelligence as a postdoctoral researcher, and where he also completed his PhD in 2017.

Research Interests: Philosophy of Cognitive Science and Artificial Intelligence, Digital Ethics, Decision Theory, Public Policy and Human-Computer Interaction.

[email protected]

Luciano Floridi is the OII’s Professor of Philosophy and Ethics of Information at the University of Oxford, where he is also the Director of the Digital Ethics Lab of the Oxford Internet Institute. Still in Oxford, he is Distinguished Research Fellow of the Uehiro Centre for Practical Ethics of the Faculty of Philosophy and Research Associate and Fellow in Information Policy of the Department of Computer Science. Outside Oxford, he is Turing Fellow of the Alan Turing Institute (the national institute for data science and AI) and Chair of its Data Ethics Group and Adjunct Professor (‘Distinguished Scholar in Residence’) of the Department of Economics, American University, Washington D.C.  


He is deeply engaged with emerging policy initiatives on the socio-ethical value and implications of digital technologies and their applications. He has worked closely on digital ethics (including the ethics of algorithms and AI) with the European Commission, the German Ethics Council and, in the UK, with the House of Lords, the House of Commons, the Cabinet Office and the Information Commissioner's Office, as well as with multinational corporations (e.g. Cisco, Google, IBM, Microsoft and Tencent).

Among his current commitments, he is Chair of the Ethics Committee of the Machine Intelligence Garage project, Digital Catapult, UK innovation programme; Member of the Board of the UK's Centre for Data Ethics and Innovation (CDEI); the Advisory Board of The Institute for Ethical AI in Education; the EU Commission's High-Level Group on Artificial Intelligence; EY AI Advisory Board; and the Advisory Board of the Vodafone Institute for Society and Communications.

Research Interests: Digital Ethics (including the ethics of AI, and Information and Computer Ethics), Philosophy of Information and Philosophy of Technology.

Among his recent books, all published by Oxford University Press (OUP): The Logic of Information (2019); The Fourth Revolution - How the infosphere is reshaping human reality (2014), winner of the J. Ong Award; The Ethics of Information (2013); The Philosophy of Information (2011).

[email protected]

Chapter 1

The Ethics of Digital Well-Being: A Multidisciplinary Perspective

Christopher Burr and Luciano Floridi

Abstract  This chapter serves as an introduction to the edited collection of the same name, which includes chapters that explore digital well-being from a range of disciplinary perspectives, including philosophy, psychology, economics, health care, and education. The purpose of this introductory chapter is to provide a short primer on the different disciplinary approaches to the study of well-being. To supplement this primer, we also invited key experts from several disciplines—philosophy, psychology, public policy, and health care—to share their thoughts on what they believe are the most important open questions and ethical issues for the multi-disciplinary study of digital well-being. We also introduce and discuss several themes that we believe will be fundamental to the ongoing study of digital well-being: digital gratitude, automated interventions, and sustainable co-well-being.

Keywords  Artificial intelligence · Automated interventions · Digital ethics · Digital well-being · Sustainable design

C. Burr (*) · L. Floridi
Oxford Internet Institute, University of Oxford, Oxford, UK
The Alan Turing Institute, London, UK
e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020
C. Burr, L. Floridi (eds.), Ethics of Digital Well-Being, Philosophical Studies Series 140, https://doi.org/10.1007/978-3-030-50585-1_1

1.1 Introduction

Recently, digital well-being has received increased attention from academics, technology companies, and journalists (see Burr et al. 2020a, b). While a significant amount of this interest has been focused on understanding the psychological and social impact of various digital technologies (e.g. Orben and Przybylski 2019), in other cases the interest has been much broader. For instance, the International Network for Government Science Advice (INGSA)—a forum that advises on how scientific evidence can inform policy at all levels of government—claims that "to understand wellbeing in the 21st century requires an understanding of transformative digital technologies as drivers of change not just in human material circumstances, but also in human values and organisational systems that support wellbeing" (Gluckman and Allen 2018, p. 10). The digital transformation of society, it seems, requires a more thorough investigation of how our conceptual understanding of well-being may have been altered by emerging technologies and the new modes of being they enable.

One may rightfully wonder why this surge in interest has happened now. After all, digital technologies have been around for decades and our well-being has been dependent on technology for far longer (Floridi 2014). What, if anything, is different this time around? The short answer is that the function, use, effects, and even experience of digital technologies has been altered significantly by the widespread implementation of ubiquitous computing (e.g. wearables, smartphones), machine learning, and more recently artificial intelligence (AI). These technological developments have resulted in drastic changes to our environment, including social domains such as healthcare, education, employment, policy, and entertainment, and have also been accompanied by drastic shifts in media consumption and lifestyle habits (Ofcom 2018). Combined, these developments are exposing humans to an environment that is increasingly adaptable to them, either as individuals or as members of segmented groups, by monitoring and analysing digital traces of their interactions with intelligent software agents (Burr et al. 2018). This is an important shift. Whereas humanity has refined its ability to engineer and reconstruct its environmental niche over the course of our evolutionary history (Sterelny 2003), we are now at a stage where the design and construction of our environmental niche can be automated, thereby reducing the need for human agency and oversight.
For example, recommender systems, due to their ability to operate at scale and speed, are deployed to control the architecture of our online environments, making split-second decisions about the design elements of web pages (e.g. colour of fonts), placement and content of links (e.g. advertisements), appropriate pricing for products (e.g. dynamic pricing of holiday packages), and much more (Milano et al. 2020). Such a change is unprecedented and demands that we consider the ethical implications for our individual and social well-being. This is the primary purpose of this edited collection: to explore the ethics of digital well-being from a multi-disciplinary perspective, in order to ensure that the widest possible aperture is employed without losing focus on what matters most.

The purpose of this introductory chapter, more specifically, is to provide an informative foundation to ground and contextualise the subsequent discussion, while also offering some initial suggestions about where to head next. That said, we do not consider it necessary to provide a precise definition or theory of 'digital well-being' that can serve as a universal placeholder for each of the subsequent chapters. This would be inappropriate for a number of reasons. First, as a multi-disciplinary (and often interdisciplinary) collection, each chapter will emphasise different aspects of digital well-being, conditional on the explanatory goal they wish to achieve. Second, it is unclear at present whether we need a new concept of 'digital well-being' that is distinct from 'well-being' in a meaningful way. And, finally, the purpose of this collection is to generate further interdisciplinary interest in the topic, in the hope that greater conceptual clarity may arise from subsequent discussions. Therefore, for present purposes, 'digital well-being' can be treated as referring loosely to the project of studying the impact that digital technologies, such as social media, smartphones, and AI, have had on our well-being and our self-understanding of what it means to live a life that is good for us in an increasingly digital society.

While the above outline may serve as a sufficient placeholder for general discussion, there is obviously a risk of it leading to some conceptual confusion. For example, a philosopher could rightfully ask what explanatory or enumerative role the concept offers, over and above 'well-being' simpliciter. Does the restricted domain, entailed by the inclusion of 'digital', offer any useful theoretical constraints, or does it merely impede the philosophical pursuit of identifying the most general conditions for well-being? In addition, psychology and economics have, in recent years, developed new tools that are designed to measure the subjective well-being of individuals or the socioeconomic indicators that are treated as proxies of social well-being. What explanatory or prescriptive role would the concept of 'digital well-being' serve in these disciplines? Could it be employed as a theoretical construct to be measured by a range of psychometric tests? Could it offer any useful theoretical constraints to assist with the selection of relevant socioeconomic indicators of social well-being, and in turn help to guide policy decisions that seek to improve human capabilities in an increasingly digital society? These questions require careful consideration, ongoing scrutiny, and thoughtful discussion, and we hope that the current collection may offer a rich starting point for answering them.
This collection includes chapters that explore digital well-being from a range of disciplinary perspectives, including philosophy, psychology, economics, health care, and education. Because of this broad focus, Sect. 1.2 provides a short primer that serves as an introduction for those readers who may be approaching the topic of digital well-being from a particular disciplinary perspective. In each of these sub-sections the reader will also find a short commentary from invited experts who were asked to provide their own views on how they think their respective disciplines may be affected by ongoing technological innovation (e.g. novel research methodologies, new means to test empirical hypotheses, impacts on policy-making), as a way of pointing to further areas of research for the interested reader. Following this, Sect. 1.3 introduces several themes that we believe will be fundamental to the ongoing study of digital well-being: digital gratitude, automated interventions, and sustainable co-well-being. These topics are not intended to be exhaustive or representative of the literature (see Burr et al. 2020a, b for a more detailed review). Rather, they have been chosen in part because of the connection they have to some key ideas in other chapters. What they offer is merely some initial ideas that are intended to be, in conjunction with the subsequent chapters, a platform and guide for further discussion. Therefore, we hope that this collection as a whole will provide an informative starting point for readers from different disciplines interested in the study of well-being, while also contributing to what we expect will be an exciting and interdisciplinary pursuit of ensuring humanity can flourish in this new digital environment.


1.2 Theories of Well-Being: A Short Primer

Theoretical statements about well-being are typically understood as making either a descriptive claim (e.g., whether the implementation of a socio-economic policy typically enhances or decreases some quantifiable measure of well-being), or a normative claim (e.g., an evaluation of the goodness or badness of some moral action with regards to whether it maximises welfare). Although this can be a useful heuristic for assessing the nature of a particular well-being claim, it is also conceptually problematic. As Alexandrova (2017, p. xv) argues, empirical (descriptive) claims about well-being rely on an inseparable normative standard: "any standard or method of measurement of well-being is already a claim about the appropriateness of an action or state in the light of some assumed value." For example, if a policymaker states that an economic policy (e.g., increasing funding for education) is highly correlated with some measure of social well-being, their descriptive claim is also mixed with a normative element (i.e., increasing funding for education ought to be done to increase social welfare).

This is why the study of well-being is an inherently interdisciplinary task. Many disciplines, including philosophy, psychology, design engineering, economics, law, medicine, and sociology are concerned with well-being, and each discipline has its own distinct theoretical framework. Therefore, it is important to understand the commonalities and differences between the various theoretical perspectives because any digital technology that claims to be promoting or protecting well-being must at the very least implicitly presume some general account of what it is for a life to go well for an individual. By introducing some of the major theoretical perspectives, we will be able to specify more clearly what is at stake.
Readers who are already familiar with the general issues in a particular discipline should feel free to skip over the relevant section.

1.2.1  Philosophy Philosophy has a long tradition of seeking to understand the concept of ‘well-being’, including its relationship with other important ethical concepts, such as ‘reason’ or ‘goodness’. A standard view is that ‘well-being’ refers to what is non-­instrumentally good for a subject S (Crisp 2006; Woodard 2013). This notion is used to separate that which is intrinsically (i.e., non-instrumentally) good for a person—sometimes referred to as ‘prudential value’—from that which is merely good because of its instrumental role in leading to a greater level of well-being (e.g., income, employment, or social network). Therefore, a fully developed philosophical theory of well-­ being is concerned both with enumerating those things that are non-instrumentally good for someone (e.g., a mental state such as pleasure, or desire-satisfaction) and also explaining why the individual ought to pursue and promote the respective good (Lin 2017; Crisp 2006; Tiberius 2015). These two theoretical objectives can come apart, such that there can be agreement between two theories regarding the

1  The Ethics of Digital Well-Being: A Multidisciplinary Perspective

5

enumerated goods for a particular theory (e.g., friendship) but disagreement concerning the reasons why these goods have prudential value (e.g., friendship satisfies an informed desire or fulfils an important part of our nature). Although it has been extended or challenged over the last couple of decades (Haybron 2008; Woodard 2013; Sumner 1996), a (simplified) typology for well-being theories, famously introduced by Derek Parfit (1984), can help organise the various philosophical theories of well-­being into hedonistic theories, desire-fulfilment theories, and objective list theories. This typology is sufficient for our present purposes. Hedonistic theories claim that all that matters for well-being is the amount of pleasure and pain experienced by an individual, either at some point in time or over the course of their life. Different theories may diverge on how these states should be measured (i.e. their hedonic level) but will agree that more pleasure is good and more pain is bad. According to hedonists, if activities or objects such as music, love, food, or expressions of gratitude are good for us (the enumerative component), it is in virtue of their bringing about mental states such as pleasure and avoiding mental states such as pain (the explanatory component). Desire-fulfilment theories claim that it is good for us to get what we desire, and conversely, if our desires remain unfulfilled or frustrated this will lead to a decrease in our well-being. As with the other two theories, micro-debates exist within this class of theories that try to deal with a variety of possible objections. For example, desire-fulfilment theories are often objected to on the basis that the fulfilment of certain desires (e.g., the desire to stream one more television show rather than reading a book, or to eat processed meat rather than a healthier plant-based alternative) clearly leads to a diminished level of well-being. 
As such, desire-fulfilment theorists will seek to make the initial claim more precise and may argue that only those desires that are informed (i.e. held on the basis of rational deliberation and relevant evidence) should be considered. Whereas desire-fulfilment and hedonistic theories make reference to subjective attitudes that an individual possesses, objective list theories claim that well-being is constituted by some list of goods that are prudentially valuable irrespective of the attitude that an individual may hold towards them. Aside from this feature of attitude-independence, as Fletcher (2016) labels it, the list of non-instrumental goods may have little in common. They could simply be a diverse list including goods such as achievement, friendship, pleasure, knowledge, and virtue, among others. Each of the above classes of theories is home to a series of micro-debates, e.g., whether the process of obtaining some good must be experienced by the subject to entail an improvement in their overall well-being. These debates are a worthwhile theoretical enterprise but need not concern us for our present purposes. Moreover, in recent years, philosophers have focused on how it may be possible to integrate the various disciplines that study well-being in order to show how they can collectively contribute to an increased understanding of well-being (Alexandrova 2017; Bishop 2015; Haybron 2008). For example, Bishop states that we should begin with the assumption that “both philosophers and scientists are roughly right about wellbeing, and then figure out what it is they’re all roughly right about” (2015, p. 2). Psychology, as we will see in the next sub-section, is one of these sciences.


C. Burr and L. Floridi

Philosophy and Digital Well-Being Guy Fletcher (University of Edinburgh) I understand ‘digital well-being’ to mean the impact of digital technologies upon well-being as opposed to some specific dimension of well-being (for an introduction to philosophy and well-being generally, see Fletcher 2016). There is a rich seam of work at the intersection of politics, philosophy, and journalism on the ways in which social media, big data, and the like function to undermine democratic institutions. I will leave this very interesting work to one side to focus on philosophical work that concerns the direct impact of digital technologies upon individual well-being. Philosophers are interested in the myriad ways that digital technologies can promote or undermine well-being. One major focus of attention has been social media and the way in which social media impacts friendship, an important prudential good (whether instrumental or intrinsic). Social media creates new categories of purported friendship (‘Facebook friends’), makes it possible to make and sustain purely online relationships, and also has the capacity to affect our real-world friendships in ways that might be positive or negative for well-being (e.g. Elder 2014; Fröding and Peterson 2012; Jeske 2019; Sharp 2012; Vallor 2012). Philosophers are also interested in the way in which digital technologies such as social media impact upon the construction and expression of our personalities (e.g. Garde-Hansen 2009; Stokes 2012). Digital technologies are also philosophically significant in their ability to affect our powers, capacities and virtues. Recent philosophical work has examined the weakening of our powers of attention in a world of endless, readily-available digital distraction, and the interaction between technology and the virtues (e.g. Williams 2018; Vallor 2016). One live question is whether it is possible to use or amend the technology itself to reduce its attention-grabbing nature.
Another more squarely philosophical question is whether we can equip ourselves with powers and capacities to mitigate the attention-hogging effects of digital technologies, by developing specific virtues of attention and the like (e.g. Vallor 2016). Biography Dr. Guy Fletcher is a senior lecturer in philosophy at the University of Edinburgh. His work examines the nature of moral discourse, philosophical theories of well-being, and theories of prudential discourse. He edited the Routledge Handbook of Philosophy of Well-Being (2016) and co-edited Having It Both Ways: Hybrid Theories in Meta-Normative Theory (Oxford University Press 2014). He is the author of An Introduction to the Philosophy of Well-Being (Routledge 2016) and has another book, Dear Prudence, forthcoming with Oxford University Press.


1.2.2  Psychology Well-being has become an important indicator of progress for many governments around the world, thanks in part to empirical research that has shown it to be associated with a range of positive outcomes such as “effective learning, productivity and creativity, good relationships, pro-social behaviour, and good health and life expectancy” (Huppert and So 2013). Unlike philosophy, the behavioural and cognitive sciences—including psychology—are less concerned with whether these goods are non-instrumentally valuable, but rather with what causes them to fluctuate and how best to measure them. To understand the current theoretical focus of psychological theories of well-being, it is worth mentioning the emergence of positive psychology. Positive psychology emerged as a distinct disciplinary enterprise at the turn of the century. Writing in 2000, Seligman and Csikszentmihalyi stated that, “[p]sychology has, since World War II, become a science largely about healing. It concentrates on repairing damage within a disease model of human functioning” (Seligman and Csikszentmihalyi 2000, p. 5, emphasis added). This disease model assumed that well-being arose from the removal of mental disorders such as depression, and thus required no separate study or distinct methodology of its own. Positive psychology rejected this model and instead sought to reorient psychological science towards a better understanding of valuable subjective experiences in their own right (e.g., happiness, contentment, or satisfaction). Its goal was to determine which environmental features are needed to achieve an optimal level of human flourishing for individuals and communities. To achieve this goal, it was necessary to establish a distinct set of theoretical tools which could be used to measure and validate various psychological constructs that constitute well-being.
Perhaps the most famous of these scales is subjective well-being (SWB), which comprises three components: frequent positive affect, infrequent negative affect, and an evaluation of the subject’s ‘satisfaction with life’ (Diener et al. 1985). The assessment of SWB typically relies on self-report (i.e. answers given by an individual in response to a question and on the basis of introspection), and because of this reliance the measurement of SWB can be affected by a range of cognitive or memory biases that impact an individual’s ability to accurately recall and report on the subjective experience being assessed (e.g. frequency of positive emotions). Methods such as experience sampling (Csikszentmihalyi 2008) have improved the reliability of SWB measures, by allowing researchers to deliver near real-time assessments of an individual’s experience through notifications that prompt users to reflect on their well-being at specific times of the day and during different activities, providing what is sometimes referred to as ‘ecologically-valid data’. More recently, suggestions to extend these methodologies by leveraging advances in ubiquitous computing have been proposed (Reeves et al. 2019). SWB is widely assumed to be multidimensional, but there is disagreement over just how many dimensions (or factors) to include. Huppert and So (2013), for example, argue that ten factors are needed: competence, emotional stability, engagement,
meaning, optimism, positive emotion, positive relationships, resilience, self-esteem, and vitality. In contrast, Ryff (1989) claims that only six factors are needed: autonomy, environmental mastery, personal growth, positive relationships, purpose in life and self-acceptance. In spite of these disagreements, there is often significant overlap between different theories, and many often rely on the same psychometric scales for measuring subjective well-being (e.g. Satisfaction With Life, Positive and Negative Affect Scale). We can separate the various psychological theories into two groups: hedonic and eudaimonic. Similar to philosophical hedonism, hedonic psychology claims that well-being consists of subjective experiences of pleasure or happiness, and can include “the preferences and pleasures of the mind as well as the body” (Ryan and Deci 2001, p. 144). Eudaimonic psychology, by contrast, claims that well-being consists of achieving one’s potential, as determined by human nature. According to eudaimonic psychology, human flourishing occurs when “people’s life activities are most congruent or meshing with deeply held values and are holistically or fully engaged” (Ryan and Deci 2001, p. 146). These theoretical perspectives are often broadly characterised and can encompass a wide variety of different theories within their scope. For example, self-determination theory (SDT) is characterised as a eudaimonic theory (Ryan and Deci 2001). Briefly, SDT is a theory of human motivation and personality that is concerned with identifying the basic psychological needs of human individuals as well as the environmental conditions that are required to supply people with the nutriments to thrive and grow psychologically (Ryan and Deci 2017). SDT identifies three basic needs (competence, autonomy, and relatedness), which must be satisfied for an individual to experience an ongoing sense of psychological integrity and well-being. It is discussed further in Chap. 2 of this collection (Supporting Human Autonomy in AI Systems: A Framework for Ethical Enquiry), in a contribution from Rafael A. Calvo, Dorian Peters, Karina Vold, and Richard M. Ryan.
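The self-report instruments mentioned above are scored by simple arithmetic. As a minimal illustration (the item responses below are invented, and the affect-balance index is a toy composite for exposition, not a published scoring rule):

```python
def swls_score(responses):
    """Score the Satisfaction With Life Scale (Diener et al. 1985):
    five items, each rated 1 (strongly disagree) to 7 (strongly agree),
    summed to a total between 5 and 35."""
    assert len(responses) == 5 and all(1 <= r <= 7 for r in responses)
    return sum(responses)

def affect_balance(positive, negative):
    """Toy affect-balance index: mean positive-affect rating minus mean
    negative-affect rating, e.g. from experience-sampling prompts."""
    return sum(positive) / len(positive) - sum(negative) / len(negative)

print(swls_score([6, 5, 6, 4, 5]))                      # -> 26
print(round(affect_balance([4, 5, 3], [2, 1, 2]), 2))   # -> 2.33
```

The substantive measurement problems noted above (recall bias, ecological validity) lie not in this arithmetic but in how the input ratings are elicited.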

Psychology and the Study of Digital Technologies Amy Orben (University of Cambridge) Psychologists are becoming increasingly involved in the study of novel technologies like social media. Because the field focuses mainly on the individual, much of this work has examined digital technology’s effect on people’s well-being, cognition or behaviour (e.g. Burke and Kraut 2016). This research has routinely taken a broad view: examining the use of digital technologies as a whole, and trying to quantify how this affects the whole population or certain broad sections of society. Yet the diversity of digital technology uses and users might be the crucial aspect missing in current psychological investigations.


Much research has examined correlations or simple longitudinal relations between ‘screen time’ and general well-being outcomes, yet few concrete results have been found (Orben and Przybylski 2019; Jensen et al. 2019). We now know that increased time spent on digital technologies is routinely correlated with decreased well-being, but it is unclear whether this tiny correlation is causal or practically significant (Orben et al. 2019; Ferguson 2009). These issues are compounded by the low transparency of work done in the area, especially in the light of the recent replication crisis and open science movements (Munafò et al. 2017). The coming years will see more and more psychologists moving away from general ‘screen time’ and towards digital tracking and fine-grained digital usage data—if such data are provided by the companies that hold them (Ellis et al. 2019). They could then examine how specific uses of technologies might affect certain cognitions (e.g. self-comparison), which could in turn affect well-being (Verduyn et al. 2017). Furthermore, psychologists are increasingly integrating more robust and transparent research methods into their work, while also acknowledging that in-depth longitudinal studies will be needed to tease apart the cause-and-effect relationships that the public and policymakers are so interested in. Such work would ultimately allow researchers to come closer to understanding whether the increased use of digital technologies causally decreases population well-being, by triangulating different types of evidence, diverse study designs and various measurement methodologies (Munafò and Smith 2018; Orben 2019). Biography Dr. Amy Orben is a College Research Fellow at Emmanuel College and the MRC Cognition and Brain Sciences Unit. Her work using large-scale datasets to investigate social media use and teenage mental health has been published in leading scientific and psychology journals.
The results have called into question many long-held assumptions about the potential risks and benefits of ‘screen time’. Alongside her research, Amy campaigns for the use of improved statistical methodology in the behavioural sciences and the adoption of more transparent and open scientific practices, having founded the global ReproducibiliTea initiative. Amy also regularly contributes to media and policy debates, in the UK and internationally.

1.2.3  Economics The development of new psychological measures of well-being has also brought about changes to the socio-economic study of well-being. Welfare economics, for example, is typically concerned with the measurement of aggregate levels of well-being. Its aim is to construct a social welfare function, which can be used to rank-order a collection of social states (e.g., the differential allocation of public resources),
and in turn help decide which of a possible set of social policies would maximise social well-being. This normative approach assumes a preference satisfaction view of well-being, in which rational agents are assumed to choose what is best for them and to reveal their preferences through overt choice behaviour (Binmore 2008). Obtaining this data at scale, however, is challenging, and so surrogate indicators for national (or aggregate) well-being are often used instead. Until recently, one of the most popular indicators of national well-being was gross domestic product (GDP) per capita. As Diener and Seligman (2004) note, this is because economic indicators of this type are “rigorous, widely available, and updated frequently, whereas few national measures of well-being exist.” In addition, increased GDP per capita is assumed to lead to an increase in the freedom of choice available to individuals, which from the perspective of the preference satisfaction view means a greater ability to maximise well-being (or utility). However, the use of such indicators as surrogates for national well-being has been widely criticised (e.g. Stiglitz et al. 2008), most notably from approaches within development economics, which often eschew the idea of a preference satisfaction view of well-being (Nussbaum 2011). One example is the capability approach (Robeyns 2005; Nussbaum and Sen 1993). In short, the capability approach draws attention to what people are “actually able to do and to be” in their environment, rather than simply assuming that their choice behaviour reveals a stable and ordered set of preferences (Robeyns 2005)—an assumption that is also heavily challenged by research in behavioural economics that focuses on cognitive biases in judgement and decision-making (Kahneman 2011).
A motivating idea here is that individuals need the freedom to pursue distinct capabilities, which may include health, education, arts and entertainment, political rights, social relationships, and so on. These diverse capabilities are poorly captured by a single indicator such as GDP per capita, and so a richer framework for measuring well-being is required. The influence of the capabilities approach can be seen in the United Nations Human Development Index and related programmes such as the Sustainable Development Goals (United Nations 2019). It also influenced a report, commissioned by the then President of the French Republic, Nicolas Sarkozy, who stated that he was “unsatisfied with the present state of statistical information about the economy and the society” and that economic progress and social development required more relevant indicators than simply GDP (Stiglitz et al. 2008). As one of their key recommendations, the commission suggested that “[m]easures of both objective and subjective well-being provide key information about people’s quality of life” and that “statistical offices should incorporate questions to capture people’s life evaluations, hedonic experiences and priorities in their own survey” (Stiglitz et al. 2008, p. 12). Chapter 9 of this collection (Big Data and Wellbeing: An Economic Perspective), by Clement Bellet and Paul Frijters, offers a helpful overview of the recent developments that have followed this recommendation, leveraging insights derived from data-driven technologies, such as machine learning.

Public Policy, Well-Being and Digital Technology Florian Ostmann (Alan Turing Institute) There are two prominent strands of inquiry at the intersection of well-being and digital technology that are of interest to public policy researchers and increasingly relevant to policymaking agendas. The first strand may be referred to as digitally derived insights about well-being—work that leverages technology-enabled methods and big data analytics to measure well-being and understand and manage its determinants. In the context of measuring economic welfare, this includes the use of novel analytical techniques and unconventional data sources (e.g. electronic payments, social media, or business news data) to predict GDP growth and related indicators in real-time (Anesti et al. 2018; Galbraith and Tkacz 2018) or with greater accuracy compared to traditional approaches. It also includes the use of massive online choice experiments for welfare measurement—for instance, to estimate the economic value of zero-priced goods, which is not captured by measures of GDP (Brynjolfsson et al. 2019).1 In the context of work that is dedicated to measuring subjective well-being, digital methods have impactful applications as well, illustrated by the use of digital surveys or novel inferential methods (e.g. sentiment analysis applied to social media activity or digitised books) to arrive at estimates of present or historical levels of subjective well-being (Hills et al. 2019). Finally, technology and data analytics can enable pathbreaking insights about specific factors that impact well-being—such as urban air quality, for example—improving our understanding of and ability to manage these factors (Hamelijnck et al. 2019; Warwick Machine Learning Group 2019). The second strand concerns the well-being effects of digital technologies (i.e., the positive or negative consequences that the adoption of relevant technologies may have for individual and societal well-being).
Consequences of interest from a public policy perspective may be intrinsically related to the technology in question or be characterised by a more indirect relationship, spanning a wide range of different policy domains (OECD 2019). Correspondingly, understanding the well-being effects of digital technologies and developing policy strategies that support the realisation of benefits while managing negative effects constitutes a wide-ranging area of research. This area includes the potential of technological innovation to enable well-being-enhancing improvements in the design and delivery of goods and services, especially in essential areas where accessibility and quality improvements may be particularly impactful for disadvantaged members of society (e.g. health, education, financial services and the judicial system). It also comprises questions around digital exclusion, concerns about the risk of certain forms of innovation rendering consumers vulnerable to exploitative commercial practices, and a growing policy debate around ‘online harms’ (e.g. disinformation, cyberbullying, encouragement of self-harm, online grooming, and access to age-inappropriate material) (Vidgen et al. 2019; UK Government 2019). Finally, there are important questions around the relationship between digital innovation and more abstract welfare-related categories of analysis, including economic growth, labour market dynamics, and competition and market power.

1. The most prominent zero-priced goods and services are often digital goods themselves, such as search engines or social media platforms.

Biography Dr. Florian Ostmann is the Policy Theme Lead within the Public Policy Programme at The Alan Turing Institute. His work focuses on the societal implications of data science and AI, concentrating on the use of data science and AI to address governmental and social challenges and on the ethical and regulatory questions raised by their application in different sectors. As part of this work, he leads projects across a range of thematic areas, including financial services, criminal justice, and combatting modern-day slavery. Florian also has strong interests in the role of value judgments in applied economics, health policy, and questions concerning the future of work and social welfare systems.

1.2.4  Health While conceptually distinct, health and well-being are also intimately related, and therefore some brief remarks are helpful. The World Health Organisation defines health as “a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity” (World Health Organisation 2019), and Crisp (2017) notes that “[p]opular use of the term ‘well-being’ usually relates to health”. In the medical sciences, as Alexandrova (2017, p. 168) notes, “the stand-in for well-being is health-related quality of life”. Quality of life (QOL), and related variants such as quality-adjusted life years (QALYs) or disability-adjusted life years (DALYs) (Hausman 2015), is used in similar ways to economic constructs (i.e. as an input to calculations that help to determine the efficiency of policy decisions and to allocate healthcare resources). As with psychology, the medical sciences also rely on a range of more specific measures of well-being, which can be tailored to individual diseases or patients and sometimes extend to the well-being of caregivers. Although these measures will often rely on clinical diagnosis and observable indicators, subjective evaluation and self-report are also seen within healthcare in the
form of patient-reported outcomes (e.g. Alexandrova 2017; Haybron and Tiberius 2015). We offer more detailed comments on the links between digital health technologies and digital well-being in Sect. 1.3.2.

Digital Health: The Future Role of Policy Indra Joshi and Jessica Morley (NHSX) In July 2019 NHSX—a new joint unit bringing together staff from NHS England, the Department of Health, and NHS Improvement—came into being. It was created to ensure the NHS benefits from the best digital health thinking from government, the NHS, academia and industry. For too long, there has been a consistent lack of investment in digitising the health service, slow adoption of technology (it takes approximately 17 years for a new innovation to spread through the NHS; Leigh and Ashall-Payne 2019), and fear stemming from past failures such as Care.Data (Sterckx et al. 2016) and the National Programme for IT (Justinia 2017). These setbacks have left NHS staff reliant on technology that was outdated in the 90s and forced patients to turn to digital services provided by unvetted third parties in an attempt to manage or improve their health and well-being. This situation has introduced huge opportunity costs, economic costs (Ghafur et al. 2019), and risk into healthcare systems across the globe—not just the NHS (Mackey and Nayyar 2016). It is, therefore, clearly untenable. There is a need for a step change in the way that healthcare system providers approach digital health and well-being. Policymakers need to adopt a principled, proportionate approach to its governance (Morley and Joshi 2019)—one that is open to the significant opportunities for improving outcomes, cutting costs, and ultimately saving lives—but mindful of the clinical and ethical risks (Morley and Floridi 2019b). This requires introducing policies that ensure digital health technologies are: designed for specific users (Themistocleous and Morabito 2012); developed in the open (Goldacre et al. 2019); interoperable; thoroughly and consistently evaluated (Ferretti et al. 2019); evidence-based (Greaves et al. 2018); economically viable; clinically safe and efficacious (Challen et al. 2019); and pro-ethically designed (Floridi 2016a). According to the World Health Organisation (2019), “Health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity.” Clearly, harnessing digital technologies and the data that they generate in the right way is going to be essential if healthcare systems, like the NHS, want to ensure their service users are able to achieve this state of well-being. Thus, while developing such policies will take time, we cannot afford to wait.


Biographies Dr. Indra Joshi is Director of AI at NHSX, and an emergency clinician by training. She is responsible for creating a trusted environment that supports innovation of data-driven technologies while being: the safest in the world; appropriately responsive to progress in innovation; ethical, legal, transparent and accountable; evidence-based; competitive and collaborative; and in alignment with the NHS Constitution. Jessica Morley is currently pursuing an M.Sc. at the Oxford Internet Institute, Oxford, U.K. She is also the policy lead at the DataLab, University of Oxford, and the AI subject matter expert at NHSX, London, U.K. Her research focuses on the ethical and policy implications of digitising healthcare.

1.3  Digital Well-Being: Three Themes for Further Discussion In this section, we offer some thoughts on three themes that overlap with some of the subsequent chapters. These themes are inherently interdisciplinary in nature, and, therefore, are good examples of why the study of digital well-being requires a multidisciplinary approach. One thing they have in common is a strong emphasis on the importance of ethical design. As such, it is helpful to begin with a few clarificatory remarks. The following two statements are widely accepted in the ethics of technology: (1) technological design and engineering is a value-laden process, and (2) the use and implementation of technology has the potential to fundamentally alter the way we understand ourselves, each other, and our environment, which may in turn create new ethical challenges (Floridi 2010). A single example can help illustrate and justify both statements: the design and use of wearable heart-rate monitors, such as smartwatches and fitness trackers. Starting with the first statement, a common technique that modern wearables employ for measuring heart-rate is known as photoplethysmography (PPG). PPG uses a light-emitting sensor to estimate changes in arterial volume caused by pulsating blood pressure. However, the design choice of which colour light to use (e.g. red, green, or blue) can have different consequences, some of which raise ethical concerns. For instance, green light has higher levels of accuracy than red light when the device is in constant motion—a common occurrence for wearables that are used for fitness activities. Therefore, if one is optimising for accuracy as determined by the context of use (i.e. fitness), this would seem like the obvious choice. However, green light has lower accuracy for darker skin tones than lighter skin tones, leading to a potential bias against certain groups of people (Woolley and Collins 2019). Therefore, the choice of which colour of light to use in PPG can be treated as a
value-laden design choice, which may favour people with lighter skin and discriminate against people with darker skin. Turning to the second statement, a number of studies have explored how data collected from heart-rate monitors can also be used to infer psychological information, including affective states (i.e., emotions) and psychopathological states (e.g., levels of anxiety) (see Burr and Cristianini 2019 for an overview). The process of recording and measuring biometric data and converting them into user-friendly types of information can be incredibly valuable for individuals who wish to track their health and well-being. In some cases, the bio-feedback that these devices provide can even allow individuals to acquire a degree of volitional control over their heart-rate, which could help alleviate symptoms of anxiety (Abukonna et al. 2013). However, as sociologists and philosophers have noted, the rapid increase in information to quantify and measure states of our bodies can also threaten our psychological integrity by negatively impacting our self-understanding and self-awareness (Lupton 2016; Floridi 2014). To illustrate, consider how the digital representation of our psychological states or processes may be in competition with the internal representations that our brains have evolved to rely upon. Emotions, for example, are formed and refined on the basis of signals that originate from within our bodies. Smart devices aim to bypass this process—known as ‘interoception’—by inferring our psychological states through a variety of techniques, some of which rely on probabilistic machine learning algorithms.2 In addition to ongoing questions regarding the accuracy and validity of these measurement procedures, there is a further concern about how this information is stored and presented to the user. For instance, unlike digital representations, our inner emotional states are not perpetually recorded in discrete forms in silico.
Rather, our emotions are typically appraisals of our current context, and provide salient information about how to act in the current environment—they are action-guiding (Frijda et al. 1989). In this sense, emotions have an immediacy and embeddedness that connects us to our present surroundings in ways that permanently stored digital representations do not. Digital representations, by contrast, are detachable records of past states or processes. While this allows them to store historical information for reflection, in doing so they lose the immediacy that our mental representations provide. It is possible that an increased use of digital technologies to represent our mental states or processes could alter the level of trust that we have in our own interoceptive capabilities, and may result in destabilising effects for our psychological integrity and well-being, due to the altered functional role they play in guiding our behaviour and self-understanding. These brief examples of wearable heart-rate monitors help to emphasise the importance of ethical principles in the design and development of digital technologies (for further discussion, see Floridi 2010; Calvo and Peters 2014). The following three themes, influenced by the subsequent chapters, share this emphasis on the importance of ethical design.
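To make the PPG example more concrete, the following deliberately simplified sketch shows how a wearable might derive a heart rate from a raw PPG waveform by counting peaks. The synthetic signal, sampling rate, and refractory period below are illustrative assumptions, not any manufacturer's actual algorithm; real devices must additionally handle motion artefacts and the skin-tone-dependent signal quality discussed above.

```python
import math

def estimate_bpm(signal, fs, refractory_s=0.33):
    """Estimate heart rate (beats per minute) from a PPG waveform.

    A sample counts as a beat if it is above the signal mean and is a
    local maximum, subject to a refractory period (0.33 s here, i.e. a
    cap of roughly 180 bpm -- itself a value-laden design assumption)
    that suppresses double-counting of noisy peaks."""
    mean = sum(signal) / len(signal)
    refractory = int(refractory_s * fs)
    peaks, last = 0, -refractory
    for i in range(1, len(signal) - 1):
        if (signal[i] > mean
                and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1]
                and i - last >= refractory):
            peaks += 1
            last = i
    duration_min = len(signal) / fs / 60
    return peaks / duration_min

# Synthetic 10-second PPG trace sampled at 50 Hz with a 72 bpm pulse.
fs = 50
ppg = [math.sin(2 * math.pi * (72 / 60) * i / fs) for i in range(fs * 10)]
print(round(estimate_bpm(ppg, fs)))  # -> 72
```

Even in this toy version, the choice of threshold and refractory period trades sensitivity against false beats, illustrating how seemingly technical parameters embed design values.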

2. See Calvo et al. (2015) for an introduction to current techniques in affective computing.


1.3.1  Digital Gratitude The majority of readers will have an instinctive understanding of the concept ‘gratitude’, recognising it as either an emotion (e.g., feeling grateful towards an individual who has helped you), a behaviour (e.g., expressing gratitude to a friend or family member), or a virtuous trait (e.g., a praiseworthy disposition of an individual). However, the conjunction of ‘digital’ and ‘gratitude’ may not elicit the same instinctive understanding. Our use of the term ‘digital gratitude’ is intended to emphasise the mediating role that digital technologies have on the feeling, expression, or trait of gratitude. As is well understood, digital technologies are not neutral. Their design is often motivated by commercial interests, as Charlie Harry Smith highlights in Chap. 3 of this collection (Corporatised Identities ≠ Digital Identities: Algorithmic Filtering on Social Media and the Commercialisation of Presentations of Self), and can also be used to manipulate user behaviour, as Michael Klenk discusses in Chap. 4 (Digital Well-Being and Manipulation Online). This is important because, as Lavinia Marin and Sabine Roeser note in Chap. 7 (Emotions and Digital Well-Being: The Rationalistic Bias of Social Media Design in Online Deliberations), digital technologies, such as social media platforms, “do not mediate the full range of human emotions and thus are an impediment for successful deliberations”, signifying a key risk of digital technologies. Similarly, we can ask what impact, both positive and negative, digital technologies may have on our conceptual understanding of ‘gratitude’, as well as the emotion itself. Gratitude is an important affective trait. It is recognised by psychologists, anthropologists, and evolutionary biologists as an other-directed emotion (e.g.
gratitude towards a friend, object, or state of the world), one that plays a prosocial role in communities by strengthening interpersonal relationships and generating positive behavioural norms within organisations and groups (Yost-Dubrow and Dunham 2018; Ma et  al. 2017). Furthermore, gratitude is associated with higher levels of well-being—more grateful people are happier, express higher levels of life satisfaction, and also demonstrate greater levels of resilience to negative impacts on psychological well-being such as stress and burnout (Layous et  al. 2017; Wood et al. 2008). One reason that gratitude may have these benefits is because, as Allen (2018, p. 8, emphasis added) notes, “the experience of gratitude encourages us to appreciate what is good in our lives and compels us to pay this goodness forward.” Here, gratitude serves a dual role: it helps us identify and appreciate sources of prudential value (e.g. a mutually supportive online relationship with an anonymous stranger who helps an individual with difficult life challenges), and it encourages us to then increase the overall amount of prudential value by repeating the original behaviour and helping spread the feeling of gratitude to others. This latter role may also strengthen our original feelings of gratitude, generating a positive feedback loop. Because of these benefits, digital technologies should (where relevant) be designed to promote feelings and expressions of gratitude, as well as additional

1  The Ethics of Digital Well-Being: A Multidisciplinary Perspective


motivating psychological attitudes (see Chap. 2 of this collection). For instance, designers could introduce additional points of friction into the process of interacting with information online (e.g., sharing and reading content on social media platforms). This could allow users to reflect on how they are reacting to information, rather than just instinctively “liking” a post with little to no thought about the benefits they received from the original content. While this may reduce the amount of valuable data available to the companies (e.g., implicit feedback from user behaviour that updates recommender system algorithms), it could generate more meaningful engagement from users, thereby increasing the perceived value of the social media platform (see Burke and Kraut 2016). Beyond social media, designing digital technologies to promote feelings and expressions of gratitude could have additional benefits. For instance, it could help direct our attention to the intrinsic value of our digital environment and possibly generate more virtuous civic attitudes, rather than simply self-directed moral deliberation.3 By prompting users to appreciate what is good in their lives, they may come to recognise the shared source of prudential value that is contained within the informational infrastructure that surrounds us—what we have previously referred to as the infosphere (Floridi 2014). For instance, AI offers myriad opportunities to improve and augment the capabilities of individuals and society, ranging from improved efficacy in healthcare decisions (Morley and Floridi 2019c) to identifying novel markers of social welfare in big datasets (see Chap. 9 of this collection). It is important that we (a) continue to improve and augment our capabilities without reducing human control, and (b) continue to cultivate societal cohesion without eroding human self-determination (Floridi et al. 2018). 
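As a purely hypothetical illustration of such a point of friction, a “like” action could require a brief pause and an optional reflection prompt. The function name, prompt text, and delay below are our own invention for the sake of the sketch, not any platform’s actual interface:

```python
import time

REFLECTION_DELAY_SECONDS = 3  # assumed delay; any brief pause would serve the purpose

def like_with_reflection(post_title, ask=input, wait=time.sleep):
    """Register a 'like' only after a short pause and an optional
    reflection prompt. `ask` and `wait` are injectable so the sketch
    can be exercised without blocking on real I/O."""
    wait(REFLECTION_DELAY_SECONDS)  # friction: the action does not complete instantly
    answer = ask(f"What did you appreciate about '{post_title}'? (Enter to skip) ")
    return {
        "post": post_title,
        "liked": True,
        # An empty answer is recorded as no reflection, not as a refusal to like.
        "reflection": answer.strip() or None,
    }

# Example with injected stand-ins for the interactive parts:
result = like_with_reflection(
    "Gratitude journal tips",
    ask=lambda prompt: "it reminded me to thank a colleague",
    wait=lambda seconds: None,
)
```

Injecting `ask` and `wait` keeps the interactive parts swappable; the design point is only that the “like” carries a moment of reflection with it, whatever the concrete interface.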
A greater consideration of digital gratitude in the design of digital technologies could help us strike these balances, by motivating us to identify sources of prudential value, both individual and social. However, as Andrew Gibson and Jill Willis demonstrate clearly in Chap. 8 (Ethical Challenges and Guiding Principles in Facilitating Personal Digital Reflection), the process of designing even simple gratitude-enhancing technologies, such as digital self-reflective writing journals, can pose many complex and interrelated ethical challenges. Furthermore, as noted by Matthew Dennis in Chap. 6 (Cultivating Digital Well-Being and the Rise of Self-Care Apps), the process of cultivating positive outcomes, such as well-being or gratitude, may sometimes generate a tension between the pursuit of the positive outcome on the one hand, and negative outcomes associated with too much screen time on the other. These topics are far from resolved, and we hope that this collection serves to motivate ongoing discussion and debate.

3  See (Vallor 2016; Howard 2018; Floridi 2010, Chapter 1) for a range of comments and approaches to moral virtues in the context of sociotechnical systems.


1.3.2  Automating Interventions

The final chapters in this collection discuss theoretical and conceptual issues related to the use of digital technologies for health care, starting with Chap. 10 by Amelia Fiske, Peter Henningsen and Alena Buyx (The Implications of Embodied Artificial Intelligence in Mental Healthcare for Digital Wellbeing); followed by Chap. 11 by Nick Byrd (Causal Network Accounts of Ill-Being: Depression & Digital Well-Being); and Chap. 12 by Michael Thornton (Malware as the Causal Basis of Disease). Many ethical challenges are intertwined with these developments, some of which are discussed in the aforementioned chapters (e.g., ensuring adequate data protection when dealing with big datasets; developing novel provisions for harm prevention). However, the broad, collective scope of these chapters also helps to draw our attention to another significant challenge: how to establish when a legitimate basis for an automated intervention has been secured. Or, to put it another way, how can we establish whether and when there is a right to intervene on the basis of an automated decision? A few clarificatory remarks are in order. By now, it is well known that digital technologies, such as automated decision-making systems,4 have enabled clinical researchers and practitioners to augment their assessment, diagnostic, and treatment capabilities by leveraging algorithmically-derived insights from large-scale datasets (e.g. The Topol Review Board 2019; Watson et al. 2019; Dwyer et al. 2018; Morley et al. 2019). However, the use of these technologies outside of formal healthcare systems (e.g. in contexts such as education, employment, and financial services), and the corresponding ethical and public health challenges that arise from this deployment, are not as well appreciated (Burr et al. 2020a, b). 
For instance, school administrators are using predictive analytics and social media data to identify vulnerable students who may need additional support (Watson and Christensen 2017), and financial services firms have used artificial intelligence to proactively detect consumers who may experience additional financial difficulties caused by their mental health issues (Evans 2019). While these developments may lead to more proactive and personalised support, the transition away from clinical settings also raises several ethical challenges (see Palm 2013), including the question of whether there is a legitimate basis for intervening. To understand why this is important, it is helpful to contrast the clinical use of automated decision-making systems with non-clinical uses. In both cases, an automated decision can serve to establish a risk assessment or diagnosis of an individual and subsequently inform or select an intervention on the basis of these algorithmically-derived insights (e.g. nudging a user who experiences a dip in

4  To clarify, our use of the term ‘automated decision-making systems’ is intended to be inclusive of systems that are fully automated (i.e. not requiring human oversight) and also decision support tools that keep a human-in-the-loop. Furthermore, we treat the act of classification as a decision (e.g. the classification of disease on the basis of a radiology image).


attention and engagement).5 This is problematic because, in clinical practice, interventions are typically decided upon following a process of participatory decision-making between the healthcare professional and their patient, due to the value-laden nature of health and well-being (Beauchamp and Childress 2013). Among other things, this process requires an assessment by the healthcare professional of the proportional risk associated with the intervention and the informed consent of the patient (Faden and Beauchamp 1986). It is currently unclear how automated decision-making systems should be incorporated into the process of participatory decision-making even within clinical settings (Morley and Floridi 2019b). Therefore, it is not possible to simply transpose existing bioethical guidance into non-clinical settings, even as a starting point for further ethical analysis. This is a vital and, in our opinion, unaddressed issue in the ongoing debate and discussion on informed consent in an age of big data and artificial intelligence. To clarify, informed consent in clinical decision-making is typically viewed as a morally transformative procedure that provides a normative justification for an act, such as a clinical intervention that carries an associated risk of harm (Kim 2019). The normative legitimacy of the informed consent process rests in part on the professional accountability established by formal healthcare systems and on the successful communication between healthcare professional and patient (Manson and O’Neill 2007), which in mental healthcare often requires ongoing explanation throughout treatment plans for chronic illnesses (e.g. depression). Accountability and explainability are, therefore, vital components of informed consent in mental healthcare but are currently poorly represented in digital health (Watson et al. 2019). 
Guidelines and frameworks are currently being developed to help ensure the accountable design, development, and use of digital health tools (Henson et al. 2019; Torous et al. 2019; Morley and Floridi 2019a). However, these developments will not easily transpose into non-clinical settings where comparable mechanisms of accountability and behavioural norms are lacking (Mittelstadt 2019). While there is, in principle, no a priori reason to doubt that such mechanisms could be established in non-clinical settings (e.g. education, criminal justice), the contextual nature of mental health diagnosis and treatment—often emphasised by reference to the ‘biopsychosocial model’ (Burns 2014)—means that separate procedures will likely be required for each social domain where digital health tools are used to automate some part of a health intervention. Additionally, there are also conceptual issues to address if we hope to have a robust account of what constitutes an intervention in the first place. For instance, as Michael Thornton notes in Chap. 12, novel digital technologies pose a challenge to existing conceptual accounts of ‘health’ and the boundaries of the body, as many devices can extend or augment human capabilities, thus placing pressure on our

5  This example is based on the work of a research group at MIT’s Media Lab, who have developed a product (AttentivU) that seeks to improve attention through real-time monitoring of a user’s engagement, using a head-mounted device that uses physiological sensors (i.e. electroencephalography) to measure engagement (Kosmyna et al. 2019).


existing theoretical concepts. It will be important, therefore, to develop robust accounts that are fit-for-purpose. This extends to concepts such as ‘psychological integrity’, which needs to be critically analysed if we are to make sense of the ethical significance of interventions that are more informational in nature (e.g. personalised recommendations for diet or lifestyle choices, or algorithmically-derived nudges). Whereas the notion of bodily integrity is central to extant bioethical theories, comparatively less has been written about the normative status of interventions that impact an individual’s mental or psychological integrity,6 despite protection for such integrity being established in Article 8 of the UK Human Rights Act 1998 and Article 3 of the EU Charter of Fundamental Rights. This is in part because of the close connection this usage has to existing theories of informed consent, personal autonomy and self-determination. However, these perspectives are insufficient when we reflect on the changing nature of concepts such as self-determination, autonomy, and informed consent in an age where the boundaries and interactions between human users and artificial agents are increasingly blurred (Floridi 2014). It will, therefore, be vital to address these conceptual, ethical, and legal challenges if we are to develop satisfactory guidelines and frameworks that can govern the use of automated interventions on individual and social health and well-being.

1.3.3  Sustainable Co-Well-Being

Derek Parfit (1984) famously offered a series of thought experiments concerning so-called “harmless torturers”, designed to query our intuitions about the possibility of imperceptible harms and benefits. We can reconstruct these thought experiments as a way to pump our intuitions about how the design of sociotechnical systems challenges ethical concepts such as ‘responsibility’, and whether greater reflection on principles like sustainability can help overcome these difficulties. First, and in line with Parfit’s original thought experiment, imagine you enter a room and see an individual strapped to a chair, connected to various pads and wires that are designed to deliver an electric current to the victim. In front of you there is a dial with numbers ranging from 1 to 1000 that controls the electric current. You turn the dial by a single increment, increasing the electrical current so slightly that the victim is unable to perceive any difference in intensity. While certainly not a morally praiseworthy action, you are unlikely to be reprimanded for causing any harm to the individual concerned. However, we now run the thought experiment for a second time, and in this alternate scenario you turn the dial by one increment at the same time as 999 other people turn similarly connected dials by one increment each. The net result of this collective action is an intensely painful electric shock

6  A notable exception seems to be the literature on neuroethics (e.g. deep brain stimulation or direct brain interventions). However, it is more common to frame these discussions in terms of standard bioethical principles such as autonomy or informed consent (Pugh et al. 2017; Craig 2016).


that ends up killing the restrained victim. Your individual action has not changed between these two scenarios, but the relation in which your action stands to the actions of the other 999 individuals has altered drastically: you have now contributed to the death of another human being. Next, consider the following scenario. You are waiting for a bus, tired from a long day at work. You are mindlessly scrolling through a list of possible videos that have been presented to you by a recommendation system that powers your video streaming app. You select a video of a fiery argument between two political pundits, in which one of them “destroys” their interlocutor. Ordinarily you would avoid selecting such a video, knowing that it is likely to be needlessly polarising and sensationalist. However, you’re tired and occasionally enjoy a spectacle as much as everyone else. Unfortunately, at a similar time, 999 other individuals with similar viewing histories to yours also click on the same recommended video. The effect is that the recommendation system learns that users similar to yourself and the 999 other individuals are likely to click on videos of this nature towards the end of the day. As such, in the future it will be more likely to recommend similarly low-quality, politically polarising videos to other users. While not as harmful, or morally reprehensible, as the death of an individual, this example nevertheless demonstrates that certain technologies—whether electric chairs or recommendation systems—have the potential to alter the moral status of our actions when they stand in a particular relation to the complementary actions of other individuals. However, what is the particular lesson for digital well-being that we should draw from this example? To begin, it is important to avoid the false charge that we are merely suggesting that individual users must take greater responsibility for their actions online. 
The actions of the 1000 users impact subsequent recommendations in virtue of how the recommender system’s architecture is designed.7 Therefore, while the users do have a responsibility for their actions, it is a collective responsibility (similar in nature to Parfit’s harmless torturers) that emerges as a result of the interactions between the users and the system’s architecture. These interwoven interactions form complex sociotechnical systems, which connect human users and constrain their actions in important ways, leading to what we have elsewhere described as a form of ‘distributed moral responsibility’ (Floridi 2016b). While it is immensely challenging to foresee the consequences and emergent effects of complex sociotechnical systems like recommender systems, ethical principles can serve as deliberative prompts for thinking through the ethical challenges. As such, they can offer designers a dual-advantage of identifying key opportunities to increase social value, while anticipating and avoiding costly risks (Floridi et al. 2018). One such principle we wish to propose is the need to orientate the design of digital technologies towards sustainable and communal well-being (hereafter, ‘sustainable co-well-being’). While this principle needs explaining and unpacking, the

7  Technically, this is known as ‘collaborative filtering’, which is a method for using the collaborative actions of users (e.g., which videos they watch, how long they watch them for, and what rating they give them) as ‘implicit feedback’ to train a recommender system (see Burr et al. 2018; Milano et al. 2020 for further discussion).


focus on ‘well-being’ should be straightforward. As noted in Sect. 1.2, well-being is an intrinsic good at which much of human behaviour is directed. Therefore, although there is disagreement about what objects, activities, or states of the world bear prudential value, that well-being is a goal in itself, and not merely an instrumental means to other goods, is relatively uncontroversial. To put it another way, well-being (or “the good life”), regardless of how it is understood at a subjective level, is an intrinsic good that system design should orientate towards.8 The ‘communal’ and ‘sustainable’ aspects of this principle require a bit more explanation. As we learn more about the relationship between well-being and digital technologies, the role played by design is emerging as crucial, not just in terms of individual fulfilment, but also, and perhaps even more significantly, in terms of communal well-being, or co-well-being. Precisely because it may be difficult to reach final conclusions about absolute thresholds or values of digital well-being, strategies to rectify and improve solutions already adopted will need to be considered as necessary. For part of any form of well-being consists in knowing that its erosion may not be irreversible. And the socialisation of well-being, increasingly stressed by its dependence on digital technologies, will emphasise the socio-political aspects of co-well-being in ways probably unprecedented. In a world so connected, globalised, and mutually dependent, no discourse on digital well-being can reasonably focus on individuals in isolation. Similar thoughts on these topics are explored by Loi in Chap. 5 (What Contribution Can Philosophy Provide to Studies of Digital Well-Being?), where aspects of co-well-being are present in the different concepts of digital well-being that are explored towards the end of his contribution in this collection. 
Other frameworks and accounts have already emphasised the ethical significance of sustainability in design (see Floridi et al. 2018; Jobin et al. 2019). In addition, Calvo et al. in Chap. 2 of this collection note the following regarding sustainable or circular design: “Just as we need to design in ways that preserve the natural environment for our survival, digital technologies, like YouTube, need to be designed in ways that minimise negative impact on individuals and societies to preserve a ‘sustainable’ social environment.” The sustainability part of our principle is similarly intended as a deliberative prompt to direct attention to the ethical significance of various design choices, while keeping a clear goal in mind (i.e. the sustainable promotion of co-well-being). This complementary focus could help steer design choices and strike a balance between mitigating key risks and maximising opportunities. For instance, the risks of unsustainable design could include lock-in to a system that, while valuable on the basis of some outcome measures (e.g. entertainment, revenue), may nevertheless propagate bias, entrench social inequities, or create tensions between users and designers over questions such as responsibility and accountability. By considering the need for sustainability at the outset, designers may be able to consider more agile or fluid solutions, which can adapt to shifting

8  This point is clearly demonstrated by the widespread adoption of well-being in recent frameworks or guidelines for ethical technology design, including AI (see Floridi et al. 2018).


values that change over an individual life course and across societal shifts. It is worth remarking, in particular, that digital contexts enable forms of reversibility that are unknown in analogue contexts but often underused. Returning to our previous example of the political video, one may notice the unwelcome training of the recommender system due to an unfortunate synchronisation of choices and could intervene to adjust or even reverse such training. More concretely, one may imagine clicking on a YouTube video, disliking it, and “taking back” the click, both for oneself and for others, thus avoiding both the wrong training and the “winner take all” effect of more clicks attracting even more clicks. At the time of writing, this option is not available, but it is trivially feasible, technically speaking. As stated at the outset of this section, these themes are intended as a starting point for further discussion. Nevertheless, we hope that they will provide an informative basis for further engagement with the subsequent chapters in this collection. If we have been able to generate some positivity or excitement in just a small handful of researchers, we will be content (and grateful) that we have contributed to a small increase in overall well-being.
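To make the synchronised-clicks dynamic and the “take back the click” reversibility concrete, here is a minimal, purely hypothetical sketch of a popularity-based recommender trained on implicit feedback. The class, method names, and group labels are our own simplification for illustration, not any real platform’s collaborative-filtering system:

```python
from collections import defaultdict

class ImplicitFeedbackRecommender:
    """Toy recommender: ranks items for a user group by accumulated
    clicks (implicit feedback). A simplified stand-in for collaborative
    filtering, not a real production system."""

    def __init__(self):
        # clicks[group][item] -> number of clicks from users in that group
        self.clicks = defaultdict(lambda: defaultdict(int))

    def record_click(self, group, item):
        # Each click is treated as implicit positive feedback.
        self.clicks[group][item] += 1

    def undo_click(self, group, item):
        # The hypothetical 'take back the click' operation: reversing
        # one's contribution to the system's training signal.
        if self.clicks[group][item] > 0:
            self.clicks[group][item] -= 1

    def recommend(self, group):
        # Recommend the item with the most accumulated feedback.
        items = self.clicks[group]
        return max(items, key=items.get) if items else None


rec = ImplicitFeedbackRecommender()
rec.record_click("evening-commuters", "documentary")
for _ in range(1000):  # 1,000 similar users each click the polarising video
    rec.record_click("evening-commuters", "polarising-debate")
print(rec.recommend("evening-commuters"))  # prints "polarising-debate"
```

Because every click feeds the same counter, the 1,000 synchronised choices shift future recommendations for the entire group; calling `undo_click` the same number of times would restore the earlier ranking, which is exactly the form of reversibility the text envisages.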

References

Abukonna, Ahmed, Xiaolin Yu, Chong Zhang, and Jianbao Zhang. 2013. Volitional Control of the Heart Rate. International Journal of Psychophysiology 90 (2): 143–148. https://doi.org/10.1016/j.ijpsycho.2013.06.021.
Alexandrova, Anna. 2017. A Philosophy for the Science of Well-Being. New York: Oxford University Press.
Allen, Summer. 2018. The Science of Gratitude. UC Berkeley: Greater Good Science Center. https://ggsc.berkeley.edu/images/uploads/GGSC-JTF_White_Paper-Gratitude-FINAL.pdf.
Anesti, Nikoleta, Ana Beatriz Galvão, and Silvia Miranda-Agrippino. 2018. Uncertain Kingdom: Nowcasting GDP and Its Revisions. Bank of England Staff Working Paper 764. https://www.bankofengland.co.uk/working-paper/2018/uncertain-kingdom-nowcasting-gdp-and-its-revisions.
Beauchamp, Tom L., and James F. Childress. 2013. Principles of Biomedical Ethics. 7th ed. New York: Oxford University Press.
Binmore, Ken. 2008. Rational Decisions. 1st ed. Princeton: Princeton University Press.
Bishop, Michael. 2015. The Good Life: Unifying the Philosophy and Psychology of Well-Being. New York: Oxford University Press.
Brynjolfsson, Erik, Avinash Collis, and Felix Eggers. 2019. Using Massive Online Choice Experiments to Measure Changes in Well-Being. Proceedings of the National Academy of Sciences of the United States of America 116 (15): 7250–7255.
Burke, Moira, and Robert E. Kraut. 2016. The Relationship Between Facebook Use and Well-Being Depends on Communication Type and Tie Strength. Journal of Computer-Mediated Communication 21 (4): 265–281. https://doi.org/10.1111/jcc4.12162.
Burns, Tom. 2014. Our Necessary Shadow: The Nature and Meaning of Psychiatry. London: Penguin.
Burr, Christopher, and Nello Cristianini. 2019. Can Machines Read Our Minds? Minds and Machines, March. https://doi.org/10.1007/s11023-019-09497-4.


Burr, Christopher, Nello Cristianini, and James Ladyman. 2018. An Analysis of the Interaction Between Intelligent Software Agents and Human Users. Minds and Machines 28 (4): 735–774. https://doi.org/10.1007/s11023-018-9479-0.
Burr, C., J. Morley, M. Taddeo, and L. Floridi. 2020a. Digital Psychiatry: Risks and Opportunities for Public Health and Wellbeing. IEEE Transactions on Technology and Society 1 (1): 21–33. https://doi.org/10.1109/TTS.2020.2977059.
Burr, Christopher, Mariarosaria Taddeo, and Luciano Floridi. 2020b. The Ethics of Digital Well-Being: A Thematic Review. Science and Engineering Ethics, January. https://doi.org/10.1007/s11948-020-00175-8.
Calvo, Rafael A., and Dorian Peters. 2014. Positive Computing: Technology for Well-Being and Human Potential. Cambridge, MA: The MIT Press.
Calvo, Rafael A., Sidney D’Mello, Jonathan Gratch, and Arvid Kappas. 2015. The Oxford Handbook of Affective Computing. Oxford: Oxford Library of Psychology.
Challen, Robert, Joshua Denny, Martin Pitt, Luke Gompels, Tom Edwards, and Krasimira Tsaneva-Atanasova. 2019. Artificial Intelligence, Bias and Clinical Safety. BMJ Quality and Safety 28 (3): 231–237. https://doi.org/10.1136/bmjqs-2018-008370.
Craig, Jared N. 2016. Incarceration, Direct Brain Intervention, and the Right to Mental Integrity – A Reply to Thomas Douglas. Neuroethics 9 (2): 107–118. https://doi.org/10.1007/s12152-016-9255-x.
Crisp, Roger. 2006. Hedonism Reconsidered. Philosophy and Phenomenological Research 73 (3): 619–645. https://doi.org/10.2307/40041013.
———. 2017. Well-Being. In The Stanford Encyclopedia of Philosophy, Fall 2017 ed., ed. Edward N. Zalta. Stanford: Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/fall2017/entries/well-being/.
Csikszentmihalyi, Mihaly. 2008. Flow: The Psychology of Optimal Experience. 1st Harper Perennial Modern Classics ed. New York: Harper Perennial.
Diener, Ed, and Martin E.P. Seligman. 2004. Beyond Money: Toward an Economy of Well-Being. Psychological Science in the Public Interest 5 (1): 1–31.
Diener, E.D., Robert A. Emmons, Randy J. Larsen, and Sharon Griffin. 1985. The Satisfaction with Life Scale. Journal of Personality Assessment 49 (1): 71–75.
Dwyer, Dominic B., Peter Falkai, and Nikolaos Koutsouleris. 2018. Machine Learning Approaches for Clinical Psychology and Psychiatry. Annual Review of Clinical Psychology 14 (1): 91–118. https://doi.org/10.1146/annurev-clinpsy-032816-045037.
Elder, Alexis. 2014. Excellent Online Friendships: An Aristotelian Defense of Social Media. Ethics and Information Technology 16 (4): 287–297.
Ellis, David A., Brittany I. Davidson, Heather Shaw, and Kristoffer Geyer. 2019. Do Smartphone Usage Scales Predict Behavior? International Journal of Human-Computer Studies 130 (October): 86–92. https://doi.org/10.1016/J.IJHCS.2019.05.004.
Evans, Katie. 2019. Financial Transactions Data, AI and Mental Health: The Challenge. Money and Mental Health Policy Institute. January 2019. https://www.moneyandmentalhealth.org/wp-content/uploads/2019/01/FCA-financial-transactions-data-discussion-note-footnotesat-end.pdf.
Faden, Ruth R., and Tom L. Beauchamp. 1986. A History and Theory of Informed Consent. New York: Oxford University Press.
Ferguson, Christopher J. 2009. An Effect Size Primer: A Guide for Clinicians and Researchers. Professional Psychology: Research and Practice 40 (5): 532–538. https://doi.org/10.1037/a0015808.
Ferretti, Agata, Elettra Ronchi, and Effy Vayena. 2019. From Principles to Practice: Benchmarking Government Guidance on Health Apps. The Lancet Digital Health 1 (2): e55–e57. https://doi.org/10.1016/S2589-7500(19)30027-5.
Fletcher, Guy. 2016. The Philosophy of Well-Being: An Introduction. London: Routledge.
Floridi, Luciano, ed. 2010. The Cambridge Handbook of Information and Computer Ethics. Cambridge: Cambridge University Press.


———. 2014. The Fourth Revolution: How the Infosphere Is Reshaping Human Reality. Oxford: Oxford University Press.
———. 2016a. Tolerant Paternalism: Pro-Ethical Design as a Resolution of the Dilemma of Toleration. Science and Engineering Ethics 22 (6): 1669–1688. https://doi.org/10.1007/s11948-015-9733-2.
———. 2016b. Faultless Responsibility: On the Nature and Allocation of Moral Responsibility for Distributed Moral Actions. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 374 (2083): 20160112. https://doi.org/10.1098/rsta.2016.0112.
Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, et al. 2018. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines 28 (4): 689–707. https://doi.org/10.1007/s11023-018-9482-5.
Frijda, Nico H., Peter Kuipers, and Elisabeth Ter Schure. 1989. Relations Among Emotion, Appraisal, and Emotional Action Readiness. Journal of Personality and Social Psychology 57 (2): 212.
Fröding, Barbro, and Martin Peterson. 2012. Why Virtual Friendship Is No Genuine Friendship. Ethics and Information Technology 14 (3): 201–207.
Galbraith, John W., and Greg Tkacz. 2018. Nowcasting with Payments System Data. International Journal of Forecasting 34 (2): 366–376.
Garde-Hansen, Joanne. 2009. MyMemories?: Personal Digital Archive Fever and Facebook. In Save As… Digital Memories, 135–150. London: Springer.
Ghafur, S., S. Kristensen, K. Honeyford, G. Martin, A. Darzi, and P. Aylin. 2019. A Retrospective Impact Analysis of the WannaCry Cyberattack on the NHS. Npj Digital Medicine 2 (1): 98. https://doi.org/10.1038/s41746-019-0161-6.
Gluckman, Sir Peter, and Kristiann Allen. 2018. Understanding Wellbeing in the Context of Rapid Digital and Associated Transformations. The International Network for Government Science Advice. https://www.ingsa.org/wp-content/uploads/2018/10/INGSA-Digital-WellbeingSept18.pdf.
Goldacre, Ben, Caroline E. Morton, and Nicholas J. DeVito. 2019. Why Researchers Should Share Their Analytic Code. BMJ 367 (November): l6365. https://doi.org/10.1136/bmj.l6365.
Greaves, Felix, Indra Joshi, Mark Campbell, Samantha Roberts, Neelam Patel, and John Powell. 2018. What Is an Appropriate Level of Evidence for a Digital Health Intervention? The Lancet 392 (10165): 2665–2667. https://doi.org/10.1016/S0140-6736(18)33129-5.
Hamelijnck, Oliver, Theodoros Damoulas, Kangrui Wang, and Mark Girolami. 2019. Multi-Resolution Multi-Task Gaussian Processes. In Thirty-Third Conference on Neural Information Processing Systems, Canada, 8–14 Dec 2019.
Hausman, Daniel. 2015. Valuing Health: Well-Being, Freedom, and Suffering. New York: Oxford University Press.
Haybron, Daniel M. 2008. The Pursuit of Unhappiness: The Elusive Psychology of Well-Being. Oxford: Oxford University Press.
Haybron, Daniel M., and Valerie Tiberius. 2015. Well-Being Policy: What Standard of Well-Being? Journal of the American Philosophical Association 1 (4): 712–733. https://doi.org/10.1017/apa.2015.23.
Henson, Philip, Gary David, Karen Albright, and John Torous. 2019. Deriving a Practical Framework for the Evaluation of Health Apps. The Lancet Digital Health 1 (2): e52–e54. https://doi.org/10.1016/S2589-7500(19)30013-5.
Hills, Thomas, Eugenio Proto, Daniel Sgroi, and Chanuki Illushka Seresinhe. 2019. Historical Analysis of National Subjective Wellbeing Using Millions of Digitized Books. Nature Human Behaviour 3: 1271–1275.
Howard, Don. 2018. Technomoral Civic Virtues: A Critical Appreciation of Shannon Vallor’s Technology and the Virtues. Philosophy & Technology 31 (2): 293–304. https://doi.org/10.1007/s13347-017-0283-1.


Huppert, Felicia A., and Timothy T.C. So. 2013. Flourishing Across Europe: Application of a New Conceptual Framework for Defining Well-Being. Social Indicators Research 110 (3): 837–861. https://doi.org/10.1007/s11205-011-9966-7.
Jensen, Michaeline, Madeleine J. George, Michael R. Russell, and Candice L. Odgers. 2019. Young Adolescents’ Digital Technology Use and Mental Health Symptoms: Little Evidence of Longitudinal or Daily Linkages. Clinical Psychological Science, August. https://doi.org/10.1177/2167702619859336.
Jeske, Diane. 2019. Friendship and Social Media: A Philosophical Exploration. Abingdon/Oxford/New York: Routledge.
Jobin, Anna, Marcello Ienca, and Effy Vayena. 2019. The Global Landscape of AI Ethics Guidelines. Nature Machine Intelligence 1 (9): 389–399. https://doi.org/10.1038/s42256-019-0088-2.
Justinia, T. 2017. The UK’s National Programme for IT: Why Was It Dismantled? Health Services Management Research 30 (1): 2–9. https://doi.org/10.1177/0951484816662492.
Kahneman, Daniel. 2011. Thinking, Fast and Slow. London: Penguin.
Kim, Nancy S. 2019. Consentability: Consent and Its Limits. Cambridge: Cambridge University Press.
Kosmyna, Nataliya, Caitlin Morris, Utkarsh Sarawgi, Thanh Nguyen, and Pattie Maes. 2019. AttentivU: A Wearable Pair of EEG and EOG Glasses for Real-Time Physiological Processing. In 2019 IEEE 16th International Conference on Wearable and Implantable Body Sensor Networks (BSN), 1–4. Chicago: IEEE. https://doi.org/10.1109/BSN.2019.8771080.
Layous, Kristin, S. Katherine Nelson, Jaime L. Kurtz, and Sonja Lyubomirsky. 2017. What Triggers Prosocial Effort? A Positive Feedback Loop between Positive Activities, Kindness, and Well-Being. The Journal of Positive Psychology 12 (4): 385–398. https://doi.org/10.1080/17439760.2016.1198924.
Leigh, Simon, and Liz Ashall-Payne. 2019. The Role of Health-Care Providers in MHealth Adoption. The Lancet Digital Health 1 (2): e58–e59.
Lin, Eden. 2017. Enumeration and Explanation in Theories of Welfare. Analysis 77 (1): 65–73. https://doi.org/10.1093/analys/anx035.
Lupton, Deborah. 2016. The Quantified Self. Hoboken: Wiley.
Ma, Lawrence K., Richard J. Tunney, and Eamonn Ferguson. 2017. Does Gratitude Enhance Prosociality?: A Meta-Analytic Review. Psychological Bulletin 143 (6): 601–635. https://doi.org/10.1037/bul0000103.
Mackey, Tim K., and Gaurvika Nayyar. 2016. Digital Danger: A Review of the Global Public Health, Patient Safety and Cybersecurity Threats Posed by Illicit Online Pharmacies. British Medical Bulletin 118 (1): 110–126. https://doi.org/10.1093/bmb/ldw016.
Manson, Neil C., and Onora O’Neill. 2007. Rethinking Informed Consent in Bioethics. Cambridge/New York: Cambridge University Press.
Milano, Silvia, Mariarosaria Taddeo, and Luciano Floridi. 2020. Recommender Systems and Their Ethical Challenges. AI and Society, February. https://doi.org/10.1007/s00146-020-00950-y.
Mittelstadt, Brent. 2019. Principles Alone Cannot Guarantee Ethical AI. Nature Machine Intelligence 1 (11): 501–507. https://doi.org/10.1038/s42256-019-0114-4.
Morley, Jessica, and Luciano Floridi. 2019a. How to Design a Governable Digital Health Ecosystem. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3424376.
———. 2019b. NHS AI Lab: Why We Need to Be Ethically Mindful About AI for Healthcare. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3445421.
———. 2019c. The Limits of Empowerment: How to Reframe the Role of MHealth Tools in the Healthcare Ecosystem. Science and Engineering Ethics, June. https://doi.org/10.1007/s11948-019-00115-1.
Morley, J., and I. Joshi. 2019. Developing Effective Policy to Support Artificial Intelligence in Health and Care. Eurohealth 25 (2): 11–14. https://apps.who.int/iris/bitstream/handle/10665/326127/Eurohealth-V25-N2-2019-eng.pdf?sequence=1&isAllowed=y.
Morley, Jessica, Caio Machado, Christopher Burr, Josh Cowls, Mariarosaria Taddeo, and Luciano Floridi. 2019. The Debate on the Ethics of AI in Health Care: A Reconstruction and Critical Review. SSRN Scholarly Paper ID 3486518. Rochester: Social Science Research Network. https://papers.ssrn.com/abstract=3486518.
Munafò, Marcus R., and George Davey Smith. 2018. Robust Research Needs Many Lines of Evidence. Nature 553 (7689): 399–401. https://doi.org/10.1038/d41586-018-01023-3.
Munafò, Marcus R., Brian A. Nosek, Dorothy V.M. Bishop, Katherine S. Button, Christopher D. Chambers, Nathalie Percie du Sert, Uri Simonsohn, Eric-Jan Wagenmakers, Jennifer J. Ware, and John P.A. Ioannidis. 2017. A Manifesto for Reproducible Science. Nature Human Behaviour 1 (1): 0021. https://doi.org/10.1038/s41562-016-0021.
Nussbaum, Martha C. 2011. Creating Capabilities. Cambridge, MA/London: Harvard University Press.
Nussbaum, Martha, and Amartya Sen. 1993. The Quality of Life. New York: Oxford University Press.
OECD. 2019. How’s Life in the Digital Age? Opportunities and Risks of the Digital Transformation for People’s Well-Being. Organisation for Economic Co-operation and Development. https://www.oecd-ilibrary.org/science-and-technology/how-s-life-in-the-digital-age_9789264311800-en.
Ofcom. 2018. Adults’ Media Use and Attitudes Report. The Office of Communications. https://www.ofcom.org.uk/__data/assets/pdf_file/0011/113222/Adults-Media-Use-and-Attitudes-Report-2018.pdf.
Orben, Amy. 2019. Teens, Screens and Well-Being: An Improved Approach. PhD Thesis, University of Oxford. https://ora.ox.ac.uk/objects/uuid:198781ae-35b8-4898-b482-8df7201b59e1/download_file?file_format=pdf&safe_filename=_main.pdf&type_of_work=Thesis.
Orben, Amy, and Andrew K. Przybylski. 2019. The Association between Adolescent Well-Being and Digital Technology Use. Nature Human Behaviour 3 (2): 173–182. https://doi.org/10.1038/s41562-018-0506-1.
Orben, Amy, Tobias Dienlin, and Andrew K. Przybylski. 2019. Social Media’s Enduring Effect on Adolescent Life Satisfaction. Proceedings of the National Academy of Sciences of the United States of America 116 (21): 10226–10228. https://doi.org/10.1073/pnas.1902058116.
Palm, Elin. 2013. Who Cares? Moral Obligations in Formal and Informal Care Provision in the Light of ICT-Based Home Care. Health Care Analysis 21 (2): 171–188.
Parfit, Derek. 1984. Reasons and Persons. Oxford: Oxford University Press.
Pugh, Jonathan, Hannah Maslen, and Julian Savulescu. 2017. Deep Brain Stimulation, Authenticity and Value. Cambridge Quarterly of Healthcare Ethics 26 (4): 640–657. https://doi.org/10.1017/S0963180117000147.
Reeves, Byron, Nilam Ram, Thomas N. Robinson, James J. Cummings, C. Lee Giles, Jennifer Pan, Agnese Chiatti, et al. 2019. Screenomics: A Framework to Capture and Analyze Personal Life Experiences and the Ways That Technology Shapes Them. Human–Computer Interaction, March, 1–52. https://doi.org/10.1080/07370024.2019.1578652.
Robeyns, Ingrid. 2005. The Capability Approach: A Theoretical Survey. Journal of Human Development 6 (1): 93–117.
Ryan, R.M., and E.L. Deci. 2001. On Happiness and Human Potentials: A Review of Research on Hedonic and Eudaimonic Well-Being. Annual Review of Psychology 52: 141.
Ryan, Richard M., and Edward L. Deci. 2017. Self-Determination Theory: Basic Psychological Needs in Motivation, Development, and Wellness. New York: Guilford Press.
Ryff, Carol D. 1989. Happiness Is Everything, or Is It? Explorations on the Meaning of Psychological Well-Being. Journal of Personality and Social Psychology 57 (6): 13.
Seligman, Martin E.P., and Mihaly Csikszentmihalyi. 2000. Positive Psychology: An Introduction. American Psychologist 55 (1): 5–14.
Sharp, Robert. 2012. The Obstacles Against Reaching the Highest Level of Aristotelian Friendship Online. Ethics and Information Technology 14 (3): 231–239.
Sterckx, S., V. Rakic, J. Cockbain, and P. Borry. 2016. “You Hoped We Would Sleep Walk into Accepting the Collection of Our Data”: Controversies Surrounding the UK Care.data Scheme and Their Wider Relevance for Biomedical Research. Medicine, Health Care and Philosophy 19 (2): 177–190. https://doi.org/10.1007/s11019-015-9661-6.


Sterelny, Kim. 2003. Thought in a Hostile World: The Evolution of Human Cognition. Malden: Blackwell.
Stiglitz, Joseph, Amartya Sen, and Jean-Paul Fitoussi. 2008. Report by the Commission on the Measurement of Economic Performance and Social Progress. https://ec.europa.eu/eurostat/documents/118025/118123/Fitoussi+Commission+report.
Stokes, Patrick. 2012. Ghosts in the Machine: Do the Dead Live on in Facebook? Philosophy & Technology 25 (3): 363–379.
Sumner, Leonard W. 1996. Welfare, Happiness, and Ethics. Oxford: Clarendon Press.
The Topol Review Board. 2019. The Topol Review: Preparing the Healthcare Workforce to Deliver the Digital Future. Health Education England. topol.hee.nhs.uk.
Themistocleous, M., and V. Morabito. 2012. How Can User-Centred Design Affect the Acceptance and Adoption of Service Oriented Healthcare Information Systems? International Journal of Healthcare Technology and Management 13 (5–6): 321–344. https://doi.org/10.1504/IJHTM.2012.052550.
Tiberius, Valerie. 2015. Prudential Value. In The Oxford Handbook of Value Theory, ed. Iwao Hirose and Jonas Olson, 158–174. New York: Oxford University Press.
Torous, John, Gerhard Andersson, Andrew Bertagnoli, Helen Christensen, Pim Cuijpers, Joseph Firth, Adam Haim, et al. 2019. Towards a Consensus Around Standards for Smartphone Apps and Digital Mental Health. World Psychiatry 18 (1): 97–98. https://doi.org/10.1002/wps.20592.
UK Government. 2019. Online Harms White Paper. UK Government. https://www.gov.uk/government/consultations/online-harms-white-paper.
United Nations. 2019. The Sustainable Development Agenda. United Nations Sustainable Development Goals (blog). https://www.un.org/sustainabledevelopment/development-agenda/.
Vallor, Shannon. 2012. Flourishing on Facebook: Virtue Friendship & New Social Media. Ethics and Information Technology 14 (3): 185–199. https://doi.org/10.1007/s10676-010-9262-2.
———. 2016. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. New York: Oxford University Press.
Verduyn, Philippe, Oscar Ybarra, Maxime Résibois, John Jonides, and Ethan Kross. 2017. Do Social Network Sites Enhance or Undermine Subjective Well-Being? A Critical Review. Social Issues and Policy Review 11 (1): 274–302. https://doi.org/10.1111/sipr.12033.
Vidgen, Bertie, Helen Margetts, and Alex Harris. 2019. How Much Online Abuse Is There? A Systematic Review of Evidence for the UK. The Alan Turing Institute. https://www.turing.ac.uk/research/research-programmes/public-policy/programme-articles/how-much-online-abuse-there.
Warwick Machine Learning Group. 2019. London Air Quality. The Alan Turing Institute. https://www.turing.ac.uk/research/research-projects/london-air-quality.
Watson, Ryan J., and John L. Christensen. 2017. Big Data and Student Engagement among Vulnerable Youth: A Review. Current Opinion in Behavioral Sciences 18 (December): 23–27. https://doi.org/10.1016/j.cobeha.2017.07.004.
Watson, David S., Jenny Krutzinna, Ian N. Bruce, Christopher E.M. Griffiths, Iain B. McInnes, Michael R. Barnes, and Luciano Floridi. 2019. Clinical Applications of Machine Learning Algorithms: Beyond the Black Box. BMJ, March, l886. https://doi.org/10.1136/bmj.l886.
Williams, James. 2018. Stand Out of Our Light: Freedom and Resistance in the Attention Economy. Cambridge: Cambridge University Press.
Wood, Alex M., John Maltby, Raphael Gillett, P. Alex Linley, and Stephen Joseph. 2008. The Role of Gratitude in the Development of Social Support, Stress, and Depression: Two Longitudinal Studies. Journal of Research in Personality 42 (4): 854–871. https://doi.org/10.1016/j.jrp.2007.11.003.
Woodard, Christopher. 2013. Classifying Theories of Welfare. Philosophical Studies 165 (3): 17.
Woolley, Sandra, and Tim Collins. 2019. Some Heart-Rate Monitors Give Less Reliable Readings for People of Colour. The Conversation, 8 January 2019. http://theconversation.com/some-heart-rate-monitors-give-less-reliable-readings-for-people-of-colour-121007.


World Health Organisation. 2019. Constitution. https://www.who.int/about/who-we-are/constitution.
Yost-Dubrow, Rachel, and Yarrow Dunham. 2018. Evidence for a Relationship between Trait Gratitude and Prosocial Behaviour. Cognition and Emotion 32 (2): 397–403. https://doi.org/10.1080/02699931.2017.1289153.

Christopher Burr is a Philosopher of Cognitive Science and Artificial Intelligence. He is a Senior Research Associate at the Alan Turing Institute and a Research Associate at the Digital Ethics Lab, Oxford Internet Institute, University of Oxford. His current research explores philosophical and ethical issues related to data-driven technologies, including the opportunities and risks that such technologies have for mental health and wellbeing. A primary goal of this research is to develop frameworks and guidance to support the governance, responsible innovation and sustainable use of data-driven technology within a digital society. To support this goal, he has worked with a number of public sector bodies and organisations, including NHSx; the UK Government’s Department for Health and Social Care; the Department for Digital, Culture, Media and Sport; the Centre for Data Ethics and Innovation; and the Ministry of Justice. He previously held posts at the University of Bristol, where he explored the ethical and epistemological impact of big data and artificial intelligence as a postdoctoral researcher, and where he completed his PhD in 2017. Research Interests: Philosophy of Cognitive Science and Artificial Intelligence, Digital Ethics, Decision Theory, Public Policy and Human-Computer Interaction. [email protected]

Luciano Floridi is the OII’s Professor of Philosophy and Ethics of Information at the University of Oxford, where he is also the Director of the Digital Ethics Lab of the Oxford Internet Institute. Still in Oxford, he is Distinguished Research Fellow of the Uehiro Centre for Practical Ethics of the Faculty of Philosophy and Research Associate and Fellow in Information Policy of the Department of Computer Science. Outside Oxford, he is Turing Fellow of the Alan Turing Institute (the national institute for data science and AI) and Chair of its Data Ethics Group and Adjunct Professor (‘Distinguished Scholar in Residence’) of the Department of Economics, American University, Washington D.C. He is deeply engaged with emerging policy initiatives on the socio-ethical value and implications of digital technologies and their applications. He has worked closely on digital ethics (including the ethics of algorithms and AI) with the European Commission, the German Ethics Council and, in the UK, with the House of Lords, the House of Commons, the Cabinet Office and the Information Commissioner’s Office, as well as with multinational corporations (e.g. Cisco, Google, IBM, Microsoft and Tencent). Among his current commitments, he is Chair of the Ethics Committee of the Machine Intelligence Garage project, Digital Catapult, UK innovation programme; Member of the Board of the UK’s Centre for Data Ethics and Innovation (CDEI); the Advisory Board of The Institute for Ethical AI in Education; the EU Commission’s High-Level Group on Artificial Intelligence; EY AI Advisory Board; and the Advisory Board of the Vodafone Institute for Society and Communications. Research Interests: Digital Ethics (including the ethics of AI, and Information and Computer Ethics), Philosophy of Information and Philosophy of Technology. 
Among his recent books, all published by Oxford University Press (OUP): The Logic of Information (2019); The Fourth Revolution - How the infosphere is reshaping human reality (2014), winner of the J. Ong Award; The Ethics of Information (2013); The Philosophy of Information (2011).  

[email protected]

Chapter 2

Supporting Human Autonomy in AI Systems: A Framework for Ethical Enquiry

Rafael A. Calvo, Dorian Peters, Karina Vold, and Richard M. Ryan

Abstract  Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects on human autonomy of digital experiences are neither straightforward nor consistent, and are complicated by commercial interests and tensions around compulsive overuse. This multi-layered reality requires an analysis that is itself multidimensional and that takes into account human experience at various levels of resolution. We borrow from HCI and psychological research to apply a model (“METUX”) that identifies six distinct spheres of technology experience. We demonstrate the value of the model for understanding human autonomy in a technology ethics context at multiple levels by applying it to the real-world case study of an AI-enhanced video recommender system. In the process we argue for the following three claims: (1) There are autonomy-related consequences to algorithms representing the interests of third parties, and they are not impartial and rational extensions of the self, as is often perceived; (2) Designing for autonomy is an ethical imperative critical to the future design of responsible AI; and (3) Autonomy-support must be analysed from at least six spheres of experience in order to appropriately capture contradictory and downstream effects.

Keywords  Human autonomy · Artificial intelligence · Targeting · Recommender systems · Self-determination theory

R. A. Calvo (*)
Dyson School of Design Engineering, Imperial College London, London, UK
Leverhulme Centre for the Future of Intelligence, Cambridge, UK
e-mail: [email protected]

D. Peters
Leverhulme Centre for the Future of Intelligence, Cambridge, UK
Design Lab, University of Sydney, Sydney, NSW, Australia

K. Vold
Leverhulme Centre for the Future of Intelligence, Cambridge, UK
Alan Turing Institute, London, UK

R. M. Ryan
Institute for Positive Psychology and Education, Australian Catholic University, North Sydney, NSW, Australia

© The Author(s) 2020
C. Burr, L. Floridi (eds.), Ethics of Digital Well-Being, Philosophical Studies Series 140, https://doi.org/10.1007/978-3-030-50585-1_2

2.1  Introduction

Digital technologies now mediate most human experience, from health and education to personal relations and politics. ‘Mediation’ here refers not only to facilitation, but also to the ways technologies shape our relations to the environment, including the ways we perceive and behave in different situations. This sense of mediation goes beyond the concept of a technology as a channel of information. It acknowledges that, by changing our understanding of the world and our behaviour, technology affects core features of our humanity. Verbeek (2011), among others, has argued that acknowledging technological mediation is important to understanding the moral dimension of technology, as well as its implications for design ethics.

In this paper we focus on human autonomy in relation to technology design ethics. We rely on the definition of autonomy put forward in self-determination theory (SDT; Ryan and Deci 2017), a current theory of motivational and wellbeing psychology. SDT’s approach to autonomy is consistent with both analytic (e.g., Frankfurt 1971; Friedman 2003) and phenomenological perspectives (e.g., Pfander 1967; Ricoeur 1966) in viewing autonomy as a sense of willingness and volition in acting (Ryan and Deci 2017). Common to these definitions is the view that autonomous actions are those that are, or would be, “endorsed by the self”. Critically, according to this definition, autonomy involves acting in accordance with one’s goals and values, which is distinct from the use of autonomy as simply a synonym for either independence or being in control (Soenens et al. 2007). According to SDT, one can be autonomously (i.e. willingly) dependent or independent, or one can be forced into these relations. For instance, a person can be autonomously collectivistic, and endorse rules that put group over self (Chirkov et al. 2003).
This distinction is significant for our discussion given that, vis-à-vis technologies, individuals may or may not endorse giving over, or alternatively being forced to retain, control over the information or services being exchanged (Peters et al. 2018).

The psychological evidence aligned to this conception of autonomy is considerable. From workplaces to classrooms, health clinics and sports fields (Ryan and Deci 2017), participants who experience more autonomy with respect to their actions have shown more persistence, better performance and greater psychological wellbeing.

Evidence for the importance of autonomy-support to human wellbeing, and to positive outcomes more generally, has more recently led to concern about autonomy within technology design (Peters et al. 2018). However, the identification of design strategies for supporting human autonomy poses at least two significant challenges. The first regards breadth: design for autonomy covers very broad territory, given that technologies now mediate experiences in every aspect of our lives and at different stages of human development, including education, the workplace, health, relationships and more. The second challenge is that such design practices raise significant ethical questions which can challenge the core of how autonomy has been conceived across multiple disciplines. For example, most technologies are designed to influence (i.e. support or hinder) human behaviours and decision making. As Verbeek (2011) has put it, “Technological artifacts are not neutral intermediaries but actively co-shape people’s being in the world: their perceptions and actions, experience and existence…When technologies co-shape human actions, they give material answers to the ethical question of how to act.” Therefore, intentionally or not, technology design has an impact on human autonomy and, as such, on human opportunities for wellbeing.

This paper elaborates on the nuances of the experience of autonomy within technology environments using a model called METUX (“Motivation, Engagement and Thriving in User Experience”; Peters et al. 2018). The model has been described as “the most comprehensive framework for evaluating digital well-being to date” (Burr et al. 2020), and is based on self-determination theory, a body of psychological research that has strongly influenced autonomy-support in fields such as education, parenting, workplaces and health care (Ryan and Deci 2017).
SDT holds that human wellbeing is dependent on the satisfaction of basic psychological needs for autonomy, competence, and relatedness. Herein, we focus exclusively on autonomy owing to its particular relevance in relation to discussions of machine autonomy, and its centrality among principles for ethical AI.

We begin by briefly reviewing some of the predominant conceptions of autonomy within philosophy, giving special attention to notions that stand to inform the design of AI environments. In Sect. 2.2, we look at how autonomy, and ethics more broadly, have been perceived within the engineering and technology industry. In Sect. 2.3, we summarise the work in human-computer interaction (HCI) that has bridged technology with the social sciences to improve support for human autonomy within digital systems—sometimes within the larger context of designing for psychological wellbeing. In Sects. 2.4 and 2.5, we provide a rationale for the specific value of SDT, as compared to other psychological theories, for understanding AI experience. Then, in Sect. 2.6 we describe the example of the YouTube video recommender system as a case study for illustrating various autonomy-related tensions arising from AI, and the value of applying the METUX model for better understanding these complexities. The model is elaborated in Sect. 2.6 and applied to the case study in Sect. 2.7. In Sect. 2.8, we conclude.


2.2  Philosophical Positions on Autonomy

Concepts of human autonomy have long played an important role in moral and political philosophy. Despite general agreement that human autonomy is valuable and merits respect, there is less agreement around what autonomy is, and why (and to what extent) it should be valued and respected. We will not attempt to settle these disagreements, but here we will lay out a few conceptual distinctions with the aim of providing clarity around the notion as we employ it.

The term autonomy was originally used by the Ancient Greeks to characterize self-governing city states. They did not explicitly discuss the concept of individual autonomy, which has, in contrast, preoccupied many modern philosophers. John Stuart Mill, in his famous work On Liberty, did not use the term autonomy, but nonetheless argued for the concept of “self-determination” broadly as “the capacity to be one’s own person, to live one’s life according to reasons and motives that are taken as one’s own and not the product of manipulative or distorting external forces.” (Christman 2018). The value of this capacity is not limited to any domain—it is a characteristic that can apply to any aspect of an individual’s life, though for Mill, it is perhaps most significant in the moral and political spheres (Christman 1989, 2018). Indeed, he saw self-determination as a basic moral and political value because it is “one of the central elements of well-being” (Mill 1859/1975, ch. 3).

For Mill, then, individual autonomy is a psychological ideal, and represents a constitutive element of one’s well-being. Furthermore, for Mill this ideal has a normative aspect, which grounds certain duties on others. Individuals have a right to self-determine, and so others have an obligation not to unduly interfere with others’ decisions or ability to live in accordance with their own reasons and motives. Of course, Mill is just one of many philosophers of autonomy.
Immanuel Kant, for example, was occupied with an a priori concept of rational autonomy that, he argued, is presupposed by both morality and all of our practical thought. Hill (2013) highlights that in Kant’s view, certain conditions should be met for a decision or action to be considered autonomous. First, the agent has to have certain relevant internal cognitive capacities that are necessary for self-governance, but that are widely thought to be lacking in most animals, children, and some mentally disabled adults. Second, the individual has to be free from certain external constraints. Like Mill, Kant also recognized that our capacities for rational autonomy can be illegitimately restricted by external forces in many ways, including “by physical force, coercive threats, deception, manipulation, and oppressive ideologies” (Hill 2013), and that a legal system is needed to “hinder hindrances to freedom” (Kant, RL 6:230–33; quoted in Hill 2013). The notion of manipulation and deception as a hindrance to autonomy is particularly relevant within certain technological environments, and we will touch on this later within our example.

If, for the sake of this discussion, we accept autonomy as willingness and self-endorsement of one’s behaviours, then it is useful to highlight the opposite, heteronomy, which concerns instances when one acts out of internal or external pressures that are experienced as controlling (Ryan and Deci 2017). Feeling controlled can be quite direct, as when a technology “makes” someone do something that she does not value (e.g., an online service that forces the user to click through unwanted pages illustrates a minor infringement on autonomy). But it is not only external factors that can be coercive; there are also internally controlling or heteronomous pressures (Ryan 1982) that can reflect a hindrance to autonomy. For example, technology users can develop a compulsion that leads to overuse, as widely seen with video games and social media (e.g., Przybylski et al. 2009). Many use the term “addiction” in describing overuse to convey this coercive quality. Popularly, the concept of FOMO (fear of missing out) describes one such type of technology-induced compulsion to constantly check one’s social media. Przybylski et al. (2013) found that FOMO was higher in people who heavily used social media, and was also associated with lower basic need satisfaction, including lower feelings of autonomy, and lower mood. Such examples suggest that even though a user might appear to be opting into a technology willingly, the experience may nonetheless feel controlling. Self-reports that “I can’t help it” or “I use it more than I’d like to” reflect behaviour that is not fully autonomous (Ryan and Deci 2017). In fact, there are now many technologies available which are dedicated solely to helping people regain self-control over their use of other technologies (Winkelman 2018).

Taking these points together, we can outline a series of characteristics for a conceptualisation of autonomy useful for AI and technology contexts. For this working definition, we can conclude that human autonomy within technology systems requires:
• A feeling of willingness, volition and endorsement.
• The lack of pressure, compulsion or feeling controlled.
• The lack of deception or deliberate misinformation.
Although this is, of course, not a complete or sufficient conceptualisation for operationalising human autonomy within AI systems, it provides a helpful foundation for addressing a large number of the key tensions that arise within these contexts, as will be demonstrated within our case study in the second half of this chapter. First, however, we turn to perceptions and manifestations of autonomy within computer science, engineering, and human-computer interaction.

2.3  Notions of Autonomy Within Technology Fields

Although we have highlighted that human autonomy has long been important to philosophy and the social sciences, engineering and computer science have tended to focus on machine autonomy. For example, as of 2019, a search for the word “autonomy” in the Digital Library of the Association for Computing Machinery (ACM) reveals that, of the top 100 most cited papers, 90% are on machine autonomy. However, human autonomy has begun to assert itself within the technology industry of late, due to growing public concern over the impacts of AI on human wellbeing and society. In response, philosophers and technology leaders have gathered and come to a consensus over the need to respect and support human autonomy within the design of AI systems (Floridi et al. 2018). New sets of AI principles codify autonomy-support, mirroring a similar refocus on autonomy within health (Beauchamp and Childress 2013).

The Institute of Electrical and Electronics Engineers (IEEE), the world’s largest professional engineering organisation, states that its mission is to “foster technological innovation and excellence for the benefit of humanity” (IEEE 2019). This benefit has traditionally been interpreted as maximising productivity and efficiency (i.e. the rate of output per unit of input), an approach that has fuelled decades of work on automation and computer agency within the industry. Automation is a design strategy aimed at maximising productivity by avoiding the need for human intervention. As such, the vast majority of research in engineering has focused on the design of autonomous systems, particularly robots and vehicles (e.g., Baldassarre et al. 2014).

Within engineering practice, there has traditionally been little questioning of productivity, efficiency, and automation as primary strategies for benefiting humanity. Ethics within engineering education has focused on ensuring safe and properly functioning technologies. While it could be argued that productivity is a poor proxy for human benefit, it might also be argued that, at a basic level, by creating products to satisfy human needs, engineers have taken humans as ends-in-themselves and therefore essentially acted ethically (in Kantian terms). Yet this would be true only under conditions where the needs satisfied are ones both endorsed and valued by users.
In fact, many new business models focus on users' data and attention as the basis for monetisation, turning this traditional value structure on its head and making humans merely a "means-to-an-end". For example, on massively popular platforms like YouTube, Facebook and Instagram, what is being harvested and sold is user attention, which is valuable to marketers of other products. In this new economic model of attention trading, engineers create technologies that collect user data and attention as input, and produce hours of engagement and user profiling as output to be sold to advertisers. Within these systems, the human is an essential 'material', or means to an end.

Aside from some of the broad ethical issues relating to this business model, implications for human autonomy can specifically arise from a misalignment between commercial interests and user interests. Where marketers are the "real" customers, serving users' best interests is only important to the extent that doing so is necessary for serving the interests of marketers. Therefore, if there are ways to increase engagement that are manipulative or deceptive to the user, but effective, then these methods are valuable to business (and to the machine learning algorithms programmed to 'value' these things and optimise for them). In addition, when users choose to adopt a technology under conditions in which the use of their behaviour, personal information, or resources is not disclosed, the user's autonomy is compromised. This is especially true where the information would potentially alter their choices. Not surprisingly, human autonomy has

2  Supporting Human Autonomy in AI Systems: A Framework for Ethical Enquiry


suffered in a number of ways within this new business model, including through increased exposure to misinformation, emotional manipulation and exploitation. We touch on some of these in more detail in our case study later.

Concerns about this new economy, sometimes referred to as "surveillance capitalism", have grown steadily (Zuboff 2019; Wu 2017). In response, engineers and regulators have begun attempting to devise ethical boundaries for this space. For example, in 2017 the IEEE began the development of a charter of ethical guidelines for the design of autonomous systems that places human autonomy and wellbeing (rather than productivity) at the centre (Chatila et al. 2017). Indeed, a growing number of employees and industry leaders, many responsible for contributing to the most successful of the attention-market platforms, are beginning to openly acknowledge the intrinsic problems with these systems and to push for more "responsible" and "humane" technologies that better benefit humanity (e.g. humanetech.com; doteveryone.org.uk).

Thus, at the cusp of the third decade of the twenty-first century, the technology industry finds itself in a kind of ethical crisis with myriad practical implications. Many who benefit from the attention market continue to defend its current strategies, while others are increasingly expressing self-doubt (e.g. Schwab 2017; Lewis 2019), signing ethical oaths (e.g. the Copenhagen Letter, see Techfestival 2017), joining ethics committees (see doteveryone.org for a list of charters, oaths and committees), and challenging the status quo within their own organisations (e.g. Rubin 2018). Others, having identified business models as core to the problem, are experimenting with alternative models, such as subscription services (which generally do not rely on ad revenue), social enterprises, and "B corporations" designed to "balance purpose and profit".1

2.4 Designing for Autonomy in HCI

A handful of researchers in human-computer interaction have been working on supporting human autonomy through design since at least the 1990s. For example, Friedman (1996) described key design factors for a user interface that impact autonomy, including system capability, system complexity, misrepresentation, and fluidity. In the last five years, a number of researchers have developed new design methods for supporting autonomy which go beyond the immediate effects of a user interface and extend to autonomy as a life-wide experience. These methods have often approached autonomy through the larger contexts of psychological wellbeing (Peters et al. 2018; Gaggioli et al. 2017; Calvo and Peters 2014; Desmet and Pohlmeyer 2013; Hassenzahl 2010) and human values (Friedman and Hendry 2019; Flanagan and Nissenbaum 2014), and often build on psychological theories, such as

1 See http://bcorporation.net/


theories of positive psychology (Seligman 2018), hedonic psychology (Kahneman et al. 1999), or motivation (Hekler et al. 2013). These approaches have generally been based on the idea of translating psychology research into design practice. However, empirical evidence for the effectiveness of these translational models, and for the extent to which they impact the quality of design outcomes, is still emerging. Among the psychological theories translated into the design context, SDT has perhaps been the most systematically applied. The likely reasons for this are outlined below.

2.5 SDT as a Basis for Autonomy-Supportive Design

SDT has gathered the largest body of empirical evidence in psychology with respect to issues of autonomy, psychological needs, and wellbeing. In its broadest strokes, SDT identifies a small set of basic psychological needs deemed essential to people's self-motivation and psychological wellbeing. It has also shown how environments that neglect or frustrate these needs are associated with ill-being and distress (Ryan and Deci 2000, 2017). These basic needs are:

• Autonomy (feeling willingness and volition in action),
• Competence (feeling able and effective),
• Relatedness (feeling connected and involved with others).

Although in this article we focus on the individual's need for autonomy, we note that aiming to support all three is important for human wellbeing and is therefore an essential criterion for the ethical design of technology. Indeed, innate concerns over our basic psychological needs are reflected in modern anxieties over AI systems. Take, for example, the fears that AI will take over our jobs and skills (threatening our competence), take over the world (threatening our autonomy), or replace human-to-human connection (threatening our relatedness). Ensuring support for basic psychological needs constitutes one critical component of any ethical technology solution.

In addition to its strong evidence base, there are a number of qualities of self-determination theory that make it uniquely applicable within the technology context. First, as a tool for applied psychology, SDT is sufficiently actionable to facilitate application to technology and design. However, it is not so specific that it loses meaning across cultures or contexts.
Up to this point, research on psychological needs across various countries, cultures, and human developmental stages provides significant evidence that autonomy, competence and relatedness are essential to healthy functioning universally, even if they are met in different ways and/or valued differentially within different contexts (e.g., Yu et al. 2018). Second, the SDT literature describes, and provides empirical evidence for, a spectrum of human motivation which runs along a continuum from lesser to greater autonomy (Howard et al. 2017; Litalien et al. 2017). This motivation continuum has, for example, been used to explain varying levels of technology adoption and


engagement, as well as the powerful pull of video games (Ryan et al. 2006; Rigby and Ryan 2011). An additional pragmatic point is that SDT provides a large number of validated instruments for measuring autonomy (as well as wellbeing and motivation). These can be used to quantitatively compare technologies or designs with regard to an array of attributes and impacts.

Related to this point, perhaps the most important advantage of SDT for integrating wellbeing psychology into the technology context is its unique applicability to almost any resolution of phenomenological experience. That is to say, its instruments and constructs are as useful for measuring autonomy at the detailed level of user-interface controls as they are for measuring the experience of autonomy in someone's life overall. In contrast, most other theories of wellbeing are applicable only at higher levels. For example, Quality of Life measures used in Wellbeing Economics focus on the life level (Costanza et al. 2007). Moreover, SDT's measures can be used to assess the psychological impacts of any technology, regardless of its purpose and of whether it is used only occasionally or every day.

For example, Kerner and Goodyear (2017) used SDT measures to investigate the psychological impact of wearable fitness trackers over eight weeks of use. Results showed significant reductions in need satisfaction and autonomous motivation over that time. Qualitative evidence from focus groups suggested the wearables catalysed short-term increases in motivation through feelings of competition, guilt, and internal pressure, suggesting some ways in which lifestyle technologies can have hidden negative consequences in relation to autonomy. Furthermore, SDT measures have been widely applied to compare various video game designs, showing how design approaches can differentially impact autonomy, and thereby influence sustained engagement and enjoyment (e.g., Ryan et al. 2006; Peng et al. 2012).
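To illustrate what "validated instruments" means in practice, the sketch below scores a hypothetical SDT-style need-satisfaction subscale. The item responses, the 7-point format, and the reverse-keyed item are our own assumptions for demonstration only; real SDT instruments define their own items, anchors and reverse-keyed entries.

```python
# Illustrative sketch only: averaging Likert responses for one subscale,
# flipping any reverse-keyed items (e.g. "I felt pressured while using
# the app" counts against autonomy, so a low rating scores high).

def score_subscale(responses, reverse_keyed=(), scale_max=7):
    """Mean of Likert responses, with reverse-keyed items flipped."""
    total = 0.0
    for i, r in enumerate(responses):
        total += (scale_max + 1 - r) if i in reverse_keyed else r
    return total / len(responses)

# Hypothetical autonomy items rated 1-7; item at index 1 is reverse-keyed.
autonomy_responses = [6, 2, 5, 6]
print(score_subscale(autonomy_responses, reverse_keyed={1}))  # 5.75
```

Because the same scoring logic applies whether the items ask about an interface control or about life overall, such scales allow direct quantitative comparison across the different resolutions discussed above.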
As helpful as SDT promises to be for technology design research, it has not, until recently, provided a framework for differentiating experiences of autonomy with respect to the various layers of human-technology interactions. This gap has only become salient as the theory has been applied in technology contexts where a large range of different resolutions must be considered and where these can present contradictory effects on psychological needs. For example, "autonomy-support", with respect to technology, might refer to customisable settings that provide greater choice in use of the software. Alternatively, it might refer to the way a self-driving car affords greater autonomy in the daily life of someone who is physically disabled. While both describe experiences of increased autonomy, and autonomy-supportive design, they are qualitatively very different, and only the latter is likely to cause measurable impact at a life level. Moreover, a game may increase psychological need satisfaction within the context of gameplay (providing strong experiences of autonomy and competence during play) but hinder these same needs at a life level (if overuse feels compulsive and crowds out time for taking care of work, family and other things of greater import).

Therefore, it is clear that greater precision is required in order to effectively identify and communicate conceptions of autonomy at different resolutions within technology experience. Calvo et al. (2014) first highlighted this need and presented a


framework distinguishing four "spheres of autonomy". Peters et al. (2018) expanded on this work substantially, developing, as part of a larger framework, a six-sphere model of technology experience that identifies six distinct levels at which all three psychological needs can be impacted. It is this model that we believe can be usefully applied to our understanding of ethical conceptions of autonomy within technology experiences, and we describe it in greater detail in Sect. 2.7. However, it may first be helpful to turn to a case study to provide greater context. Specifically, we provide a brief analysis of the YouTube video recommender system and its implications for human autonomy.

2.6 Autonomy in Context: The Example of the YouTube Recommender System

Different accounts of autonomy have significantly different practical implications within technology experience. For example, when discussing freedom of speech on the Internet, autonomy is appealed to both by those arguing for the right to free speech (even when it is hateful) and by those defending the right to be free from hate speech (Mackenzie and Stoljar 2000). The designers of the systems that mediate today's speech must make values-based decisions that affect this balance, and that impact how individuals experience communication with others.

In the case of YouTube, for example, the action of uploading or 'liking' a discriminatory video occurs within the context of a system of recommendations that either supports or suppresses the likelihood of such videos being seen. A user who "likes" one video containing slightly racially bigoted content is then likely to be shown more of them, many of which may be more explicitly discriminatory, since the algorithm is influenced by the engagement advantages of extreme and emotionally charged headlines (i.e. clickbait). Before long, this user's YouTube experience may be dominated by videos aligned only to a particular extreme view. This experience leaves the user within a social "reality" in which "everyone" seems to support what, in truth, may be a very marginal view. "Evidence" is given not only by the videos that constitute this environment, but also by the thousands of likes associated with each, since the videos have previously been shown primarily to users more likely to "like" them, thanks to the recommendation system. The ideological isolation caused by this "filter bubble" does not even require the user to enter search terms, because recommendations are "pushed" unsolicited into the visual field beside other videos, and may even autoplay.
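The narrowing dynamic described above can be made concrete with a toy simulation. Everything in the sketch is an assumption for illustration only: the content categories, the user model and the multiplicative update rule bear no relation to YouTube's actual (proprietary) system, but even this minimal engagement-driven loop skews recommendations toward whatever the user has previously "liked".

```python
import random

random.seed(0)

# Toy recommender: three content categories with initially equal weights;
# each "like" multiplicatively reinforces the liked category.
weights = {"mainstream": 1.0, "fringe": 1.0, "extreme": 1.0}
# Assumed user model: emotionally charged content is somewhat more
# likely to be "liked" once it is shown.
like_prob = {"mainstream": 0.3, "fringe": 0.5, "extreme": 0.6}

def recommend(w):
    """Sample one category in proportion to its current weight."""
    cats, vals = zip(*w.items())
    return random.choices(cats, weights=vals)[0]

for _ in range(500):
    shown = recommend(weights)
    if random.random() < like_prob[shown]:
        weights[shown] *= 1.1   # engagement-driven reinforcement

total = sum(weights.values())
shares = {c: round(v / total, 2) for c, v in weights.items()}
print(shares)  # the distribution typically ends up skewed toward high-like categories
```

The point of the sketch is that no malicious intent is needed: a neutral-looking "show more of what gets liked" rule, iterated, is sufficient to produce the biased sample the user then mistakes for the zeitgeist.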
This scenario shows how social influence can be constructed by a system that is deliberately designed to reflect a biased sample. For an unwitting user, this biased representation of the zeitgeist creates a reinforcement feedback loop. Furthermore, consider the consequences of how frictionless it is to upload a racially charged video within systems that create a social environment in which such


content would mostly receive positive comments. While in a non-digitally mediated life a person might not consider producing or engaging with such content because of negative social reactions, in the algorithmically shaped online world the same behaviour is encouraged and perceived as a norm. In other words, before our content was filtered by an AI system, one had to consider the potential diversity of 'listeners'. Few would stand on a busy sidewalk in a diverse metropolitan area handing out racially charged flyers. But on YouTube, one can experiment with extreme views and put out hateful content with some guarantee that it will be shown to an audience that is more likely to receive it well.

The example of YouTube's recommender and "like" systems lends strong evidence to the notion of technological mediation (Verbeek 2011) and the "hermeneutic relations" (Ihde 1990) through which human interpretation of the world is shaped. The AI-driven recommendation system shapes not only how we perceive our social situation and our understanding of the world, but also our behaviour.

This presents an interesting challenge for autonomy support. Unaware of the bias, a user is likely to feel highly autonomous during the interaction. However, the misinformation (or misrepresentation) potentially represents a violation of autonomy according to most of the philosophical views discussed earlier, as awareness of potentially conflicting alternative information would likely become more phenomenologically salient if the user were informed of the manipulation. Understanding autonomy as reflective endorsement (e.g., Frankfurt 1971), technologies that obscure relevant considerations compromise autonomy. This is akin to similar problems within human-human relations, and the definition of 'misleading' (as opposed to erroneous) is sometimes controversial, since it is often based on intentions, which can be difficult to prove.
For example, it may benefit technology makers to deliberately obscure information about how data is used (hiding it within inscrutable terms and conditions). For instance, some developers obscure the uses they may make of location data (Gleuck 2019). In our case study, it is unlikely that YouTube's developers deliberately intend to fuel radicalisation. However, they might choose to overlook the effect if the technical approach is sufficiently effective by other measures.

While human intention can be difficult to prove, an algorithm's "intention" is far more straightforward. It must be mathematically defined based on an explicit goal, for example, "optimise user engagement". This allows for ethical enquiry into the potential consequences of these goals and algorithmic drivers. If we know the algorithm "intends" to do whatever will most effectively increase user engagement, and that it does so by narrowing the diversity of content shown, what might some of the implications be for human autonomy?

In one sense, YouTube's system can be thought of as empowering user autonomy, for both producers and consumers of content. It empowers producers to post the content they want to post, while at the same time making it less likely that someone who would be offended will be shown it (freedom to create hate speech is respected while freedom to be free from hate speech is also supported). Indeed, at one level the 'dilemma' of hate speech has been resolved (in essence, by creating different worlds and allowing each user to exist in the one they prefer).
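Because an algorithm's "intention" must be written down as an explicit objective, it can be inspected and altered. The sketch below is our own illustration, not YouTube's code: it contrasts a ranker that scores candidates purely by predicted engagement with one that subtracts a penalty for repeating topics the user has already seen, a simple diversity heuristic. All names and engagement numbers are invented for the example.

```python
def rank(candidates, seen_topics, diversity_weight=0.0):
    """Score = predicted engagement minus a penalty for topic repetition.

    `candidates` is a list of (title, topic, predicted_engagement) tuples;
    the engagement figures are made up for illustration.
    """
    def score(item):
        _, topic, engagement = item
        repetition = seen_topics.count(topic)
        return engagement - diversity_weight * repetition
    return sorted(candidates, key=score, reverse=True)

candidates = [
    ("Outrage compilation #7", "outrage", 0.90),
    ("Outrage compilation #8", "outrage", 0.88),
    ("Local history documentary", "history", 0.40),
]
seen = ["outrage", "outrage", "outrage"]

# Pure engagement objective: the repeated high-engagement topic dominates.
print([title for title, _, _ in rank(candidates, seen)])
# With a diversity penalty, the over-represented topic is demoted.
print([title for title, _, _ in rank(candidates, seen, diversity_weight=0.2)])
```

The ethical point is that the choice of `diversity_weight` (here zero by default) is a values-based design decision with direct consequences for the diversity of content a user encounters, and hence for their autonomy.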


But these virtual worlds are illusory and ephemeral, and their effects can carry into the real world. We believe a new dilemma arises that can only be clearly seen when viewed across distinct spheres of technology experience. For instance, this optimistic analysis of YouTube's design as a solution to freedom-of-speech tensions relies not only on ignoring the extent to which recommender systems shape the free speech that is viewed, but also on an entirely individualistic and exclusively low-resolution analysis of autonomy, one that excludes the broader social reality of the individual. In this non-relational account, the individual can be considered "autonomous" as long as options are offered and not imposed by the system. However, the system must inevitably "impose" some content, to the extent that it cannot show all available videos and must choose on behalf of the user what options they will have. When the number of options is infinite, the choice architecture may be driven by social variables.

Not taking into account the broader social impacts of the technology's silent restructuring of reality also has consequences. One example of the consequences of ignoring the socially situated reality of technologies can be found in the work of Morley and Floridi (2019a, b), who explored the narratives of empowerment often used in health policy. They consider how digital health technologies (DHTs) act as sociocultural products and therefore cannot be considered separately from social norms or the effects they have on others. In this context, health technologies designed to "empower" (i.e. support human autonomy) can create scenarios of control through which potentially shaming or 'victim blaming' messaging fosters introjected motivation, whereby self-worth is contingent on performing the prescribed behaviours (see also Burr and Morley 2019).
We argue that a new conceptual lens is needed to make sense of scenarios like these—a lens that considers the different levels at which personal autonomy can be impacted. While any perspective on these questions is likely to be incomplete, we believe that at least acknowledging the various interdependent layers of impact is an important start.

2.7 Applying the "METUX" Model to the Analysis of Autonomy Support Within Digital Experience

As mentioned previously, self-determination theory posits that all human beings have certain basic psychological needs, including needs for competence, relatedness, and, most germane to our discussion, autonomy. Significant evidence for this theory of basic psychological needs (BPNs) has accrued over the past four decades and includes research and practical application in education, sport, health, the workplace and many other domains (see Ryan and Deci 2017; Vansteenkiste et al. 2019 for extensive reviews). Recent efforts applying SDT to technology have revealed the need for an additional framework of analysis in order to more accurately understand BPNs within the technology context. In response, Peters et al. (2018) developed a model of


Fig. 2.1  Spheres of technology experience, a component of the METUX model

"Motivation, Engagement and Thriving in User Experience" (METUX). Figure 2.1 provides a visual representation of the model. The METUX model, among other things, introduces six separable "Spheres of Technology Experience" in which a technology can have an impact on our basic psychological needs.

Broadly, the first sphere, Adoption, refers to the experience of a technology prior to use, and the forces leading a person to use it. For example, marketing strategies can tap into internal self-esteem pressures to induce people to buy, or they can take an informational and transparent approach to encourage choice. Adoption can be a function of external and social pressures, or something more volitional.

Once someone begins using a technology, they enter the next four spheres of the "user experience". At the finest level of granularity, the Interface sphere involves a user's experience of interacting with the software itself, including the use of navigation, buttons and controls. At this level, a technology supports psychological needs largely by supporting competence (via ease-of-use) and autonomy (via task/goal support and meaningful options and controls).

The next sphere, Task, refers to discrete activities facilitated by the technology, for example "tracking steps" in the case of a fitness app or "adding an event" as part of using calendar software. Separately from the effect of the interface, these tasks can each be accompanied by more or less need satisfaction. Some tasks, for example, may feel unnecessary, irrelevant or even forced on users, whereas others are understood as useful, and thus done with willingness.

Combinations of tasks generally contribute to an overall behaviour, and the Behaviour sphere encompasses the overarching goal-driven activity enabled, or enhanced, by the technology. For example, the task "step-counting" may contribute to the overall behaviour "exercise". Regardless of how need-supportive a


technology is at the interface and task levels, a behaviour such as exercising might be more or less a self-endorsed goal and domain of activity.

The final sphere within the user's direct experience is Life, which captures the extent to which a technology influences the fulfilment of psychological needs, such as autonomy, within life overall, thus potentially affecting the extent to which one is "thriving". For example, even though a person may autonomously adopt an activity tracker, and feel comfortable at the interface and task levels, the use of the tracker may still compromise one's overall sense of autonomy and wellness at the life level, as suggested by the research by Kerner and Goodyear (2017) reviewed above.

In sum, a user may feel autonomous when navigating the interface of a fitness app, but not with respect to step counting (e.g. "I can't possibly do 10,000 steps every day"). Or, they may find step counting increases their sense of autonomy, but not their experience of autonomy with regard to exercise overall. Finally, a technology may fulfil psychological needs at the levels of interface, task and behaviour but not have a measurable impact on one's life. The spheres framework is helpful to design because it allows designers to identify and target need satisfaction at all relevant levels; the existence of measures for need satisfaction that can be applied at most of these spheres also makes it actionable.

Finally, expanding beyond the user experience, we come to Society, which involves impact on need satisfaction in relation to all members of a society, including non-users of a technology (and non-humans). For example, a person might enjoy their new smartphone, and endorse its adoption, even though its gold-containing components may be manufactured through abusive labour practices.
More broadly, the volitional use of smartphones may change the overall patterns of interaction between humans, in ways both better and worse, or have a collective impact on child development. More detailed explanations of each of these spheres are given in Peters et al. (2018).

It is important to note that the boundaries between spheres are conceptual, and examples of overlap and interrelation naturally exist. The point is not to overemphasise the boundaries but to provide a way of organising thinking and evaluation that can address the layered, and potentially contradictory, parallel effects of technology designs (e.g., when a technology supports psychological needs at one level while undermining them at another).
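As an organising aid, the six spheres lend themselves to a simple checklist structure. The sketch below is our own paraphrase of the kinds of autonomy-related questions each sphere raises, not part of the METUX model itself; a real evaluation would use validated SDT instruments rather than free-text prompts.

```python
# Illustrative checklist pairing each METUX sphere with an example
# autonomy-focused prompt (prompts are our own paraphrases).
METUX_SPHERES = {
    "Adoption":  "Was uptake volitional, informed and non-exclusionary?",
    "Interface": "Do controls offer meaningful, endorsed choices?",
    "Task":      "Does each discrete activity feel useful rather than forced?",
    "Behaviour": "Is the overarching activity a self-endorsed goal?",
    "Life":      "Does use support, or crowd out, autonomy in life overall?",
    "Society":   "What are the effects on non-users and collective autonomy?",
}

def review(findings):
    """Pair each sphere's prompt with a finding; flag unexamined spheres."""
    return {
        sphere: (prompt, findings.get(sphere, "NOT YET EXAMINED"))
        for sphere, prompt in METUX_SPHERES.items()
    }

report = review({"Interface": "autoplay is opt-out rather than opt-in"})
print(report["Interface"][1])  # autoplay is opt-out rather than opt-in
print(report["Society"][1])    # NOT YET EXAMINED
```

A structure like this makes the layered, potentially contradictory effects explicit: a finding recorded at one sphere (e.g. a need-supportive interface) does not discharge the obligation to examine the others.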

2.8 Returning to the Case Example: Applying the METUX Spheres to YouTube Systems

In the previous section we described the spheres in relation to the satisfaction of psychological needs. Coming back to our YouTube case study, we can begin to apply the lens of the METUX spheres to an exploration of ethical issues to do with autonomy in relation to this technology.


2.8.1 Adoption

Beginning with Adoption, the initial autonomy-related issue that arises is the extent to which someone's adoption of a technology is autonomous (rather than controlled). When someone starts using YouTube for the first time, SDT predicts that the extent to which they do so autonomously (i.e. because they want to, versus because they feel pressured to do so) will have an impact on their engagement afterwards. People are often compelled to use technologies, for example, for work, for school, or in order to be part of a group or community. While technology makers may have little control over this area of impact, they can look at ways to communicate the benefits of the technology (i.e. through marketing) to increase endorsement. An ethical enquiry might explore questions like: "Do people feel pressured to adopt the platform and, if so, what are the sources of that pressure?" "To what extent is the information available about the technology's benefits and risks transparent or misleading?" And, "Is the platform equally available to all people who might benefit from it, or are there exclusionary factors that may be a concern?" (e.g., to do with cost, accessibility, region, etc.).

2.8.2 Interface

Once someone becomes a user of the platform, we turn to the Interface sphere. Within our example, YouTube's autoplay feature is an interface design element that can cause autonomy frustration, as it automatically makes decisions for the user about what they will watch and when, without confirming endorsement. Autoplay can be turned off, but the feature is opt-out rather than opt-in. This clearly benefits media providers by increasing hours of user engagement, but the extent to which it benefits users is more questionable and will likely depend on the individual.

Autoplay is just one example of how even the design of low-level controls can impact human autonomy and carry ethical implications. Design for autonomy-support in this sphere is largely about providing meaningful controls that allow users to manipulate content in ways they endorse. Focusing on ethics at the interface directs our attention to the things over which users are given control and the things over which they are not, as well as the limits placed on that control.

2.8.3 Tasks

Within the Tasks sphere, we encounter the wide range of activities afforded by a system. Specifically, YouTube supports uploading videos, "liking" content, searching, browsing, and creating channels, as well as the tasks performed by the recommender system described previously. One example of ethical enquiry at this level is


provided by Burr et al. (2018), who review the different ways Intelligent Software Agents (ISAs), such as recommender systems, interact with users to achieve their goals. Specifically, they identify four strategies: coercion, deception, trading and nudging, and provide task-level examples such as "recommending a video or news item, suggesting an exercise in a tutoring task, displaying a set of products and prices". Coercion might involve, for example, forcing a user to watch an ad before continuing to a movie. However, even 'forced' behaviours may be relatively endorsed by the user (e.g. "I don't mind watching an ad if it allows the content to be free"), and designers can work to gain this endorsement by providing a rationale for the infringement. Deception involves the use of misleading text or images to engage the user in a task (e.g., phishing scams), while trading occurs when the ISA makes inferences about the user's goals and uses them to offer options that maximise both the user's and the ISA's goals. The final form of interaction presented by the authors is nudging, which involves the use of available information or user bias to influence user decision-making (see Arvanitis et al. 2019).

In workplaces, tasks are particularly important because they are the focus of automation efforts. While the totality of what an employee experiences as her "job" is often hard to automate, tasks are not. In some cases, task automation can benefit a job, but in others it can be enough to eliminate it. For example, Optical Character Recognition might improve the experience of an accountant by making their work more efficient and accurate; however, it may entirely eliminate the job of a data entry person. The impact of AI on workplaces will likely come through the replacement of human tasks. Technology designers will often focus on tasks, both when the goal is to engage the user as a means-to-a-commercial-end and when automating something that a human used to do.

2.8.4 Behaviour

In our YouTube case study, tasks like content browsing and "liking" contribute to different behaviours for different users. Broadly, all users "consume media", and some of them do this for purposes of "entertainment" or "education". A smaller number of users "publish media", and they might do this for the purpose of "communication" or "work", each of which can be thought of as a behaviour. Targeting autonomy at this level draws attention to the needs of content producers to feel autonomous in creating and disseminating their work, and designers might ask "What will help support a video producer's feelings of autonomy?" or "What are their goals and values, and how can YouTube's design support these?" For an ethical enquiry, we might investigate what rights producers retain with respect to their content, what policies and limits are placed on what can be published, as well as the reasons for those limits. We might also scrutinise the ways media is presented or distorted as a result of the unique characteristics of the technology, and what implications this might have for the autonomy of users.


Moreover, the way in which technologies work to grab attention is critical to ethical questions of autonomy, since, if we accept that attention, as William James (1890) described it, is "the essential phenomenon of will", there is little room for autonomous action without it. For example, when a student watches a lecture on YouTube for a class, he is pursuing a goal to learn and fulfil course requirements. When his attention is then drawn away by a video recommendation, the original intention (to learn) may be forgotten, and with it, the nature of the behaviour. Behaviour change is often driven by an intention imposed by a technology, often without the awareness of the individual affected, and therefore can be said to affect autonomy.

2.8.5  Life

In some cases, YouTube may become a significant influence on someone's life in either a positive or negative way. For example, one user might earn enough to make a living as a "YouTuber", while another may start and maintain a yoga practice because of it. On the other hand, another user may find it difficult to stop overusing YouTube, or a vulnerable teenager may find herself with easy access to pro-anorexia videos. As we touched on previously, designing to increase the amount of time users spend on a system can fuel overuse, reducing the time they have available to engage in other healthy activities (such as connecting with friends, parenting, working, or experiencing nature). This can have consequences for life-level autonomy and other psychological needs. In extreme cases, overengagement has been viewed as addiction (Kuss and Lopez-Fernandez 2016), a condition in which autonomy is significantly frustrated. The possible examples are many, but the important point is that circumstances exist in which YouTube will have measurable effects on autonomy at the life level. Ethical enquiry into life-level impact explores influence above and beyond the virtual boundaries of the technology and will rely on research into the human experience of actual use or, for new or prototype technologies, on anticipatory analysis.

2.8.6  Society

Finally, should some of these life-level experiences propagate, they could add up to identifiable impact within the society sphere. Here again, a combination of sociological research on patterns of use and/or anticipatory processes involving multiple stakeholders will be necessary for identifying critical ethical issues that stand to reverberate across society. A useful parallel might be drawn with 'sustainable' and 'circular' design. Just as we need to design in ways that preserve the natural environment for our survival, digital technologies, like YouTube, need to be designed in ways that


R. A. Calvo et al.

minimise negative impact on individuals and societies so as to preserve a 'sustainable' social environment. For example, the extent to which recommendation systems might co-opt attention, change intention and behaviour, and even construct social norms could have deleterious effects on social interaction, societal values and politics. Filter-bubble dynamics, discussed earlier, may deprive individuals of contact with information that would otherwise inform their reflective considerations, leading them to support social movements they would not otherwise endorse. Finally, technologies may drive consumer behaviours that are satisfying in an immediate sense, but which ultimately impact the health and wellness of many members of society, including those who do not consume, or cannot access, the products pushed by a technology.

Addressing societal autonomy requires a relational conception of autonomy (Mackenzie and Stoljar 2000), which acknowledges, among other things, the extent to which individual autonomy is socially situated and therefore influenced by willing obligations and interdependence with others (e.g., caring between parents and children, the collective goals of a group, a desire for national sovereignty). When a child's wellbeing is negatively affected by a technology, it is also the parent's autonomy that suffers. When fairness is undermined by algorithmic bias, it is a segment of society whose autonomy may be affected. When democracy is undermined by the generation and targeting of fake news, national autonomy may be threatened.

We argue that, in order for AI products to be considered responsible, and therefore to be successful in the longer term, they need to consider their impact within all of the above-mentioned spheres—including life and society—both by anticipating potential impact and by evaluating it regularly once the technology is in use.
In Table 2.1 we summarise various types of impact on autonomy arising from the use of YouTube and present these against the METUX spheres of technology experience.

2.9  Discussion: Ethics in the Design of AI Systems

In this chapter we have described how the METUX model's "Spheres of Technology Experience" might contribute to clearer thinking, analysis and design in relation to human autonomy within AI systems. We have proposed that the spheres present a useful starting point for applying the necessary dimensionality to these discussions. The METUX model also provides instruments that could be used to measure the differential impacts of design decisions on users at each level. In order to illustrate this point, we described how the model might be applied in the context of YouTube, a familiar AI case study.

In conclusion, if we are to be guided by both philosophers and psychologists with regard to an ethical future for technology, then there is no way forward without an understanding of human autonomy and of ways to safeguard it through design. Understanding the phenomenological experience of autonomous behaviour, as well as the multifaceted and layered ways in which users of technologies can be


Table 2.1  Spheres of technology experience for YouTube, with examples of factors likely to impact autonomy in each

Adoption: To what extent is technology adoption autonomously motivated?
Support for autonomy: Most users adopt YouTube autonomously, as it is primarily used for entertainment and self-guided learning rather than as an obligatory tool for work or communication.
Compromise to autonomy: Some users (publishers) may feel pressured to use YouTube over other video platforms (e.g. Vimeo) owing to its market dominance.

Interface: To what extent does direct interaction with the technology (i.e., via the user interface) impact autonomy?
Support for autonomy: "10 seconds back" and "skip ad" buttons allow users more refined control over content. Controls are also provided for adjusting data input to recommendation systems.
Compromise to autonomy: There is no way to skip the beginning of ads (coercive); videos will autoplay (without user consent) unless the setting is turned off (an opt-out).

Tasks: What are the technology-specific tasks? How do they impact on autonomy?
Support for autonomy: Tasks such as subscribing to channels and 'liking' allow users to customise content. Searching provides access to nearly endless content options.
Compromise to autonomy: Deception through clickbait leads to unintended activity; recommender system results limit options, may distort social norms, and may change behaviours online and offline.

Behaviour: How does the technology impact autonomy with respect to the behaviour it supports?
Support for autonomy: YouTube contributes to users' ability to engage in a number of behaviours, for example to educate or entertain themselves. Others are able to share media in order to communicate, work, or engage in a hobby in whole new ways.
Compromise to autonomy: Strategies for increasing user engagement increase the risk of overuse or "addiction". Some "educational" content on YouTube may be deliberately or inadvertently misleading. Users may not be aware of how YouTube uses the media they have uploaded (and what rights they retain).

Life: How does the technology influence the user's experience of autonomy in life overall?
Support for autonomy: Greater opportunities for entertainment, education and work flexibility can have an impact on one's overall life.
Compromise to autonomy: Instances of radicalisation exist. Some videos may promote unhealthy or dangerous behaviours.

Society: To what extent does the technology impact on experiences of autonomy beyond the user and across society?
Support for autonomy: People have more potential to communicate, find like others, and organise. Societal trends are formed and shaped.
Compromise to autonomy: Due to its reach, YouTube videos can influence public opinion and politics, and rapidly spread sources of disinformation.

controlled or supported in acting autonomously (sometimes in parallel), is essential. Pursuit of this understanding must proceed at both universal and context-specific levels, as patterns will exist across many technologies, yet each implementation of AI will also have a unique set of contextual issues specific to it.


Knowledge in these areas will help inform evidence-based strategies for (more ethical) autonomy-supportive design. In sum, we hope the work presented herein can help contribute to a future in which technologies that leverage machine autonomy do so to better support human autonomy.

Acknowledgements and Reported Interests  RAC and DP have received payment for providing training on wellbeing-supportive design to Google. KV was supported by the Leverhulme Centre for the Future of Intelligence, Leverhulme Trust, under Grant RC-2015-067, and by the Digital Charter Fellowship Programme at the Alan Turing Institute, UK. RMR is a co-founder of Immersyve Inc., a motivation consulting and assessment firm.

References

Arvanitis, A., K. Kalliris, and K. Kaminiotis. 2019. Are Defaults Supportive of Autonomy? An Examination of Nudges Under the Lens of Self-Determination Theory. The Social Science Journal. https://doi.org/10.1016/j.soscij.2019.08.003.
Baldassarre, G., T. Stafford, M. Mirolli, P. Redgrave, R.M. Ryan, and A. Barto. 2014. Intrinsic Motivations and Open-Ended Development in Animals, Humans, and Robots: An Overview. Frontiers in Psychology 5: 985. https://doi.org/10.3389/fpsyg.2014.00985.
Beauchamp, T.L., and J.F. Childress. 2013. Principles of Biomedical Ethics. 7th ed. New York: Oxford University Press.
Burr, C., and J. Morley. 2019. Empowerment or Engagement? Digital Health Technologies for Mental Healthcare. May 24, 2019. Available at SSRN: https://ssrn.com/abstract=3393534.
Burr, C., N. Cristianini, and J. Ladyman. 2018. An Analysis of the Interaction Between Intelligent Software Agents and Human Users. Minds and Machines 28 (4): 735–774.
Burr, C., M. Taddeo, and L. Floridi. 2020. The Ethics of Digital Well-Being: A Thematic Review. Science and Engineering Ethics. https://doi.org/10.1007/s11948-020-00175-8.
Calvo, R.A., and D. Peters. 2014. Positive Computing: Technology for Wellbeing and Human Potential. Cambridge, MA: MIT Press.
Calvo, R.A., D. Peters, D. Johnson, and Y. Rogers. 2014. Autonomy in Technology Design. In CHI '14 Extended Abstracts on Human Factors in Computing Systems, 37–40. ACM.
Chatila, R., K. Firth-Butterfield, J.C. Havens, and K. Karachalios. 2017. The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems [Standards]. IEEE Robotics and Automation Magazine 24: 110. https://doi.org/10.1109/MRA.2017.2670225.
Chirkov, V., R.M. Ryan, Y. Kim, and U. Kaplan. 2003. Differentiating Autonomy from Individualism and Independence: A Self-Determination Theory Perspective on Internalization of Cultural Orientations and Well-Being. Journal of Personality and Social Psychology 84 (1): 97–110.
Christman, J., ed. 1989. The Inner Citadel: Essays on Individual Autonomy. New York: Oxford University Press.
———. 2018. Autonomy in Moral and Political Philosophy. In The Stanford Encyclopedia of Philosophy (Spring 2018 Edition), ed. Edward N. Zalta. https://plato.stanford.edu/archives/spr2018/entries/autonomy-moral/.
Costanza, R., B. Fisher, S. Ali, C. Beer, L. Bond, R. Boumans, et al. 2007. Quality of Life: An Approach Integrating Opportunities, Human Needs, and Subjective Well-Being. Ecological Economics 61 (2–3): 267–276.
Desmet, P.M.A., and A.E. Pohlmeyer. 2013. Positive Design: An Introduction to Design for Subjective Well-Being. International Journal of Design 7: 5–19.


Flanagan, M., and H. Nissenbaum. 2014. Values at Play in Digital Games. Cambridge, MA: MIT Press.
Floridi, L., et al. 2018. AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines 28: 689–707. https://link.springer.com/article/10.1007/s11023-018-9482-5.
Frankfurt, H.G. 1971. Freedom of the Will and the Concept of a Person. Journal of Philosophy 68: 5–20.
Friedman, B. 1996. Value-Sensitive Design. Interactions 3: 16–23. https://doi.org/10.1145/242485.242493.
Friedman, M. 2003. Autonomy, Gender, Politics. New York: Oxford University Press.
Friedman, B., and D.G. Hendry. 2019. Value Sensitive Design: Shaping Technology with Moral Imagination. Cambridge, MA: MIT Press.
Gaggioli, A., G. Riva, D. Peters, and R.A. Calvo. 2017. Positive Technology, Computing, and Design: Shaping a Future in Which Technology Promotes Psychological Well-Being. In Emotions and Affect in Human Factors and Human-Computer Interaction, 477–502. https://doi.org/10.1016/B978-0-12-801851-4.00018-5.
Glueck, J. 2019. How to Stop the Abuse of Location Data. The New York Times, October 16, 2019.
Hassenzahl, M. 2010. Experience Design: Technology for All the Right Reasons. Synthesis Lectures on Human-Centered Informatics 3: 1–95. https://doi.org/10.2200/S00261ED1V01Y201003HCI008.
Hekler, E.B., P. Klasnja, J.E. Froehlich, and M.P. Buman. 2013. Mind the Theoretical Gap: Interpreting, Using, and Developing Behavioral Theory in HCI Research. In Proceedings of CHI 2013, 3307–3316. https://doi.org/10.1145/2470654.2466452.
Hill, T. 2013. Kantian Autonomy and Contemporary Ideas of Autonomy. In Kant on Moral Autonomy, ed. Oliver Sensen, 15–31. Cambridge: Cambridge University Press.
Howard, J.L., M. Gagné, and J.S. Bureau. 2017. Testing a Continuum Structure of Self-Determined Motivation: A Meta-Analysis. Psychological Bulletin 143 (12): 1346.
IEEE. 2019. Vision and Mission. https://www.ieee.org/about/vision-mission.html. Accessed 21 Oct 2019.
Ihde, D. 1990. Technology and the Lifeworld: From Garden to Earth. Bloomington: Indiana University Press.
James, W. 1890. The Principles of Psychology, Volumes I and II. 1983 edition. Cambridge, MA: Harvard University Press.
Kahneman, D., E. Diener, and N. Schwarz. 1999. Well-Being: The Foundations of Hedonic Psychology. New York: Russell Sage Foundation. https://doi.org/10.7758/9781610443258.
Kerner, C., and V.A. Goodyear. 2017. The Motivational Impact of Wearable Healthy Lifestyle Technologies: A Self-Determination Perspective on Fitbits with Adolescents. American Journal of Health Education 48 (5): 287–297. https://doi.org/10.1080/19325037.2017.1343161.
Kuss, D.J., and O. Lopez-Fernandez. 2016. Internet Addiction and Problematic Internet Use: A Systematic Review of Clinical Research. World Journal of Psychiatry 6 (1): 143–176. https://doi.org/10.5498/wjp.v6.i1.143.
Lewis, P. 2019. At: https://www.theguardian.com/technology/2017/oct/05/smartphone-addiction-silicon-valley-dystopia. Accessed 5 Sept 2019.
Litalien, D., A.J.S. Morin, M. Gagné, R.J. Vallerand, G.F. Losier, and R.M. Ryan. 2017. Evidence of a Continuum Structure of Academic Self-Determination: A Two-Study Test Using a Bifactor-ESEM Representation of Academic Motivation. Contemporary Educational Psychology 51: 67–82.
Mackenzie, C., and N. Stoljar, eds. 2000. Relational Autonomy: Feminist Perspectives on Autonomy, Agency, and the Social Self. New York: Oxford University Press.
Mill, J.S. 1859/1975. On Liberty, ed. David Spitz. New York: Norton.


Morley, J., and L. Floridi. 2019a. The Limits of Empowerment: How to Reframe the Role of mHealth Tools in the Healthcare Ecosystem. Science and Engineering Ethics: 1–25.
———. 2019b. Enabling Digital Health Companionship Is Better Than Empowerment. The Lancet Digital Health 1 (4): e155–e156.
Peng, W., J.-H. Lin, K.A. Pfeiffer, and B. Winn. 2012. Need Satisfaction Supportive Game Features as Motivational Determinants: An Experimental Study of a Self-Determination Theory Guided Exergame. Media Psychology 15: 175–196. https://doi.org/10.1080/15213269.2012.673850.
Peters, D., R.A. Calvo, and R.M. Ryan. 2018. Designing for Motivation, Engagement and Wellbeing in Digital Experience. Frontiers in Psychology – Human Media Interaction 9: 797.
Pfander, A. 1967. Motive and Motivation. Munich: Barth, 3rd ed., 1963 (1911). Translation in Phenomenology of Willing and Motivation, ed. H. Spiegelberg. Evanston: Northwestern University Press.
Przybylski, A.K., N. Weinstein, R.M. Ryan, and C.S. Rigby. 2009. Having to Versus Wanting to Play: Background and Consequences of Harmonious Versus Obsessive Engagement in Video Games. CyberPsychology & Behavior 12 (5): 485–492. https://doi.org/10.1089/cpb.2009.0083.
Przybylski, A.K., K. Murayama, C.R. Dehaan, and V. Gladwell. 2013. Motivational, Emotional, and Behavioral Correlates of Fear of Missing Out. Computers in Human Behavior. https://doi.org/10.1016/j.chb.2013.02.014.
Ricoeur, P. 1966. Freedom and Nature: The Voluntary and Involuntary. Trans. E.V. Kohák. Evanston: Northwestern University Press.
Rigby, S., and R.M. Ryan. 2011. Glued to Games: How Video Games Draw Us In and Hold Us Spellbound. Santa Barbara: Praeger.
Rubin, B.F. 2018. Google Employees Push Back Against Company's Pentagon Work. CNET, April 4, 2018. https://www.cnet.com/news/google-employees-push-back-against-companys-pentagon-work/. Accessed 6 Sept 2019.
Ryan, R.M. 1982. Control and Information in the Intrapersonal Sphere: An Extension of Cognitive Evaluation Theory. Journal of Personality and Social Psychology 43 (3): 450.
Ryan, R.M., and E.L. Deci. 2000. Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being. The American Psychologist 55: 68–78. https://doi.org/10.1037/0003-066X.55.1.68.
———. 2017. Self-Determination Theory: Basic Psychological Needs in Motivation, Development, and Wellness. New York: Guilford Press.
Ryan, R.M., C.S. Rigby, and A. Przybylski. 2006. The Motivational Pull of Video Games: A Self-Determination Theory Approach. Motivation and Emotion 30: 344. https://doi.org/10.1007/s11031-006-9051-8.
Schwab, K. 2017. Nest Founder: "I Wake Up in Cold Sweats Thinking, What Did We Bring to the World?" Fast Company, July 7, 2017. https://www.fastcompany.com/90132364/nest-founder-i-wake-up-in-cold-sweats-thinking-what-did-we-bring-to-the-world. Accessed 6 Sept 2019.
Seligman, M. 2018. PERMA and the Building Blocks of Well-Being. Journal of Positive Psychology. https://doi.org/10.1080/17439760.2018.1437466.
Soenens, B., M. Vansteenkiste, W. Lens, K. Luyckx, L. Goossens, W. Beyers, and R.M. Ryan. 2007. Conceptualizing Parental Autonomy Support: Promoting Independence Versus Promoting Volitional Functioning. Developmental Psychology 43 (3): 633–646. https://doi.org/10.1037/0012-1649.43.3.633.
Techfestival. 2017. The Copenhagen Letter. Copenhagen: Techfestival. https://copenhagenletter.org. Accessed 13 Oct 2019.
Vansteenkiste, M., R.M. Ryan, and B. Soenens. 2019. Basic Psychological Need Theory: Advancements, Critical Themes, and Future Directions. Motivation and Emotion, Advance Online Publication.
Verbeek, P.P. 2011. Moralizing Technology: Understanding and Designing the Morality of Things. Chicago: University of Chicago Press.


Winkelman, S. 2018. The Best Apps for Limiting Your Screen Time. Digital Trends, January 6, 2018. https://www.digitaltrends.com/mobile/best-apps-for-limiting-your-screen-time/. Accessed 6 Sept 2019.
Wu, T. 2017. The Attention Merchants: The Epic Scramble to Get Inside Our Heads. New York: Alfred A. Knopf.
Yu, S., C. Levesque-Bristol, and Y. Maeda. 2018. General Need for Autonomy and Subjective Well-Being: A Meta-Analysis of Studies in the US and East Asia. Journal of Happiness Studies 19 (6): 1863–1882.
Zuboff, S. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books.

Rafael A. Calvo, PhD, is Professor and Chair of Design Engineering at Imperial College London and at the University of Sydney. From 2015 to 2019 Calvo was a Future Fellow of the Australian Research Council, studying the design of well-being-supportive technology. He is Co-lead of the Leverhulme Centre for the Future of Intelligence. He is the recipient of five teaching awards and has published four books and over 200 articles in the fields of HCI, well-being-supportive design, learning technologies, affective computing and computational intelligence. His books include Positive Computing: Technology for Wellbeing and Human Potential (MIT Press) and the Oxford Handbook of Affective Computing. He has worked globally at universities, high schools and professional training institutions, including the Language Technology Institute at Carnegie Mellon University and Universidad Nacional de Rosario, and on sabbaticals at the University of Cambridge and the University of Memphis. Rafael has also worked as a technology consultant for projects in the USA, Brazil, Argentina and Australia. He is Co-editor of the IEEE Transactions on Technology and Society, Associate Editor for the Journal of Medical Internet Research, and former Associate Editor of the IEEE Transactions on Learning Technologies and IEEE Transactions on Affective Computing. He has a PhD in Artificial Intelligence applied to automatic document classification. His current interests focus on how to design for well-being and human values in the context of health and educational technologies.
Research Interests: Design Engineering, Human-Computer Interaction, Motivation, Well-Being and Basic Psychological Needs. [email protected]

Dorian Peters is a designer, design researcher and author who specialises in design for well-being and digital ethics in practice. She works at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and as a Research Associate at Imperial College London. Her books include Positive Computing: Technology for Wellbeing and Human Potential (MIT Press) and Interface Design for Learning: Design Strategies for Learning Experiences (New Riders). With over 20 years’ experience in technology design, she works with engineers, social scientists and users to co-create human-centred and research-driven technologies in ways that respect psychological needs. She has designed for educational, non-profit and corporate institutions including Carnegie Mellon, University of Cambridge, Movember Foundation, Asthma Australia, Sony Music and Phillips. Research Interests: Wellbeing-Supportive Design, Value-Sensitive Design, Participatory Design, Technology Design Ethics and Human-Computer Interaction. [email protected]


Karina Vold is a Philosopher of Cognitive Science and Artificial Intelligence. She is a Postdoctoral Research Associate with the Leverhulme Centre for the Future of Intelligence, a Research Fellow with the Faculty of Philosophy at the University of Cambridge and a Digital Charter Fellow with the Alan Turing Institute, London, UK. In 2020 she will begin as an Assistant Professor in the Institute for the History and Philosophy of Science and Technology at the University of Toronto. She received her BA in Philosophy and Political Science from the University of Toronto in 2011 and her PhD in Philosophy from McGill University in 2017. Vold’s current research explores philosophical and ethical issues related to data-driven technologies, including machine learning techniques, with a particular focus on the cognitive impacts these technologies have on humans. Research Interests: Philosophy of Cognitive Science, Philosophy of Artificial Intelligence, Philosophy of Mind, Data Ethics, Human-Computer Interaction, Cognitive Enhancement, AI Ethics and Neuroethics. [email protected]

Richard M. Ryan, PhD, is a Professor at the Institute for Positive Psychology and Education at the Australian Catholic University, North Sydney, and Professor Emeritus at the University of Rochester. He is a clinical psychologist and co-developer of Self-Determination Theory, an internationally recognised leading theory of human motivation. He is also Co-founder and Chief Scientist at Immersyve Inc. Ryan is among the most cited researchers in psychology and social sciences today and the author of over 450 papers and books in the areas of human motivation and well-being. He lectures frequently around the world on the factors that promote motivation and healthy psychological and behavioural functioning, applied to such areas as work and organisations, education, health, sports and exercise, video games, and virtual environments. Reflective of Ryan's influence internationally and across disciplines, he has been recognised as one of the eminent psychologists of the modern era and honoured with many distinguished career awards. His current research interests go in many directions, all of which relate to Self-Determination Theory. He has considerable interest in the neurological mechanisms that underlie autonomy and intrinsic motivation. He is also interested in higher levels of analysis, such as political and economic influences on motivational processes. In between, he is focused on the impact of situational, interpersonal and technology-driven effects on motivation and wellness.
Research Interests: Self-Determination Theory, Motivation, Well-Being, Eudaimonia and Basic Psychological Needs. [email protected]

Open Access  This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made. The images or other third party material in this chapter are included in the chapter’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Chapter 3

Corporatised Identities ≠ Digital Identities: Algorithmic Filtering on Social Media and the Commercialisation of Presentations of Self

Charlie Harry Smith

Abstract  Goffman’s (The presentation of self in everyday life. Anchor Books, 1959) dramaturgical identity theory requires modification when theorising about presentations of self on social media. This chapter contributes to these efforts, refining a conception of digital identities by differentiating them from ‘corporatised identities’. Armed with this new distinction, I ultimately argue that social media platforms’ production of corporatised identities undermines their users’ autonomy and digital well-being. This follows from the disentanglement of several commonly conflated concepts. Firstly, I distinguish two kinds of presentation of self that I collectively refer to as ‘expressions of digital identity’. These digital performances (boyd, Youth, identity, and digital media. MIT Press, Cambridge, MA, 2007) and digital artefacts (Hogan, Bull Sci Technol Soc 30(6): 377–386, 2010) are distinct, but often confused. Secondly, I contend this confusion results in the subsequent conflation of corporatised identities – poor approximations of actual digital identities, inferred and extrapolated by algorithms from individuals’ expressions of digital identity – with digital identities proper. Finally, and to demonstrate the normative implications of these clarifications, I utilise MacKenzie’s (Autonomy, oppression, and gender. Oxford University Press, Oxford, 2014; Women’s Stud Int Forum 72:144–151, 2019) interpretation of relational autonomy to propose that designing social media sites around the production of corporatised identities, at the expense of encouraging genuine performances of digital identities, has undermined multiple dimensions of this vital liberal value. In particular, the pluralistic range of authentic preferences that should structure flourishing human lives are being flattened and replaced by commercial, consumerist preferences. 
For these reasons, amongst others, I contend that digital identities should once again come to drive individuals' actions on social media sites. Only upon doing so can individuals' autonomy, and control over their digital identities, be rendered compatible with social media.

C. H. Smith (*)
Oxford Internet Institute, University of Oxford, Oxford, UK
e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020
C. Burr, L. Floridi (eds.), Ethics of Digital Well-Being, Philosophical Studies Series 140, https://doi.org/10.1007/978-3-030-50585-1_3



Keywords  Goffman · Identity · Relational autonomy · Performance · Digital artifact · Social media

3.1  Introduction1

As the online/offline distinction has blurred, digital identities have become part of daily life (Floridi 2011a, p. 477; Hongladarom 2011, p. 534), drawing critical attention to the "construction of personal identities in the infosphere" (Floridi 2011b, p. 550). Indeed, most liberal democratic citizens now remain constantly connected to the internet through their smartphones, continually contributing to and maintaining various online personas. Although not the only sites where such digital identities play an important role, they are perhaps most closely associated with social media platforms (Ellison and boyd 2007, p. 210) – sites like Facebook, Instagram, YouTube and Twitter. Accordingly, this chapter's primary aim is one of precision: to refine a conception of digital identities and clarify their role(s) on modern social media sites. This objective is complicated by the various usages of the term 'digital identity' in the literature,2 so, to be clear, I will utilise a performative account of digital identity, following in Goffman's (1959) 'dramaturgical' footsteps, to consider how social media can affect individuals' presentations of their identities online. This requires considering digital identities in a highly subjective and personal manner: as tapestries of intersubjective experiences, woven from ongoing presentations of self. Such identities are therefore understood to be inherently unique and always fluctuating, changing and reacting as they are continually performed, re-negotiated and re-presented to numerous audiences. This approach allows me to distinguish what I will argue are digital identities 'proper' from the substandard approximations of digital identities – which I term 'corporatised identities' – that social media companies covertly and algorithmically produce once they have identified individuals.
Although often confused with one another, these two kinds of identity are distinct, and I will suggest that their conflation obscures fundamental, important questions surrounding the self and personal identities as they are continuously performed and constructed online. However, understanding how and why corporatised identities are being conflated with digital identities begins with recognising the differences between digital performances (boyd 2007) and digital artefacts (Hogan 2010) which, together, I term 'expressions of digital identity'. Whilst these interrelated elements both express individuals' identities and can be understood to be "presentations of self"

1. My thanks to Kai Spiekermann, Christopher Burr, and Laura Valentini for feedback on earlier drafts of this chapter. I am also grateful for Jaimie-Lee Freeman's advice regarding conceptions of digital well-being.
2. For instance, 'digital identities' can refer to merely technical tools for identification (Whitley et al. 2014, p. 18).

3  Corporatised Identities ≠ Digital Identities: Algorithmic Filtering on Social Media…


(boyd 2007, p. 128), they do not contribute equally to individuals’ digital identities. Consequently, in the next two sections, I elucidate the differences between all these various related and often confused presentations of self (Sect. 3.2) and types of identities (Sect. 3.3). This chapter’s key theoretical contribution is thus the disentanglement of these concepts.

This conceptual work offers real value to the political theorist. To illustrate, in the fourth and final section, I first draw on MacKenzie’s (2014, 2019) relational account of autonomy to argue that designing social media sites around the algorithmic production of corporatised identities, rather than encouraging the performance of digital identities, has undermined multiple dimensions of this vital liberal value, including self-governance, self-determination, and self-authorisation. Then, I utilise the eudaimonic account of well-being to frame this as a harm to individual functioning (Devine et al. 2008).3 In particular, I argue that the pluralistic range of authentic values and preferences that should structure flourishing human lives are being flattened and replaced by commercially-motivated, consumerist preferences. It is for these reasons, amongst others, that I contend digital identities should come once more to drive individuals’ actions on social media sites.

In summary, this chapter has two main aims. Firstly, to reframe the debate surrounding the social media industry in terms of its negative effects on digital identities, not mere expressions of digital identity or the corporatised identities inferred from those expressions. And, secondly, from this new vantage point, to argue that (relational) autonomy, particularly over digital identity formation, is currently being impaired by social media companies and the algorithms that drive their systems.
The conclusion then readily follows that the continued production of corporatised identities will be deeply damaging for individuals on a eudaimonic account of digital well-being.

3.2  Expressions of Digital Identity

3.2.1  Digital Performances

Preliminary examples of digital performances include posts, status updates, photos, likes, livestreams, tweets and retweets, as well as purchasing choices, ad clicks and many more interactions. Unsurprisingly, these kinds of expressions of digital identity bear strong resemblance to Goffman’s (1959) notion of a performance in the analogue world, whereby social interactions are metaphorically understood to be theatrical performances delivered to an audience.4 For Goffman, performances

3 The eudaimonic approach to well-being emphasises the individual’s flourishing and functioning, standing in contrast to hedonic approaches that focus on their subjective happiness alone (Ryan and Deci 2001).
4 On social media audiences are often “imagined” as it is frequently impossible to know who will witness an individual’s performances (Marwick and boyd 2011, p. 115).


amount to “all the activity of an individual which occurs during a period marked by his continuous presence before a particular set of observers and which has some influence on the observers” (Goffman 1959, p. 13). Such performances essentially aim to impress a particular identity upon an audience, and individuals then work to maintain that impression over time and ensure that their performances will be acknowledged as genuine (Goffman 1959, p. 10). Identity is therefore inherently social both because it is relational by nature (between actor and audience) and because performances are continually being “negotiated” with the audience (as those involved figure out how they relate to one another and try to shape how they are treated) (Phillips 2009, p. 304).

Importantly, however, individuals regularly perform many different identities depending on the situation, with various identities (or combinations of identities) being more or less salient in a given context (Davis 2016, p. 139). Individuals can thus be understood to wear different “masks” in varying situations that emphasise different aspects of their multiple identities (Bullingham and Vasconcelos 2013, p. 101). And, in much the same way, when performing online, individuals attempt to do the same (boyd and Heer 2006, p. 4).

Asserting a digital identity also involves the performance of an idealised identity to an audience (Marwick and boyd 2011, p. 114). When it comes to self-presentation, social media users personalise their profile pages, choose photos and write pithy ‘bios’ that describe themselves, and generally attempt to present a coherent identity (Marwick and boyd 2011, p. 115). Furthermore, the content they actually perform takes specific forms (e.g. livestreaming, memes, etc.) that will be appropriate for the identity they are adopting.
Online, this is particularly noticeable when considering the numerous personae that individuals create and maintain on different platforms – the sanitised performances most people give on LinkedIn, for example, are likely to be completely different to those in a WhatsApp chat with close friends. All of this performative work results in a unique impression of a digital identity being formed with and for the audience in question. Although mediated by technological means, digital identities thus have much in common with their analogue counterparts: they are the result of ongoing social processes of negotiation that stem from a struggle to present and maintain a particular impression of oneself in the eyes of another. Each identity is thus highly individualised, distinct and contextual, made up of the entirety of the relevant online interactions that have occurred between a given set of individuals up until that point.5 They are, in other words, far more than the sum of their digital parts, a social impression that results from concerted efforts to present a particular version of the self. Nonetheless, a corollary of digital identities constantly evolving is that attending to each distinct performance in isolation – looking at a single tweet or status update – only provides part of the fuller picture. Each is only ever one expression of a salient identity (or several identities) in that specific context; one quarantined scene that

5 This thumbnail sketch of a digital identity should suffice until the concept is more thoroughly fleshed out in §3.3.


constitutes only a slice of a larger part that is being performed over time, and which responds to audience reactions (Goffman 1959, p. 5). “Performances”, as Schieffelin argues (1998, p. 198), “are a living social activity […] While they refer to the past and plunge towards the future, they exist only in the present”. Identities are thus always being performed more or less effectively, with a fuller picture of each digital identity only emerging through repeated successful performances. This means that whilst a particular performance might therefore have contributed effectively towards an identity when first performed, it can easily fail to continue doing so when removed from its original situation, altering the meaning of that performance and its relevance to the individual. In the context of modern social media, this is not unproblematic, but to see why we must consider the other half of expressions of identity.

3.2.2  Digital Artefacts

Goffman chiefly investigated social interactions that occurred when “in one another’s immediate physical presence” (Goffman 1959, p. 8), so his work needs modification to remain relevant for internet communications. Largely, this requires acknowledging that the digital traces of performances which are recorded by social media platforms actually fail to meet Goffman’s criteria for a performance (Hogan 2010, p. 377). Whilst these digital remnants of a performance are still undeniably “a form of presentation of self” (Hogan 2010, p. 377), once later accessed and processed it would be inaccurate to call them performances. Instead, they are better understood (metaphorically speaking) as online “exhibitions” made up of stored performances that are recalled and re-presented to audiences as required – a new kind of expression of digital identity that Hogan terms “artifacts”6 (Hogan 2010, p. 381). Thus, although similar, digital performances and digital artefacts are not in fact identical.

This terminological change is primarily required for two reasons. Firstly, stored artefactual performances are endlessly re-presented by social media companies to different audiences, not just the audience to which the performance was originally delivered, and, secondly, this re-presentation is done by algorithmic “curators” that make complex and opaque decisions when selecting what to exhibit (Hogan 2010, p. 380). For instance, the audience that watches a videogaming livestream in the moment are no doubt exposed to a full-on performance, but if a recording is made available to view a year later, it is likely that a completely different audience is being shown that artefact, selected by an algorithm trying to maximise engagement7 (Hogan 2010, p. 381).

6 Hogan uses the American spelling, but I have opted for the British ‘artefacts’.
7 E.g. Facebook’s ‘memories’: curated content from an individual’s past which generated the most engagement at the time is selectively re-presented to them today to share to new audiences. For more, see: (Hod 2018).


Exhibitions are therefore not made up of true digital performances but assembled from the digital artefacts of what used to be performances: the digital traces of an individual’s identity as it was at a particular moment. Consequently, whilst earlier researchers recognised that without “live” updates social media profiles became “frozen performances” and “outdated representations of the self” (boyd and Heer 2006, p. 9), Hogan successfully formulates the need to move beyond performances altogether and extricates the similar, but distinct, concept of a digital artefact. Reconsidering the preliminary examples of performances that began our discussion, we can now see that many decay in just this way.

Still, in my taxonomy, both digital artefacts and digital performances are merely expressions of digital identities which, when taken together, are not equivalent to digital identities themselves (see Fig. 3.1). This is because digital identities emerge from digital performances but are not reducible to them. Nonetheless, when considering social media and its normative issues, analysis often focuses on these expressions of digital identity, taken in part or as a whole, as these are easy to access and link back to an identified individual.8 After all, it is now trivial for social media companies to harvest and store vast quantities of digital data. But, unfortunately, much of the analysis also ends here, conflating digital performances and digital artefacts despite them not being equally relevant to digital identities.

Fig. 3.1  A schematic delineating an individual’s various kinds of identities, and their relationships to different presentations of self, both online and offline

8 Consider, for example, studies charting racism on Twitter (Chaudhry 2015) or fake news on Facebook (Bakir and McStay 2018) that focus on expressions of digital identity rather than individuals’ actual digital identities.


3.3  From Expressions to Identities

To maximise engagement, and therefore advertising revenues, social media sites strive to show individuals content that ‘people like them’ have interacted with (Wu 2017) – i.e. to identify their digital identity markers (Cheney-Lippold 2011, p. 165). To do so, machine learning algorithms are used to infer anything from an individual’s “sensitive attributes (e.g., race, gender, sexual orientation)” to their interests and “opinions (e.g., political stances)” from the data they post and consume9 (Wachter and Mittelstadt 2019, p. 4). These psychometric categorisations allow advertisements (and content) to be targeted at specific ‘audiences’ (Bay 2018, p. 1723) – the same process by which Cambridge Analytica categorised possible voters into “universes” and social marketeers sort potential customers into “buckets” (Bartlett 2018, p. 83).

Does this categorisation, however, amount to individuals being assigned a digital identity based on their digital performances? I think not. Indeed, I believe that conflating digital artefacts and digital performances has led to an equivocation between digital identities and corporatised identities – a term for the audiences, buckets and universes used to wrangle economically-exploitable categories out of the masses of data generated by social media usage. These corporate amalgamations of digital identities are not digital identities proper (Fig. 3.2) and, as I will demonstrate, prioritising their production at the expense of digital identities themselves results in commercial values being forcefully impressed upon individuals. The term ‘corporatised’ is thus fitting, as it means to make something “corporate by introducing or imposing the structures,

Fig. 3.2  A revised schematic accounting for an individual’s ‘corporatised identities’

9 As these conclusions often cannot be understood by humans such algorithms are often described as being ‘opaque’ (Villaronga et al. 2018, p. 308).


practices, or values associated with a large business corporation; to commercialize” and, hence, “to deprive” that thing “of independence or individual character” (OED Online 2006). Bearing this definition in mind, I will argue below that the unique, fluctuating digital identities that individuals seek to create and maintain over social media are being corrupted by the production of corporatised identities that identify individuals for advertising and tracking purposes. Indeed, as it will transpire, it is damaging to confuse this forensic, corporate process of individuation and identity assignment with the fluid and social construction of digital identities, as production of the former is harmful to individuals’ digital well-being in its current form.

3.3.1  Corporatised Identities

Corporatised identities are no more than commercially useful extrapolations inferred from the deluge of expressions of digital identity recorded by social media companies. Whilst such “social sorting” (Lyon 2014, p. 10) no doubt captures, perhaps quite accurately, elements of a performed identity, it therefore cannot ever hope to emulate or equal an individual’s ever-changing digital identities. There are at least four reasons for this.

Firstly, algorithmically generating corporatised identities relies on analysing many different individuals’ expressions of digital identity together (de Vries 2010, p. 77; Manders-Huits 2010, p. 45), comparing them to draw useful boundaries around similar groups (Wachter and Mittelstadt 2019, p. 13). Doing so allows social media companies to “draw non-intuitive and unverifiable inferences and predictions about the behaviors, preferences, and private lives of individuals” (Wachter and Mittelstadt 2019, p. 4). Numerous individuals’ expressions of digital identity are thus constantly being processed into discrete, machine-readable categories. At most, this means that what Goffman would term ‘group’ or ‘role’ identities might be understood by social media companies, inferred from common features of a population, but not an individual’s digital identities. Indeed, this is inevitable, as individuals are only ever members of a category for their algorithms – points in a dataset at a relative level of abstraction rather than persons (Cheney-Lippold 2011, p. 176).

Secondly, because new data are always being gathered, the algorithms continually redefine and refine the boundaries of corporatised identities (O’Neill 2016), tweaking them to improve their effectiveness10 (de Vries 2010, p. 81). Who I ‘am’ (which categories describe me) is therefore in a constant state of flux, not only due to my actions, but also due to how the algorithm understands those actions to relate to others’.
Whilst I may be statistically identified as ‘male’ today, tomorrow I might be

10 This distinguishes corporatised identities from the idealised identities that shape individuals’ performances. The former are transient, with their boundaries constantly being re-drawn, whilst the latter are socially-determined and so somewhat fixed by the possibilities of a particular society.


recategorised as ‘female’ based on new performances (Cheney-Lippold 2011, p. 173). When this happens, however, the entire advertising infrastructure shifts to treat me completely differently, targeting me with different adverts and content, destabilising the contexts in which I act online. This means that algorithms now partially co-create and “supplement existing discursive understandings of a category” (Cheney-Lippold 2011, p. 173). Gender classifications are just one example, but they illustrate that fickle, group-level distinctions define corporatised identities, not anything approaching the granularity of an individual’s actual performed identity. In contrast, as we have already seen from Goffman, individuals generally try to consistently perform coherent identities across situations, doing intersubjective work to ensure their identities are being correctly understood. This clearly conflicts with the continuous modulation of categories and the pursuit of economically effective, but not necessarily accurate, categorisations.

The third difference stems from the commercial motivations behind this process. Individuals are categorised to show them content that similar users have found engaging (de Vries 2010, p. 77), but also to make decisions “about how best to predict, persuade, and ultimately control the behaviour of the user” (Burr and Cristianini 2019, p. 463). The end goal of this process, after all, is converting attention into profit through various monetisation techniques11 (Wu 2017; Zuboff 2015). So, despite rhetorical protestations to the contrary, social media companies are not audiences seeking to truly interact with the performing individual, despite the “power of performativity” turning “crucially on its interactive edge” and the “relationship” between performer and audience (Schieffelin 1998, p. 200).
Rather, leveraging their ubiquity as platforms, these companies are inserting themselves between genuinely interacting individuals as parasitic accessories to their performances in order to monetise their interactions. Accordingly, social media companies are agnostic about digital identities tout court, because only comparable, machine-readable identity characteristics need to be inferred from individuals’ expressions of digital identity. Indeed, only those inferences that might be economically-exploitable are relevant for rendering their vast stores of raw data useful (Cheney-Lippold 2011, p. 170). Elements of identities that do not serve this purpose or cannot be inferred from expressions of digital identity are therefore discarded or computationally unobservable, leaving companies with limited, but commercially useful, shadows of a potential consumer’s identities.

Fourth, and finally, this means that digital performances and artefacts are wilfully conflated and mined in search of any exploitable insights. My past is, after all, as exploitable as who I now am and wish to be. Beyond the conflation of self and other we have discussed, past and present, performance and artefact, and all an individual’s multiple identities, too, are bundled together and mined for commercial predictions. As such, whilst corporatised identities reflect salient, generalisable features of the expressions of digital identity that linger in the network and can be

11 International markets, not concerns for “data justice” or ethics, therefore drive this process (Taylor 2017, p. 3).


computationally modelled (itself a large caveat) (Hildebrandt 2019, p. 92), due to their indiscriminate and inferential construction they will never be equivalent in terms of detail, scope or contemporaneity to any true digital identity. No genuine consideration of self – no relational, negotiated social interaction – occurs, in favour of inferences derived from collated and often outdated expressions of self.

To appreciate this, recall the imagery of artefacts and exhibitions. Evidently, an archaeologist cannot ‘know’ the identity of an individual they find buried, beyond perhaps in the simple sense of identification, because they never had a social relationship with them. At best they might be able to infer some educated guesses from artefacts left behind (e.g. diaries). But, similarly, the algorithms utilised by social media companies can do little better, if at all, despite such guesses being how these companies claim to ‘know’ their users. In reality, both archaeologist and algorithm can at best only speculate, as they are removed from their objects of interest – there are no negotiated performances occurring between them.

Furthermore, whilst algorithms do not only consider digital artefacts – they can also access ongoing performances – the former are almost certainly given more weight in their categorisations than a human might. Whilst human memories are relatively undependable, and analogue performances can fade over time, data storage is cheap and reliable, so digital performances are rarely, if ever, ‘forgotten’12 (Manders-Huits 2010, p. 52). But this means that artefacts can easily outweigh the most fleeting performances an individual is currently giving. A recently reformed smoker might thus continue to be pigeon-holed by adverts that seek to sell a product aimed at who they were but no longer wish to be.

3.3.2  Initial Issues with Corporatised Identities

Corporatised identities, then, are clearly not digital identities. Nonetheless, like many similar “flawed models” (Bridle 2018, p. 34), corporatised identities can exert a potentially problematic influence on the reality they are only meant to be modelling (Hildebrandt 2019, pp. 91–92). In practice this means that, once assigned a corporatised identity, individuals are repeatedly shown content matched to their ‘type’. A cycle of reinforcement therefore proceeds (Elmer 2003), as individuals are shown digital artefacts nudging them to act in ways which re-confirm the original categorisation, in turn deepening the algorithm’s confidence in a correct classification13 (Lessig 1999, p. 154). Unsurprisingly, this then feeds back into the identity formulation process and individuals’ future performances (Burr et al. 2018; de Vries 2010). As Lanier (1995, p. 67) colourfully puts it, the result is that individuals are

12 Unless legally obligated to ‘forget’ by, e.g., Article 17 of the European General Data Protection Regulation. For more, see: (Politou et al. 2018; Villaronga et al. 2018).
13 Success is achieved when a “categorization fits the behavior of a user”, without regard for whether a user actually “embodies that category” (Cheney-Lippold 2011, p. 179).


reduced to a “cartoon model” of their purported interests, and one that is self-reinforcing.14 This cannot authentically be described as mapping an identity, though, but at least partially defining it – shaping fluid human identities to match rigid categories that algorithms can compute, rather than reflecting the complexities of identities that actually exist ‘out there’ in the social world.

However, at this point a sceptical reader might respond that, in everyday life, we are also always sorting people into various groups, getting those classifications wrong and recategorising people – and that a central part of identity construction involves negotiating with our peers where we stand in their respective social categories. Are social media companies’ categorisations not just further examples of this?

This, I believe, is a misunderstanding – social media companies’ classifications are markedly different. Ordinarily, the categorisation (or stereotyping, pigeon-holing, and so on) that individuals do to one another is multifarious; varied beliefs and biases mean everyone treats each other in slightly different ways. This results in exposure to a valuable plurality of viewpoints, against which individuals can examine their own lives and identity trajectories (see, e.g., Muldoon 2015, pp. 182–184) – empowering them to figure out who they want to be, which identities to perform and how they want to be seen by others. It supports, in other words, Mill’s “experiments of living” (Mill 1984, p. 115), contributing directly towards their well-being. And, far from being undesirable, this helps individuals to grow and explore new identities, contributing to a flourishing pool of potential lifestyles in liberal society.15

Online, however, things are different. As we have seen, the construction methods of corporatised identities ensure that the only categorisations made by social media companies are those which ultimately serve a narrowly single-minded and economic impetus.
This is radically unlike the diverse stereotyping and categorising that friends, colleagues and strangers do. Indeed, appreciating this “strips away the illusion that the networked form has some kind of indigenous moral content” to reveal the socially-parasitic, commercial impetus behind social media’s supposedly-social design, and shows that individuals trying to authentically socially interact are actually being used as the “brazen means to others’ commercial ends” (Zuboff 2019a, p. 19). I will return to the normative implications of this shortly.

3.3.3  Digital Identities

Nonetheless, having distinguished corporatised identities, we can now better ascertain the nature of digital identities themselves. An immediate and major difference is that digital identities, much like their analogue equivalents, are intensely personal

14 My thanks to Christopher Burr for drawing my attention to this formulation of the issue.
15 I gesture here to the rich liberal literature on value pluralism. For more, see e.g. (Crowder 1998; Galston 1999).


and so largely defined by their uniqueness. Whilst corporatised identities attend to similarity and sameness at the level of categories, digital identities are characterised by their distinctive individuality at the level of persons; no two digital identities will likely ever be the same in either aspiration or performance. Ontologically, this follows from the “informational nature” of identities, according to which digital identities are comprised of the rich perceptions (or narratives) generated by an individual’s actions (Floridi 2011b, p. 556), far beyond those data required for identification and monetary exploitation. Indeed, it is through individuals experiencing and remembering digital social interactions that they perceive others’ complex identities. Since two individuals’ minds will never possess all the same information, they will never perform identical identities, as the manifold diversity of previous experiences (stored as memories) will frame and condition their future interactions. As such, two distinct individuals will always interact in unique ways and perform subtly different identities, whether online or offline.

Another vital difference is that, both online and offline, the motivation for identity-relevant action stems from the desire to perform a particular idealised identity successfully. Evidently digital identities, not corporatised identities, drive this process online, not least because individuals usually lack direct epistemic access to the corporatised identities that companies extrapolate. Although moving performances online therefore changes the ways in which successful negotiation might be achieved, and introduces new mechanisms and techniques for identity management, the fundamental aim – of successful performance – remains consistent. This aim, crucially, is not one shared by corporatised identities and the social media companies that create them for an identification and governance agenda.
As we have seen, the creation of corporatised identities is principally commercially motivated. As such, whilst digital identities primarily obtain between persons, existing as shared constructions regardless of the medium by which they are performed, corporatised identities exist on mainframes, belonging to the companies that generate them. The former are irreducibly social; negotiated, ongoing, and reciprocal. By contrast, the latter are one-sided, privileged and readily exploited by one party. Even if companies partially construct the performance environment, conditioning individuals’ performances, corporatised identities are not a shared endeavour but a powerful tool for behavioural manipulation.16 Similarly, the importance of digital performances to digital identities, rather than digital artefacts, cannot be stressed enough. Recalling how misguided it would be to suggest that an archaeologist ‘knows’ a corpse, the centrality of intersubjective online experience to digital identities should now be clear. A performance’s ephemeral nature stems directly from this intersubjectivity, as without the possibility of

16 Facebook’s emotional contagion experiments, for instance, explored their ability to influence individuals’ mental states without their knowledge (Kramer et al. 2014). For further discussion of behavioural manipulation in the context of digital well-being, see Klenk (Chap. 4, this collection). In particular, Klenk’s assertion that such manipulation can deny individuals both autonomy and control over valuable aspects of their lives chimes well with the arguments I later advance in §3.


reacting and adapting to an audience, what is actually being created is a digital artefact, because performances “exist only in the present”17 (Schieffelin 1998, p. 198). Identities are fluid and, although somewhat fixed idealisations shape how individuals act, they are not fixed in the manner which corporatised identities require to enable artificial comprehension (Hildebrandt 2019, p. 91). This conflicts with the desire to categorise that provokes social media companies’ constructions of corporatised identities in the first place. And, even though the boundaries of corporatised identities are constantly changing, this does not emulate the fluidity that runs through an individual’s intentional, shifting performances of an identity. Algorithmic redefinition is a disjointed exercise in reactive speculation, not one of coherent but evolving presentation.

Credence is lent to this claim of fluidity, or aversion to identity-fixing, by appeal to psychological studies. For instance, being confronted with artefacts that individuals do not currently identify with, even if those prior performances were integral to their identity at the time, can generate significant discomfort (Tian 2017, p. 204). Such “mismatched expectations” surrounding who individuals believe themselves to be can lead, in particular, to embarrassment and anxiety (Tian 2017, p. 205). As I have shown, this is because identities are ongoing and multifaceted constructions, constantly changing and adapting. Fixing snapshots of identities in social media networks therefore creates the potential for conflicts between currently lived identities and the fossilised remains of identities that have been left behind.

This leads to a final key insight regarding the nature of digital identities. As a “continuous” process of reflection, identity work consists in “evaluating and identifying with one’s attributed identifications” (Manders-Huits 2010, p. 50, emphasis mine).
Individuals, in other words, must feel that the identities they are performing are truly ‘theirs’. In Paul Ricoeur’s terminology, this feeling links to an individual’s ipse identities, those which they first-personally experience and recognise as their “unique selfhood” (de Vries 2010, p. 74). People clearly identify with their digital identities, given their continued attempts to present them, but would not necessarily feel the same about a corporatised one – indeed, as with any other group categorisation, people often find it alienating, arresting or uncomfortable to see how they have been (mis)characterised (de Vries 2010, p. 81; Newman and Newman 2001, p. 526). This arises from indignation: a feeling that ‘that’s not who I am’. Bearing all of this in mind, I therefore tentatively submit that digital identities should once again be allowed to underpin individuals’ online performances on social media if we are to avoid these issues. Indeed, before corporatised identities were created, digital identities surely did fulfil this role: individuals intersubjectively performed their identities, utilising the internet to explore them in new ways. With the increasing production of corporatised identities, though, individuals’ digital identities – those they lived and felt to be their own – have been systematically displaced. Instead, as platforms have realised their potential monetary value, they

17  That said, prior performances “inform present ones” (Davis 2014, p. 514), so there is some continuity of identity.
68

C. H. Smith

have begun building and exploiting corporatised identities at the expense of digital identities. This development, however, has damaging corollary effects: namely, reducing individuals’ autonomy over their own identities and how they are presented to others. And, having already touched on these issues, we are now better equipped to explore them in detail.

3.4  Relational Autonomy and the Harms of Corporatised Identities

Clearly, as corporatised identities are algorithmically generated behind closed doors, individuals cannot straightforwardly exercise autonomy over these identities. However, at the same time, corporatised identities readily condition individuals’ social contexts, enabling social media companies to exert influence over their digital well-being. This primarily occurs through the filtering of information, as these companies control the internet’s advertising and social infrastructures – a fact that is potentially worrying because, as individuals, “we interact, flourish, or suffer depending on the flows of information in which we partake” (Floridi 2019, p. 379). But, again, because all this filtering is oriented towards maximising engagement, ad clicks and revenue, social media sites have been redesigned to be increasingly addictive (Andreassen 2015). After all, for these companies we only “appear as statistical objects of study, abstracted from our personal preferences and life plans, and from our individual capacities and freedom to choose” (Manders-Huits 2010, p. 45). In other words, a substantial problem with producing corporatised identities is that individuals are not treated as autonomous agents – they are not afforded adequate moral concern for their capacity to choose. This has significant implications for eudaimonic approaches to well-being, because possessing sufficient autonomy is vital (Deci and Ryan 2008, p. 6). Indeed, autonomy is of central importance to all conceptions of the “fully functioning person” that defines eudaimonic well-being (Ryan and Deci 2001, p. 161).18 Harm to an individual’s digital well-being is therefore an inevitable outcome of the production of corporatised identities if this process is damaging to autonomy.
Thus, to support these conclusions, the remainder of this section fleshes out a particular conception of relational autonomy and explores how it is adversely affected by corporatised identities.

18  See Calvo, Peters, Vold, and Ryan (Chap. 2, this collection) for more on this relationship, as seen through the lens of self-determination theory. Their work makes great strides towards the development of a more systematic approach to diagnosing autonomy-compromising digital design.


3.4.1  A General Account of Relational Autonomy

Respect for individual autonomy is woven into the very fabric of liberal democracies as a “cardinal moral value” defended morally and in law (MacKenzie 2008, p. 512). Traditional understandings of autonomy, however, have been criticised as excessively masculine, individualistic and atomistic, driving the development of relational approaches to autonomy that seek to recognise individuals as “emotional, embodied, desiring, creative, and feeling” (MacKenzie and Stoljar 2000, p. 21). At a fundamental level, all such theorists agree that individuals are irreducibly socially rooted and that their “identities are formed within the context of social relationships” (MacKenzie and Stoljar 2000, p. 4). The parallels with the intersubjective account of identity above should therefore be self-evident, and this also sits comfortably with eudaimonic well-being’s recognition of the need for an “appropriate and situated notion of autonomy” (Devine et al. 2008, p. 132). Nonetheless, navigating the numerous strands of relational autonomy would be beyond our requirements,19 as MacKenzie (2014, 2019) has developed a multidimensional model that integrates the various approaches along three interrelated axes. These are:

• Self-governance, which “involves having the skills and capacities necessary to make choices and enact decisions that express, or cohere with, one’s reflectively constituted diachronic practical identity” (MacKenzie 2014, p. 17);
• Self-determination, which “involves having the freedom and opportunities to make and enact choices of practical import to one’s life, that is, choices about what to value, who to be, and what to do” (MacKenzie 2014, p. 17); and,
• Self-authorization, which “involves regarding oneself as having the normative authority to be self-determining and self-governing… [i.e.] authorized to exercise practical control over one’s life, to determine one’s own reasons for action, and to define one’s values and identity-shaping practical commitments” (MacKenzie 2014, p. 18).

Along each dimension, various circumstances can therefore either support or restrict an individual’s autonomy. This includes internal conditions, such as the individual’s own psychology, as well as external conditions, like “social norms, institutions, practices, and relationships” that can “effectively limit the range of significant options available” (MacKenzie and Stoljar 2000, p. 22). Brainwashing children, for instance, can not only limit their beliefs and desires to those their parents find acceptable, but can also impede the development of their critical faculties, leading to dependencies. Relational approaches to autonomy recognise that this would rob them of authority over their own lives, replacing their freedom to live as they wish with a narrow conception of the good life. And, in line with this, the eudaimonic understanding of well-being, too, explicitly recognises the detrimental effects for well-being entailed by such losses to autonomy (Devine et al. 2008, p. 113).

19  See, e.g.: (Barclay 2000; Baumann 2008; Christman 2004; MacKenzie 2014; Westlund 2009).


As I will now demonstrate, all three dimensions of MacKenzie’s model can ground criticisms of corporatised identities. Consequently, I remain broadly neutral towards particular interpretations of relational autonomy because, if deploying corporatised identities potentially impinges upon elements of all three axes, it would suggest they are generally incompatible with relational autonomy and so undermine digital well-being, too. If liberal democracies seek to promote respect for relational autonomy, then I propose the practice of creating corporatised identities should thus be altered – or even rejected altogether. Regardless, MacKenzie’s model illuminates the theoretical utility of distinguishing corporatised identities from the digital.

3.4.1.1  Self-Governance

The dimension of self-governance fundamentally considers (i) the individual’s competency or internal capabilities to make and act upon free decisions in line with their wishes, and (ii) whether their choices and preferences are their own – i.e. the authenticity of their intentions (MacKenzie 2019, p. 149). It particularly focuses on the individual’s moral psychology; whether they possess the capacity to be autonomous, or whether disability or dependency has made them unable to form or execute their intentions (MacKenzie 2019, p. 147). As a relational theory, however, attention is also paid to the social and institutional environment required for the development and sustenance of effective self-governance, and issues such as stereotyping and adaptive preferences pertinently illustrate how inauthenticity can be forced upon individuals (MacKenzie 2014, p. 32), alienating them from themselves and generating internal conflict over their identity and value commitments (see e.g. Khader 2011). Where corporatised identities are utilised, both competency and authenticity conditions are potentially undermined, both online and offline.

Firstly, when individuals rely on algorithms to recommend purchasing choices, they can often “surrender to technology” and settle for inferior products (Banker and Khetani 2019, p. 2).20 Importantly, corporatised identities underpin these recommender systems, as without companies ‘understanding’ individuals they cannot guide their actions. This, however, can undermine an individual’s competency and breed dependence, with algorithms simplifying complex situations and nudging users into commercially-motivated actions (Hildebrandt 2019, p. 105), despite these products affecting how individuals can perform identities, and despite purchasing them in itself being a kind of identity performance. Consider, for instance, Facebook and Google recommending ‘promoted’ restaurants that pay for more exposure. But, in relying on these recommendations, individuals plainly cede competency and opportunity for self-governance to a company using the influence granted by corporatised identities for profit-related purposes.

20  Conversely, individuals’ best interests can sometimes be best served by autonomously surrendering to technology, as could be the case with fitness trackers. However, concerns surrounding dependence would clearly remain in these cases.


Secondly, algorithmic reliance can also remove opportunities for the development and performance of authentic digital identities. Algorithms can nudge us, for instance, to buy more than we need, not only reducing self-control but also promoting consumerism – identity traits that individuals may not reflexively endorse. Furthermore, as these systems expand beyond simple purchases, it will only become more difficult to avoid their recommendations. The game ‘Pokémon Go’, for instance, was quietly monetised by Google/Alphabet through the sale of virtual land in real-world locations. So, companies like Starbucks paid for in-game monsters to reside near their cafés, herding players to stores and boosting sales, whilst players were not informed that this was how monsters were distributed (Zuboff 2019b, ch. 10). Following the blockbuster success of this gambit,21 a business model built on covertly manipulating individuals has thus been realised, with social media companies keen to release the latent value of vast stores of digital identity data (Zuboff 2019a, p. 19). Nonetheless, if the values that individuals are pushed to endorse through these systems are not their own, then authenticity is lost, with commercial values supplanting individuals’ own. In this case, for example, a desire for videogaming (a kind of digital identity performance) in nature is being contorted into an opportunity for coffee sales. Consequently, the rich variety of preferences that underpins a digital identity is being collapsed or flattened in favour of those preferences which can be economically exploited, stymieing the authentic development of individuals’ varied preferences and harming their well-being. Indeed, these systems all display significant biases towards increasing “sales, ad exposure, user engagement, and […] other strategic goals” that are likely to conflict with an individual’s own values (Banker and Khetani 2019, p. 4).

Corporatised identities are thus leveraged to target individuals with sophisticated behavioural manipulation systems designed to service these companies’ bottom lines at the expense of an individual’s capabilities for self-governance, undermining both the authenticity of their preferences and potentially their competency to make agential decisions. Whilst these systems are only just emerging, the potentially damaging effects for digital identities are serious: with diminished self-governance, individuals will be less able to decide what identities to perform, depending on algorithms whilst being herded into performing identities that suit a commercialised agenda. Indeed, because these autonomy-damaging systems all rely on corporatised identities to operate, this gives prima facie reasons, on grounds of preserving self-governance, to want digital identities to underpin individuals’ actions on social media instead. Individuals, after all, retain far more control over digital identities than the corporatised identities companies generate from harvesting their online interactions.

21  Pokémon Go broke five world records, including fastest mobile game to gross $100m (Swatman 2016).


3.4.1.2  Self-Determination

The dimension of self-determination is preoccupied with external, structural threats to autonomy and its development in individuals. MacKenzie defines these in terms of (i) freedom conditions – the personal and political liberties protecting individuals from coercion, domination and exploitation (MacKenzie 2014, p. 25) – and (ii) opportunity conditions, which canvass the significant options individuals can choose from in society (MacKenzie 2014, p. 26). The two are interlinked, as in situations where liberties have been curtailed, individuals often also possess inadequate or insufficient meaningful life-options to choose from, and so their autonomous status is undermined (MacKenzie 2019, p. 147). In particular, MacKenzie is clear that having significant options cannot be reduced to an unlimited “array of consumer choices” (MacKenzie 2019, p. 148); there must be a legitimate variety of life-choices available for individuals to pursue, free from dominating forms of power and interference, for self-determination to be achieved. Social media companies cannot currently forcibly curtail their users’ freedoms. However, they are nevertheless narrowing the range of available significant opportunities. Filter bubbles and echo chambers, for example, are polarising individuals’ identities, making their beliefs more extreme (Burr et al. 2018; Pariser 2012). Specifically, the most engaging emotions are anger, jealousy and outrage, so content generating these reactions is shown to individuals more often (Fan et al. 2014). Whilst (negatively speaking) individuals can post what they like, as only illegal and explicit materials are actively censored, the addictive mechanisms that elicit individual contributions are thus tuned to amplify and encourage content that generates these extreme emotions, polarising discourse.

Consciously or not, if they wish to maximise engagement with friends, individuals are therefore conditioned to post such content by an external, coercive algorithmic force that only understands the world through corporatised identities. To expound on this point, consider the rise of ‘influencers’ on social media. Influencers are the celebrities of social media culture (Cotter 2019, p. 896). Essentially, they sell a lifestyle through the products that go with it, and therefore rent themselves out as vehicles for advertisements on social media in the hope of nudging their followers into purchasing sponsored products (Brown and Hayes 2008). Accordingly, many potentially authentic social relationships between followers and influencers are reduced to little more than a friendly-faced exchange of possible consumer choices. This is only compounded by the planned nature of many influencer endorsements – posts must often be signed off by advertising executives months in advance, meaning that such posts are artefacts, not intersubjective performances, long before they have even been posted. Accordingly, influencers’ identities are definingly shaped by the algorithms aimed at promoting those who best maximise engagement and revenues22 (Cotter 2019, p. 901). Success on

22  Influencers might therefore fail to fulfil the authenticity conditions of self-governance.


Instagram, TikTok or YouTube, after all, requires moulding your identity into one that ‘the algorithm’ will favour. This has clear determining effects on influencers’ identities but also affects the performances and identities of their followers – normal people interacting in the disguised marketplaces of social media. This is because the companies that competitively sponsor influencers usually gain the most publicity. Importantly for self-determination, however, this comes at the expense of significant alternative options – even those that might be free – regardless of whether the alternatives might actually be in individuals’ better interests, or better support their social interactions. After all, simply having more (purchasing) options is not enough; alternatives must also be significant as, “if one has an inadequate range of significant options to choose from, one’s autonomy is diminished” (Brison 2000, p. 285). Even putting influencers aside, however, social media companies generally control which of individuals’ digital identity performances are revealed to others; algorithms, after all, are constantly using corporatised identities to decide which posts to highlight to friends or not. Whilst self-determination over digital identities is thus not entirely undermined by these companies, the structural barriers to resisting their inferences are clear to see. As Shoemaker puts it (via Manders-Huits 2010), corporatised identities undermine individuals’ abilities to “present [their] self-identity to others in the manner [they] see fit”, meaning their “autonomy is undermined” as they are “unable to be the manager of [their] own reputation” (Shoemaker 2010, p. 13). Indeed, an algorithm’s mediation of digital identity performances removes “a key element of self-determination” even if individuals would have shared the information themselves (Shoemaker 2010, p. 13).

In other words, individuals are reduced to those interests and behaviours that can be understood by these algorithms and that serve their ends (Williams 2005, p. 108). But, in these cases, an external imposition is clearly determining who individuals are seen to be – defining their identities for others by filtering posts, and hence limiting opportunities for self-determination, through the algorithm’s role in structuring how and what information is revealed on social media sites.

3.4.1.3  Self-Authorisation

Self-authorisation is concerned with an individual seeing themselves as deserving of self-respect, self-trust and self-worth/esteem (MacKenzie 2019, p. 149). This reflective element is irreducibly social, as it is through our relationships that these evaluative attitudes are built up. Individuals are only ever empowered to “speak and answer” for themselves if they are treated as people in their own right and so regard themselves as autonomous (MacKenzie 2019, p. 149). Accordingly, socially stigmatising practices that undermine self-authorisation can in turn undermine an individual’s self-governance and self-determination (MacKenzie 2019, p. 150). Immediately, and even discounting Facebook’s emotional manipulation experiments, the notion of self-worth can be easily linked to the expansive literature on social media’s negative effects on self-image. Humans naturally compare


themselves with others, but social media disproportionately exposes individuals to the highlights of others’ lives, damaging their self-esteem (Vogel et al. 2014, p. 206). Heavy users of social media are therefore generally more depressed (Feinstein et al. 2013), and report lower levels of well-being (Kalpidou et al. 2010; Shakya and Christakis 2017).23 Crucially, social media usage also increases the perceived gap between who individuals want to be and who they think they are (Haferkamp and Krämer 2010) – that is, the gap between an individual’s idealised identities and their actually performed digital identities – likely due in part to the re-presentation of embarrassing or outdated expressions of digital identity. In other words, it can damage the “properties or beliefs about ourselves we value and respond to emotionally in relation to our self-esteem” (Manders-Huits 2010, p. 46). Taken to its extreme, this alienation of the individual from themselves, and the accompanying erosion of self-worth and self-respect, is terminal for autonomy along this dimension. Indeed, due to the increased prevalence of negative self-evaluative feelings, social media users often report feelings of listlessness and isolation, and that they could have spent their time more fruitfully (Primack et al. 2017). Additionally, excessive social media usage often precludes individuals from pursuing activities that could actually increase genuine face-to-face interactions and a sense of fulfilment (Newport 2019, pp. 168–169). For some individuals, then, social media takes time away from more fulfilling pastimes that increase feelings of self-respect and self-esteem, directly undermining their sense of self-authorisation.

What is more, algorithms are inherently socially stigmatising, given that they operate through classification at a generalisable level: they are, quite literally, stereotyping processes.24 It is this that allows companies to use individuals as a means to their economic ends. However, stigmatisation that treats people as less than human can damage self-trust and their understanding of their own digital identities. Not only can individuals surrender to algorithmic choices in the face of complexity, but self-authorisation clearly plays a role here, as regarding oneself as unable to make a competent autonomous choice betrays a lack of self-respect and self-trust. Indeed, one reason for handing more responsibility for decisions over to algorithms is that they are often regarded by the public as more competent choosers (Zittrain 2019); algorithms are trusted to have ‘gotten it right’, so individuals often feel they should obediently follow recommendations suitable for ‘their’ category (de Vries 2010, p. 82). Algorithms, though, only identify statistical correlations between datapoints, not causal links (Zittrain 2019) – there is no reasoning taking place. Taken in by this veneer of algorithmic competency, individuals’ reliance on machine intelligences can thereby weaken their understanding of their own identities, challenging

23  It is worth noting some of these conclusions have been recently challenged by work testing the link between digital technology and adolescent well-being (Orben et al. 2019; Orben and Przybylski 2019), although this bears no impact on the identity effects that Haferkamp and Krämer researched.
24  Categorisation appears in traditional marketing but, when achieved algorithmically, deep granularity results in categorically different predictive abilities (see e.g. Canny et al. 2011).


who they believe themselves to be and altering the ways in which they perform their identities at the expense of self-authorisation.

Finally, it is vital to remember that algorithmic curators have no respect or understanding for individuals as people – they are, after all, only members of a set. There is no social relationship between the algorithmic ‘audience’ and performer; the relationship is parasitic (Hildebrandt 2019, p. 107). Their aim, additionally, is not to support identity experimentation but to encourage reliance and addiction in order to generate corporatised identities that can be monetised. Autonomous control over digital identities is therefore undermined through a process explicitly designed to generate reliance and compliance with a corporatised classification. These algorithms are, quite literally, “traps” that measure success in terms of their “captivation” and retention of individuals (Seaver 2018, p. 9). I can think of no better metaphor for an autonomy-decreasing mechanism than this. Consequently, and in concert with my previous points, this underlines why a return to the encouragement of digital identities, which individuals can not only exert control over, but also socially perform and develop, is so urgently required.

3.5  Concluding Remarks

This could all understandably be read in oppressively bleak terms. But, in closing, I want to emphasise that it need not be like this. The relatively unfettered performance of digital identities used to motivate most social media interactions, which shows that corporatised identities are not essential for a flourishing social media environment; they are a parasitic addition whose uptake has been driven by a recent push for monetisation. Nonetheless, social media now permeates every second and sphere of daily life, ensuring that individuals’ significant options, values and preferences are being constantly conditioned and constrained by a consumerist agenda. This, however, is a choice, and one that could be overturned in favour of a less harmful mechanism for monetisation. Regardless, on the account I have given here, the production and utilisation of corporatised identities cannot endure without continuing to harm individuals’ performances of their authentic digital identities, limiting their relational autonomy in a way that I do not believe is compatible with either liberal democratic respect for autonomy or regard for their (eudaimonic) digital well-being. So, in summary, having distinguished the constitutive elements of expressions of digital identity towards the beginning of this chapter, I then explained how corporatised identities are being conflated with digital identities proper. This, I believe, constitutes a useful theoretical contribution to modern Goffmanian identity theory and helps expose how the displacement of digital identities on social media has come to potentially undermine individual autonomy when understood in relational terms. Indeed, I have argued that designing social media sites around the algorithmic production of corporatised identities, at the expense of individuals’ digital identities, has likely undermined their self-governance, self-determination,


and self-authorisation. It was for these reasons that I concluded that digital identities must once again be allowed to motivate individuals’ interactions on social media unencumbered. Only upon doing so can those individuals’ digital well-being, and control over their own digital identities, be rendered compatible with social media.

References

Andreassen, C.S. 2015. Online Social Network Site Addiction: A Comprehensive Review. Current Addiction Reports 2 (2): 175–184. https://doi.org/10.1007/s40429-015-0056-9.
Bakir, V., and A. McStay. 2018. Fake News and The Economy of Emotions. Digital Journalism 6 (2): 154–175. https://doi.org/10.1080/21670811.2017.1345645.
Banker, S., and S. Khetani. 2019. Algorithm Overdependence: How the Use of Algorithmic Recommendation Systems Can Increase Risks to Consumer Well-Being. Journal of Public Policy & Marketing: 1–16. https://doi.org/10.1177/0743915619858057.
Barclay, L. 2000. Autonomy and the Social Self. In Relational Autonomy: Feminist Perspectives on Autonomy, Agency, and the Social Self, ed. C. MacKenzie and N. Stoljar, 52–71. New York: Oxford University Press.
Bartlett, J. 2018. The People Vs Tech: How the Internet Is Killing Democracy. London: Ebury Press.
Baumann, H. 2008. Reconsidering Relational Autonomy. Personal Autonomy for Socially Embedded and Temporally Extended Selves. Analyse & Kritik 30 (2): 445–468. https://doi.org/10.1515/auk-2008-0206.
Bay, M. 2018. The Ethics of Psychometrics in Social Media: A Rawlsian Approach. In Proceedings of the 51st Annual Hawaii International Conference on System Sciences (HICSS’18), 1722–1730. https://doi.org/10.24251/hicss.2018.217.
boyd, danah. 2007. Why Youth (Heart) Social Network Sites: The Role of Networked Publics in Teenage Social Life. In Youth, Identity, and Digital Media, ed. D. Buckingham, 119–142. Cambridge, MA: MIT Press. https://doi.org/10.31219/osf.io/22hq2.
boyd, danah, and J. Heer. 2006. Profiles as Conversation: Networked Identity Performance on Friendster. In Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS’06), vol. 3, 1–10. https://doi.org/10.1109/HICSS.2006.394.
Bridle, J. 2018. New Dark Age: Technology and the End of the Future. London/Brooklyn: Verso Books.
Brison, S.J. 2000. Relational Autonomy and Freedom of Expression. In Relational Autonomy: Feminist Perspectives on Autonomy, Agency, and the Social Self, ed. C. MacKenzie and N. Stoljar, 280–299. New York/Oxford: Oxford University Press.
Brown, D., and N. Hayes. 2008. Influencer Marketing. London: Routledge. Accessed 18 Aug 2019.
Bullingham, L., and A.C. Vasconcelos. 2013. ‘The Presentation of Self in the Online World’: Goffman and the Study of Online Identities. Journal of Information Science 39 (1): 101–112. https://doi.org/10.1177/0165551512470051.
Burr, C., and N. Cristianini. 2019. Can Machines Read our Minds? Minds and Machines 29 (3): 461–494. https://doi.org/10.1007/s11023-019-09497-4.
Burr, C., N. Cristianini, and J. Ladyman. 2018. An Analysis of the Interaction Between Intelligent Software Agents and Human Users. Minds and Machines 28 (4): 735–774. https://doi.org/10.1007/s11023-018-9479-0.
Canny, J., S. Zhong, S. Gaffney, C. Brower, P. Berkhin, and G.H. John. 2011, April 5. Granular Data for Behavioral Targeting Using Predictive Models. https://patents.google.com/patent/US7921069B2/en. Accessed 19 Aug 2019.


Chaudhry, I. 2015. #Hashtagging Hate: Using Twitter to Track Racism Online. First Monday 20 (2). https://doi.org/10.5210/fm.v20i2.5450.
Cheney-Lippold, J. 2011. A New Algorithmic Identity: Soft Biopolitics and the Modulation of Control. Theory, Culture and Society 28 (6): 164–181. https://doi.org/10.1177/0263276411424420.
Christman, J. 2004. Relational Autonomy, Liberal Individualism, and the Social Constitution of Selves. Philosophical Studies 117 (1/2): 143–164. https://doi.org/10.1023/B:PHIL.0000014532.56866.5c.
Cotter, K. 2019. Playing the Visibility Game: How Digital Influencers and Algorithms Negotiate Influence on Instagram. New Media & Society 21 (4): 895–913. https://doi.org/10.1177/1461444818815684.
Crowder, G. 1998. From Value Pluralism to Liberalism. Critical Review of International Social and Political Philosophy 1 (3): 2–17. https://doi.org/10.1080/13698239808403245.
Davis, J.L. 2014. Triangulating the Self: Identity Processes in a Connected Era. Symbolic Interaction 37 (4): 500–523. https://doi.org/10.1002/symb.123.
———. 2016. Identity Theory in a Digital Age. In New Directions in Identity Theory and Research, ed. J.E. Stets and R.T. Serpe, 137–164. New York: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780190457532.003.0006.
de Vries, K. 2010. Identity, Profiling Algorithms and a World of Ambient Intelligence. Ethics and Information Technology 12 (1): 71–85. https://doi.org/10.1007/s10676-009-9215-9.
Deci, E.L., and R.M. Ryan. 2008. Hedonia, Eudaimonia, and Well-being: An Introduction. Journal of Happiness Studies 9 (1): 1–11. https://doi.org/10.1007/s10902-006-9018-1.
Devine, J., L. Camfield, and I. Gough. 2008. Autonomy or Dependence – Or Both?: Perspectives from Bangladesh. Journal of Happiness Studies 9 (1): 105–138. https://doi.org/10.1007/s10902-006-9022-5.
Ellison, N.B., and danah boyd. 2007. Social Network Sites: Definition, History, and Scholarship. Journal of Computer-Mediated Communication 13 (1): 210–230. https://doi.org/10.1111/j.1083-6101.2007.00393.x.
Elmer, G. 2003. Profiling Machines: Mapping the Personal Information Economy. Cambridge, MA: MIT Press.
Fan, R., J. Zhao, Y. Chen, and K. Xu. 2014. Anger Is More Influential than Joy: Sentiment Correlation in Weibo. PLoS One 9 (10): 1–8. https://doi.org/10.1371/journal.pone.0110184.
Feinstein, B.A., R. Hershenberg, V. Bhatia, J.A. Latack, N. Meuwly, and J. Davila. 2013. Negative Social Comparison on Facebook and Depressive Symptoms: Rumination as a Mechanism. Psychology of Popular Media Culture 2 (3): 161–170. https://doi.org/10.1037/a0033111.
Floridi, L. 2011a. The Construction of Personal Identities Online. Minds and Machines 21 (4): 477–479. https://doi.org/10.1007/s11023-011-9254-y.
———. 2011b. The Informational Nature of Personal Identity. Minds and Machines 21 (4): 549–566. https://doi.org/10.1007/s11023-011-9259-6.
———. 2019. Marketing as Control of Human Interfaces and Its Political Exploitation. Philosophy & Technology 32 (3): 379–388. https://doi.org/10.1007/s13347-019-00374-7.
Galston, W.A. 1999. Value Pluralism and Liberal Political Theory. The American Political Science Review 93 (4): 769–778. https://doi.org/10.2307/2586111.
Goffman, E. 1959. The Presentation of Self in Everyday Life. Anchor Books.
Haferkamp, N., and N.C. Krämer. 2010. Social Comparison 2.0: Examining the Effects of Online Profiles on Social-Networking Sites. Cyberpsychology, Behavior and Social Networking 14 (5): 309–314. https://doi.org/10.1089/cyber.2010.0120.
Hildebrandt, M. 2019. Privacy as Protection of the Incomputable Self: From Agnostic to Agonistic Machine Learning. Theoretical Inquiries in Law 20 (1): 83–121.
Hod, O. 2018, June 11. All of Your Facebook Memories Are Now in One Place. Facebook Newsroom. https://newsroom.fb.com/news/2018/06/all-of-your-facebook-memories-are-now-in-one-place/. Accessed 17 June 2019.

78

C. H. Smith

Hogan, B. 2010. The Presentation of Self in the Age of Social Media: Distinguishing Performances and Exhibitions Online. Bulletin of Science, Technology & Society 30 (6): 377–386. https://doi. org/10.1177/0270467610385893. Hongladarom, S. 2011. Personal Identity and the Self in the Online and Offline World. Minds and Machines 21 (4): 533–548. https://doi.org/10.1007/s11023-011-9255-x. Kalpidou, M., D. Costin, and J. Morris. 2010. The Relationship Between Facebook and the Well-­ Being of Undergraduate College Students. Cyberpsychology, Behavior and Social Networking 14 (4): 183–189. https://doi.org/10.1089/cyber.2010.0061. Khader, S.J. 2011. Adaptive Preferences and Choice: Are Adaptive Preferences Autonomy Deficits? In Adaptive Preferences and Women’s Empowerment, ed. S.J.  Khader, 74–106. New  York: Oxford University Press. https://www.oxfordscholarship.com/view/10.1093/acpr of:oso/9780199777884.001.0001/acprof-9780199777884-chapter-3. Accessed 30 April 2019. Kramer, A.D., J.E.  Guillory, and J.T.  Hancock. 2014. Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks. Proceedings of the National Academy of Sciences 111 (24): 8788–8790. Lanier, J. 1995. Agents of Alienation. Interactions 2 (3): 66–72. https://doi. org/10.1145/208666.208684. Lessig, L. 1999. Code and Other Laws of Cyberspace. New York: Basic Books. Lyon, D. 2014. Surveillance, Snowden, and Big Data: Capacities, Consequences, Critique. Big Data & Society 1 (2): 1–13. https://doi.org/10.1177/2053951714541861. MacKenzie, C. 2008. Relational Autonomy, Normative Authority and Perfectionism. Journal of Social Philosophy 39 (4): 512–533. https://doi.org/10.1111/j.1467-9833.2008.00440.x. ———. 2014. Three Dimensions of Autonomy: A Relational Analysis. In Autonomy, Oppression, and Gender, ed. A. Veltman and M. Piper, 15–41. Oxford: Oxford University Press. https://doi. org/10.1093/acprof:oso/9780199969104.003.0002. ———. 2019. 
Feminist Innovation in Philosophy: Relational Autonomy and Social Justice. Women's Studies International Forum 72: 144–151. https://doi.org/10.1016/j.wsif.2018.05.003. MacKenzie, C., and N.  Stoljar. 2000. Introduction: Autonomy Reconfigured. In Relational Autonomy: Feminist Perspectives on Autonomy, Agency, and the Social Self, 3–31. New York: Oxford University Press. Manders-Huits, N. 2010. Practical Versus Moral Identities in Identity Management. Ethics and Information Technology 12 (1): 43–55. https://doi.org/10.1007/s10676-010-9216-8. Marwick, A.E., and danah boyd. 2011. I Tweet Honestly, I Tweet Passionately: Twitter Users, Context Collapse, and the Imagined Audience. New Media & Society 13 (1): 114–133. https:// doi.org/10.1177/1461444810365313. Mill, J.S. 1984. In Utilitarianism, On Liberty and Considerations on Representative Government, ed. H.B. Acton. London: Dent. Muldoon, R. 2015. Expanding the Justificatory Framework of Mill’s Experiments in Living. Utilitas 27 (2): 179–194. https://doi.org/10.1017/S095382081400034X. Newman, B.M., and P.R. Newman. 2001. Group Identity and Alienation: Giving the We Its Due. Journal of Youth and Adolescence 30 (5): 515–538. https://doi.org/10.1023/A:1010480003929. Newport, C. 2019. Digital Minimalism: choosing a Focused Life in a Noisy World. New  York: Penguin Business. O’Neill, C. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown Publishing Group. OED Online. 2006. Corporatize. OED Online. Oxford University Press. https://www.oed.com/ view/Entry/267634. Accessed 21 July 2019. Orben, A., and A.K. Przybylski. 2019. Screens, Teens, and Psychological Well-Being: Evidence from Three Time-Use-Diary Studies. Psychological Science 30 (8): 1254–1254. https://doi. org/10.1177/0956797619862548. Orben, A., T. Dienlin, and A.K. Przybylski. 2019. Social Media’s Enduring Effect on Adolescent Life Satisfaction. 
Proceedings of the National Academy of Sciences 116 (21): 10226–10228. https://doi.org/10.1073/pnas.1902058116.

3  Corporatised Identities ≠ Digital Identities: Algorithmic Filtering on Social Media…

79

Pariser, E. 2012. The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. New York: Penguin Books. Phillips, D.J. 2009. Ubiquitous Computing, Spatiality, and the Construction of Identity: Directions for Policy Response. In Lessons from the Identity Trail: Anonymity, Privacy, and Identity in a Networked Society, ed. I. Kerr, V.M. Steeves, and C. Lucock, 303–318. New York: Oxford University Press. Politou, E., A. Michota, E. Alepis, M. Pocs, and C. Patsakis. 2018. Backups and the Right to be Forgotten in the GDPR: An Uneasy Relationship. Computer Law and Security Review 34 (6): 1247–1257. https://doi.org/10.1016/j.clsr.2018.08.006. Primack, B.A., A. Shensa, J.E. Sidani, E.O. Whaite, L. Yi Lin, D. Rosen, et al. 2017. Social Media Use and Perceived Social Isolation Among Young Adults in the U.S. American Journal of Preventive Medicine 53 (1): 1–8. https://doi.org/10.1016/j.amepre.2017.01.010. Ryan, R.M., and E.L. Deci. 2001. On Happiness and Human Potentials: A Review of Research on Hedonic and Eudaimonic Well-Being. Annual Review of Psychology 52 (1): 141–166. https:// doi.org/10.1146/annurev.psych.52.1.141. Schieffelin, E.L. 1998. Problematizing Performance. In Ritual, Performance, Media, ed. F. Hughes-­ Freeland, 194–207. London: Routledge. Seaver, N. 2018. Captivating Algorithms: Recommender Systems as Traps. Journal of Material Culture: 1–16. https://doi.org/10.1177/1359183518820366. Shakya, H.B., and N.A. Christakis. 2017. Association of Facebook Use With Compromised Well-­ Being: A Longitudinal Study. American Journal of Epidemiology 185 (3): 203–211. https://doi. org/10.1093/aje/kww189. Shoemaker, D.W. 2010. Self-Exposure and Exposure of the Self: Informational Privacy and the Presentation of Identity. Ethics and Information Technology 12 (1): 3–15. https://doi. org/10.1007/s10676-009-9186-x. Swatman, R. 2016. Pokémon Go Catches Five New World Records. Guinness World Records. 
https://www.guinnessworldrecords.com/news/2016/8/pokemon-go-catches-five-worldrecords-439327. Accessed 11 Aug 2019. Taylor, L. 2017. What Is Data Justice? The Case for Connecting Digital Rights and Freedoms Globally. Big Data & Society 4 (2): 1–14. https://doi.org/10.1177/2053951717736335. Tian, X. 2017. Embodied Versus Disembodied Information: How Online Artifacts Influence Offline Interpersonal Interactions. Symbolic Interaction 40 (2): 190–211. https://doi.org/10.1002/ symb.278. Villaronga, E.F., P. Kieseberg, and T. Li. 2018. Humans Forget, Machines Remember: Artificial Intelligence and the Right to Be Forgotten. Computer Law and Security Review 34 (2): 304–313. https://doi.org/10.1016/j.clsr.2017.08.007. Vogel, E.A., J.P. Rose, L.R. Roberts, and K. Eckles. 2014. Social Comparison, Social Media, and Self-Esteem. Psychology of Popular Media Culture 3 (4): 206–222. https://doi.org/10.1037/ ppm0000047. Wachter, S., and B.  Mittelstadt. 2019. A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI. Columbia Business Law Review 2019 (2): 1–126. Westlund, A.C. 2009. Rethinking Relational Autonomy. Hypatia 24 (4): 26–49. https://doi. org/10.1111/j.1527-2001.2009.01056.x. Whitley, E.A., U. Gal, and A. Kjaergaard. 2014. Who Do You Think You Are? A Review of the Complex Interplay Between Information Systems, Identification and Identity. European Journal of Information Systems 23 (1): 17–35. https://doi.org/10.1057/ejis.2013.34. Williams, R.W. 2005. Politics and Self in the Age of Digital Re(producibility). Fast Capitalism 1 (1): 104–121. Wu, T. 2017. The Attention Merchants: The Epic Scramble to Get Inside Our Heads. New York: Vintage. Zittrain, J. 2019. The Hidden Costs of Automated Thinking. The New Yorker. https://www.newyorker.com/tech/annals-of-technology/the-hidden-costs-of-automated-thinking. Accessed 27 July 2019.

80

C. H. Smith

Charlie Harry Smith is a philosopher and political theorist pursuing a doctorate at the Oxford Internet Institute, University of Oxford. He is funded through the ESRC's Grand Union Doctoral Training Partnership. Charlie's research considers the normative and theoretical issues emerging at the intersection of digital identities and digital government. In particular, he investigates the UK Government's ongoing engagement with the private sector to develop federated digital identity systems. He holds an MSc in Political Theory from the London School of Economics and a BA (Hons) in Philosophy from Durham University, which included a year studying abroad at the University of Hong Kong. Research Interests: Digital Identity, Digital Government, Federated Identity and National ID Cards. [email protected]

Chapter 4

Digital Well-Being and Manipulation Online

Michael Klenk

Abstract  Social media use is soaring globally. Existing research on its ethical implications predominantly focuses on the relationships amongst human users online, and their effects. The nature of the software-to-human relationship and its impact on digital well-being, however, has not yet been sufficiently addressed. This paper aims to close that gap. I argue that some intelligent software agents, such as newsfeed curator algorithms in social media, manipulate human users because they do not intend their means of influence to reveal the user's reasons. I support this claim by defending a novel account of manipulation and by showing that some intelligent software agents are manipulative in this sense. Apart from revealing an a priori reason for thinking that some intelligent software agents are manipulative, the paper offers a framework for further empirical investigation of manipulation online.

Keywords  Digital well-being · Persuasive technology · Manipulation · Intelligent software agents · Digital ethics

4.1  Introduction

Social media usage is soaring globally: the average Internet user spends 2 h and 15 min per day on social media (Global Web Index 2018), which amounted to about 30% of time spent online in 2017 (Young 2017). According to Internet World Statistics (2018), the number of social media users will rise to 3 billion people in 2021, which would, based on current estimates, amount to almost 38% of the world's population using social media. So, more people than ever before are regularly interacting with intelligent software agents, such as Facebook's newsfeed curator algorithm.

M. Klenk (*) Delft University of Technology, Delft, The Netherlands e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020 C. Burr, L. Floridi (eds.), Ethics of Digital Well-Being, Philosophical Studies Series 140, https://doi.org/10.1007/978-3-030-50585-1_4

This article analyses the nature of our interactions with intelligent software agents and answers the question of whether some such interactions are manipulative, which has ramifications, as will be shown, for questions about digital well-being. In particular, I argue that if manipulative software-to-human relationships are detrimental to well-being, then that is because they impugn the autonomy of human users. The paper thereby aims at closing a research gap within digital ethics. Existing research on the ethics of social media has predominantly addressed two aspects: the nature of user-to-user interactions online and the effects of software-to-user interactions online.1 However, the nature and ethical status of software-to-user interactions have received scant attention. With the advent of intelligent software agents (Burr and Cristianini 2019; Burr et al. 2018) that have at least some agency-like characteristics (Floridi and Sanders 2004), this is a significant omission. The paper explains and justifies the link between the nature of software-to-user interactions and digital well-being, and it provides conceptual clarity for the description and ethical evaluation of manipulative software-to-user interactions, which should enable further normative and empirical analysis of a ubiquitous form of online behaviour. Finally, it aims to contribute to the study of manipulation itself, which, as one commentator puts it, is "in desperate need of conceptual refining and ethical analysis" (Blumenthal-Barby 2012, 345).2
I proceed as follows. After explaining and justifying the relation between digital well-being and online manipulation in more detail in Sect. 4.2, I defend a novel account of manipulative action in Sect. 4.3 using conceptual analysis.3 According to the proposed account, a manipulative action consists of the attempt to exert directed but careless influence on someone (this will be made precise below). In Sect. 4.4, I argue that we have reason to think that intelligent software agents are manipulative in this sense. Building on a framework for analysing software-to-user interactions introduced by Burr et al. (2018), I show that some intelligent software agents are manipulative because they aim to direct human users to specific actions while not intending to reveal to them reasons for so acting. A corollary of my argument is that it is a priori that some such interactions are manipulative, because intentions to maximise a given behaviour necessarily crowd out intentions to reveal reasons for that behaviour.

1  Regarding the former, see, for example, discussions of the prevalence of user deception, e.g. Hancock and Gonzales 2013; Tsikerdekis and Zeadally 2014. Regarding the latter, see discussions of the effects of social media use on user well-being, e.g. Huang 2017; Reinecke and Oliver 2016.
2  Though intelligent software agents indubitably exert an influence on human users, it is unclear whether that influence qualifies as, for example, persuasive, manipulative, or coercive; see Alfano et al. (2018). The concept of manipulation, in particular, is often used too coarsely, as something "in-between" persuasion and coercion; cf. Faden et al. (1986). In focusing on influences exerted by artificially intelligent software agents, the article goes beyond previous discussions of the ethics of persuasive technologies; cf. Berdichevsky and Neuenschwander 1999; Fogg 1998; Spahn 2012.
3  Note that the focus on manipulative action leaves open that an account of manipulated action contains further conditions than the one defended here.


In conclusion, the paper provides a novel analysis of manipulation, shows that some software-to-human interactions are manipulative, and explains how such interactions are detrimental to digital well-being.

4.2  Digital Well-Being and Online Manipulation

Well-being in the broadest sense is what we have when we are living lives that are good for us (cf. Tiberius 2006). Digital well-being is concerned with the impact of technology on the extent to which we do and can live lives that are good for us (Floridi 2014).4 In this section, I explain and justify the relevance of online manipulation for well-being in three steps. First, I motivate this paper's particular concern with software-to-human online interactions by sketching two preliminary reasons for thinking that intelligent software agents are manipulative. Second, I argue that the nature of software-to-human interactions in general is relevant (positively or negatively) to well-being. Finally, I suggest that manipulative interaction, in particular, is detrimental to well-being because of its effects on user autonomy. In doing so, my aim is not to defend a particular theory of well-being; my aim is modest: to indicate how manipulation online is relevant to well-being on several prominent conceptions of well-being, which introduces the in-depth discussion in Sects. 4.2 and 4.3 of this chapter.

4.2.1  Intelligent Software Agents and Humans – Signs of a Troubled Relationship

Our interactions with intelligent software agents are a source of ethical concern. As Burr et al. (2018, 756) note, the designers of the relevant technologies have themselves begun to raise warnings about their use. A statement by the ex-president of Facebook seems to show how the behaviour of intelligent software agents was intentionally designed to manipulate (Pandey 2017, emphasis added, cited in Burr et al. 2018):

The thought process that went into building these applications […] was all about how do we consume as much of your time and conscious attention as possible, and that means that we needed to sort of give you a little dopamine hit every once in a while, because someone liked or commented on a photo or a post or whatever, and that's going to get you to contribute more content […] It's a social validation feedback loop […] It's exactly the sort of thing a hacker like myself would come up with because you're exploiting a vulnerability in human psychology.

4  It can also be asked whether technology may change what it means to live a good life; I address this question in an unpublished manuscript.


Design choices were made so that human users display the desired behaviour, which, in the designer's own words, exploits the users' vulnerability. More specifically, two features of intelligent software agents prompt an investigation of the software-to-user relationship with a view to possible manipulation and its effects on user well-being.
First, intelligent software agents know a lot about the human users they interact with. It is now evident that intelligent software agents can learn a lot about human users from their behaviour online, even if no 'personal' information, such as one's name or address, is provided (cf. van den Hoven et al. 2018). Burr and Cristianini (2019) have argued that the reliability of inferences about users' beliefs, desires, personality, and behaviour is considerable.5 The data trail that human users leave online certainly gives room to suspect that enough about a user's beliefs and desires is or can be known to make online manipulation credible (see also Buss 2005; cf. Alfano et al. 2018).
Second, there is evidence that intelligent software agents can, to a considerable extent, steer the mental states and behaviour of the human users with whom they interact, plausibly based on the knowledge attained about them. Coupled with the widespread intuition that a manipulator likewise steers the behaviour of his victims, often suggested by the analogy that manipulators 'pull strings' of the manipulated patients, these findings also seem to suggest that software-to-user manipulation occurs (cf. Burr et al. 2018, 752ff). The influence on human users is not limited to behaviour but extends to emotions. By changing the presentation of content online, advertisers can purposefully influence the attitudes of human users (e.g. Kim and Hancock 2017). Therefore, it is safe to say that some intelligent software agents have a tremendous amount of information about human users and that they can use that information to steer human users in desired directions.
While this is indicative, it is admittedly speculative and still indeterminate as to the classification of the influence at hand (i.e. it leaves open whether the influence is, e.g., manipulative). So, it needs to be determined whether the influence exerted by intelligent software agents qualifies as manipulation and, if so, how it could bear on user well-being.

5  This does not necessarily mean that intelligent software agents have an accurate picture of a human user's true (digital) identity, which may be more fluid and performative, as Smith describes in this collection (Chap. 3). Nevertheless, they have an accurate picture of what Smith calls a user's "corporatised identity." As Smith acknowledges, and as explained below, it seems that reliable inferences about a user's corporatised identity are sufficient for exerting considerable influence on that user.


4.2.2  How the Nature of Our Interactions Affects Our Well-Being

I begin with the question of how manipulative action could bear on user well-being. Manipulation is an interaction with a specific nature. It can be shown that the nature of our interactions (that is, whether they are, for instance, persuasive, manipulative, seductive, or coercive) matters for (i.e. positively or negatively affects) our well-being. Hence, manipulative interactions may affect well-being.
Consider the four predominant philosophical approaches to well-being: hedonism, desire-based theories, objective-list theories, and life-satisfaction theories (Parfit 1984, 493–502; cf. Tiberius 2006, 494). All allow that the nature of our interactions matters for well-being, either directly or indirectly. On objective-list theories, such as Nussbaum's (2000) capabilities approach, some types of interactions, such as affiliations based on mutual care and respect, matter directly because they are final ends. That is, roughly, we have reason to value such interactions for their own sake. For example, when someone makes an effort to persuade you of something rationally, they appeal to your reasons and leave you the freedom to come to your own conclusions; they show care and respect for you as a rational being, which directly matters for your well-being. So, proponents of objective-list theories of well-being can recognise the relevance of our interactions for well-being directly.
The relevance of our interactions for well-being, independently of their direct effects on pleasure or the satisfaction of one's desires, might seem more difficult to explain for hedonists, desire-based theorists, and life-satisfaction theorists. However, the nature of an interaction plausibly matters instrumentally, even though it need not do so directly or in the short run. For example, the direct, short-term effects of a given interaction might increase pleasure or satisfy one's desires, but there are indirect, long-term effects of the interaction, too. Different types of interaction may have different effects on our capacities for decision-making in the long run.6 For example, paternalistic interactions, in which one person makes decisions for another, may bring the paternalised person pleasure and desire-satisfaction in the short run, but disable her ability to make fruitful decisions in the future. Hence, paternalistic interactions may be detrimental to pleasure indirectly and in the long run. In addition, empirical research has indicated that autonomy matters for people's life satisfaction and their well-being, hedonistically understood (cf. Reis et al. 2000).7 Therefore, the relevance of our interactions for well-being should at least

6  See Levy (2017).
7  See also Calvo et al. (Chap. 2, this collection), specifically Sects. 4.1 and 4.2.


matter indirectly for hedonists and desire-based theorists, because it is plausible that the nature of our interactions influences their effects.8 Hence, the four prominent conceptions of well-being make some room for taking the nature of our interactions seriously as a determinant of well-being. The observations above suggest that determining the nature of software-to-human interactions matters for evaluating digital well-being.

4.2.3  Manipulative Action Is, More Often Than Not, Detrimental to Well-Being

We are left with the question of how, precisely, well-being is affected by manipulative interactions. I argue in this section that manipulative interactions are detrimental to well-being by undermining autonomy, and so the threat of online manipulation is a threat to our well-being.
Autonomy forms a part of many prominent objective-list theories of well-being (Ryff and Singer 1998; e.g. Nussbaum 2000). There are several reasons why proponents of objective-list theories consider autonomy as positively relevant for well-being. First, in treating someone so as to preserve her autonomy, we treat that person with respect for her rationality (Levy 2017, 498). That enables her, the thought goes, to develop and exert her capabilities as a rational being. Second, the relevance of autonomy for well-being is linked to moral responsibility. On many accounts of moral responsibility, being responsible is linked to being responsive to reasons (Fischer and Ravizza 1998). To manipulate someone is to fail to treat them as a responsible agent and is, therefore, detrimental to their reason-responsiveness.9 But practising and developing rationality and responsibility are, on prominent objective-list theories, what living a good life is all about (cf. Vallor 2016). Hence, autonomy in that sense is immediately relevant for well-being on objective-list theories of well-being.
As before, hedonists and proponents of desire-based and life-satisfaction theories should recognise the relevance of autonomy indirectly. A lack of autonomy has been shown to affect happiness and life-satisfaction negatively, and it is at least not

8  I am assuming here that these claims about the relevance of the nature of interaction apply, ceteris paribus, to the relevance of the nature of software-to-human interactions in particular. That inference might be faulty if relevant normative properties (e.g. the property of being manipulative) of interactions supervene on properties that are lacking, or cancelled out, in software-to-human interactions (e.g. the property of being intentional). Most importantly, we need to assume that intelligent software agents are, in the relevant sense, agents (cf. Floridi and Sanders 2004). For reasons of space, however, I cannot fully assess that assumption in this paper, but the discussion in Sect. 4.3 gives some reason to think that it is explanatorily useful to regard them as agents in the relevant sense, which may be sufficient in normative contexts, too.
9  In addition, some have argued that autonomy is valuable independently from its relation to well-being (cf. Sen 2011). So, even if the link between autonomy and well-being is doubted, there may be independent reason to be concerned about manipulation's impact on autonomy.


beneficial to the development of one's decision-making capabilities, which matters for desire-satisfaction (cf. Ryan and Deci 2000).10
To illustrate how manipulative action threatens to undermine autonomy, we need to get ahead of ourselves a little and preview some implications of the account of manipulation that I introduce in more detail in Sect. 4.3. According to my account of manipulative action, the manipulator does not intend to influence his victim in such a way as to reveal the victim's reasons for doing as the manipulator wishes. In short, the manipulator cares about something being done but does not care to show that there are good reasons to do it (more on this in Sect. 4.3).11 If that account is valid, then we should expect that humans at the 'receiving end' of manipulative relationships will have less opportunity to assess their reasons for acting, which, in turn, limits their capabilities to assess and evaluate possibilities for thought and action (cf. Burr et al. 2020). As shown above, that is bad for their well-being on several prominent construals of well-being. So, we have reason to expect that manipulative action generally negatively affects autonomy and, therefore, generally negatively affects well-being.
Therefore, the relation between manipulation, autonomy, and well-being suggests that it is an essential task for scholarship on digital well-being to assess the degree to which (online) technologies are manipulative. Admittedly, this assessment is merely preliminary because the precise nature of the relationship between manipulation and well-being depends on the true account of well-being. For now, it should suffice that we can find room for the relevance of manipulation for well-being on several prominent accounts of well-being.
With a better view of the relationship between manipulation and digital well-being, we can now explain what manipulation is and show that it can be detrimental to well-being.
4.3  Manipulative Action as Directed and Intentionally Careless Influencing

I defend the claim that manipulative action is intentional, directed influencing of a manipulatee, or patient of the manipulative action, coupled with carelessness about revealing the manipulatee's reasons for behaving as intended by the manipulator.12

10  For a more detailed discussion of the concept of autonomy in relation to digital well-being, and specifically the concept of autonomy as understood within Ryan and Deci's Self-Determination Theory, see Calvo et al. (Chap. 2, this collection).
11  Manipulation is sometimes defined in terms of autonomy (e.g. as autonomy-undermining), but that is not an account that I defend (reference for criticism of autonomy account). But ISAs might undermine autonomy.
12  The account is akin to a broader understanding of bullshitting (cf. Frankfurt 2005), applied to more than speech acts and de-coupled from truth. In that respect, the account is similar to Frankfurt's analysis of bullshit, because it makes do with a disregard for reasonability rather than requiring the intention to violate reasonability.


In other words, to act manipulatively is to intend someone else to do something through a means that is not directed at revealing reasons to that agent. More precisely, directed influencing means that the manipulator intends the patient of his manipulation to exhibit some particular behaviour – this is to exclude that accidental influences can count as manipulative. Carelessness does not denote sloppiness, negligence, or failure in choosing a method that reveals reasons to the patient of one's manipulation, but rather the utter disregard for even attempting to do so.13 Manipulators are careless in the sense that they do not intentionally direct any effort at influencing their subjects in a way that lets them see the reasons for following suit – even though the manipulator might, actually and accidentally, reveal his patients' reasons to them.
We can now examine how the proposed account explains some paradigm cases of manipulation. Here are three examples of manipulative action:

Advertising  The advertising manager for a home detergent wants to increase sales by conveying the product's superior cleaning power, compared to other home detergents, in illustrative video clips. That there is no evidence for the product's superior cleaning power is immaterial because the video clip is not aimed at revealing such reasons to clients anyway.14

Nudging  The school board decides that the students of its schools should eat healthier and re-arranges the food display so that healthy foods are more cleverly displayed in school cafeterias.15

Children  Little Daniel does not want to go to bed. His mother promises him chocolate the next day if he goes to bed now.

We have manipulative actions in all three cases, which, intuitively, is the correct result. The advertising manager, the school board, and Daniel's mother are acting manipulatively because the means of influence they chose, respectively, are not intended to reveal reasons to their patients (buyers of the detergent likely do not have such reasons, whereas pupils in the cafeteria, and little Daniel, likely do). The intuitive idea behind the account of manipulative action offered here is that to manipulate someone is to make that person do or believe something in a way that disregards any reasons that that person might eventually have for doing or believing so. So, as suggested by the Advertising case, it does not matter whether or not there are any reasons for the patient to act in the desired way. As suggested by the Nudging and Children cases, the account does not require that there be no reason for the manipulated person to be doing something, only that the manipulator chooses a

13  Thus, carelessness ought to be understood in the sense in which someone might be careless or carefree about how people think about him, not directing any effort at trying to influence his public image in any way. The proposed sense of being careless is entirely compatible with actually and accidentally being crafty at creating a good public image.
14  As suggested earlier, the proposed account of manipulation purports to be morally neutral – in what follows, I do not consider whether and, if so, why manipulation is morally problematic.
15  Recent meta-analyses of nudging effectiveness provide ample clues; see Cadario and Chandon (forthcoming).

4  Digital Well-Being and Manipulation Online


method of influence that is not intended to reveal any such reasons to the manipulatee.16 More precisely:

Manipulative action: M aims to manipulate a patient S only if (a) M aims to have S exhibit some behaviour b through some method m, and (b) M disregards whether m reveals eventually existing reasons for b to S.

I will say that an agent acts manipulatively, or engages in manipulative action, or is manipulative whenever that agent meets the criteria for manipulative action. Manipulation, on this account, is a success term: The success criterion is whether or not a directed, careless influence is intended. It is not crucial whether the manipulator succeeds in fooling the patient, but only that he aims to do so. There can, in effect, be very bad manipulators.17

4.3.1  Details of the Account

A few clarificatory remarks are needed about the notion of a method and about how a method can reveal eventually existing reasons to the targeted subject. A method can be understood in a broad sense to include any action performed by M to make S exhibit some behaviour, such as a gesture or speech act. I use the term method rather than action because what might intuitively seem like non-actions, such as not reacting to another person's call, also count as methods in the relevant sense. In the right context, not acting is a bona fide form of influence, as it provokes particular behaviour in others.

The requirement to 'disregard whether m reveals eventually existing reasons for b to S' is a tad more complicated, and it helps to proceed step-wise. First, 'eventually existing reasons' is a modal term that suggests that there might or might not be reasons for S to exhibit some behaviour b. The critical point is that the manipulator does not aim to reveal any such reasons. To explain this part of the second requirement, I focus on non-manipulative action and sketch manipulative action as its negation. During persuasion, or non-manipulative influence in general, one typically intends one's action to show the target of one's behaviour that what one wants from them is reasonable. Persuasive interactions have not only causal, behavioural aims (e.g. to get someone to do or believe something), but normative ones too: they aim to get others to, in a sense, see that what one asks of them is reasonable or true.

16  I do assume that, until proven otherwise, we should regard manipulation as a unified concept; for criticism see Ackerman (1995).
17  In other words, agents that intend to manipulate (and, therefore, succeed) but fail to get their target victim to behave in their intended ways. Thanks to [redacted for blind review] for prompting me to clarify this point.


The most obvious cases of persuasive influence are arguments (understood as actions). In 'providing an argument', one intends the other to take up a certain belief and one's action to reveal reasons to the other. In such typical cases of persuasion, the method m is intended to make a causal difference and a normative one, too, because it reveals reasons to the target for exhibiting the intended behaviour. For persuasive action on my account, it is not required that the patient becomes aware of the reasons he or she has for performing the intended behaviour through that route – it is sufficient if the persuader intends so. Many interactions and attempts at influence in the epistemic realm work through references to testimony. Consider, for example, claims like "You should stop smoking because I read that all the experts agree that smoking causes cancer," or "You ought to believe that the moon landing took place because all credible sources say so." In both cases, a speech act is how one aims to exert directed influence, and that speech act is intended to convey reasons for complying. Hence, the action is not manipulative. Manipulative action thus understood requires the manipulator to intentionally employ some way of influencing the target to effect a target behaviour while lacking an intention to reveal to the patient any reasons that might exist to act in accordance with the manipulator's aims.
A typical manipulative action, according to my account, can be glossed as expressing the manipulator's thinking roughly as follows: 'I want you to perform behaviour b, so I do m, and I would have chosen m even if it did not reveal your reasons for doing b to you.' The manipulator intends his influence to have a particular effect on the manipulatee, and chooses his influence accordingly, but he is oblivious to whether his chosen means of influence reveals to the victim any reasons for exhibiting the intended behaviour (it does not follow that the manipulator is oblivious to whether there are reasons, for the manipulation patient, for following suit). Some manipulators, like parents and liberal paternalist choice architects, do care about a manipulatee's reasons for exhibiting the intended behaviour, but they do not care about revealing these reasons through their chosen method of influence. Parents may not care because their children do not yet sufficiently grasp reasons. Paternalist choice architects may not aim for it because other means of influencing are more effective. In both cases, whether or not there are reasons for the manipulation patient to act, and whether they are revealed through the chosen method, is a mere side-effect.

4.3.2  Advantages of the Account

Why should we accept the proposed account of manipulative action in analysing the behaviour of intelligent software agents? One reason is that the account offers advantages over alternative accounts of manipulation in the philosophical literature.


To begin with, the account does not require any form of deception to be involved in manipulation. Since it is a common misconception that deception is always a component of manipulation, this is a reason in favour of the account. Barnhill's Open House case illustrates that speech acts are not needed for manipulation (Barnhill 2014, 58):

Open House: Your house is for sale. Before holding an open house for prospective buyers, you bake cookies so that the house will smell like cookies, knowing that this will make the prospective buyers have more positive feelings about the house and make them more inclined to purchase it.

Manipulative actions need not involve speech acts (thus they need not involve stating falsities), and even making true claims that lead manipulatees to behave rationally can sometimes be manipulative (Gorin 2014, 75; Barnhill 2014, 80). In cases like Barnhill's, the chosen method of influence (olfactory influence) does not reveal to the manipulatee reasons for showing the target behaviour (buying the house), but the manipulator chose that method nonetheless. Hence, this is a case of manipulative action.

Moreover, the account does not require the manipulatee to behave in less than ideal ways. Hence, it is possible to classify non-informational nudging as manipulative, which is plausible, at least in the non-moralised sense in which I use the term manipulation here. Nudges often do lead to behaviour that is closer to the ideal than 'un-nudged' behaviour, and other accounts of manipulation cannot make sense of the intuition that nudging is manipulative nonetheless. Hence, the view is an improvement over another popular view in the philosophical literature, which entails that manipulation necessarily involves (attempting to) make the patient behave in less than ideal ways (Noggle 1996, 2018; Scanlon 1998).

Another reason for adopting the proposed account of manipulation is that it jibes well with concerns about autonomy, harm, and (frustrations of) self-interest that pervade attempts at analysing the concept of manipulation, without making these concerns necessary elements of manipulation. The account captures the intuition that to persuade (as opposed to manipulate) is to take a certain interest in enabling the other person to deliberate reasonably. In that sense, the account jibes well with previous discussions of a close link between autonomy and manipulation (cf. Coons 2014; Wood 2014; Frankfurt 1971), even though it does not spell out manipulation as the subversion of autonomy.
Moreover, it explains why manipulative behaviour does not aim to help someone see how acting as the manipulator intends may be "keeping with their rational assessments of [an] outcome" (Kligman and Culver 1992, 186–87). It also explains why manipulative action often leads to harm and violates the self-interest of its victims: intending to reveal reasons for some behaviour is a (minimal) way of preventing one's target from performing harmful actions and from failing to live up to their self-interest. Again, however, it does not make harm or self-interest part of the definition of manipulation, because that would exclude cases such as Nudging or Children from counting as manipulative.


Similarly, the account explains why manipulative actions often lead the manipulated subject to violate norms or rules (Noggle 1996): this may be a side-effect of not revealing the reasons a subject has for performing certain actions. Again, however, that is not required for manipulative action, as illustrated by the cases discussed above. Finally, the account does not require that the manipulator disregard whether S has reasons for doing x or believing y. Instead, the emphasis is on the method the manipulator uses and whether that method reveals eventually existing reasons to S.

I take the preceding considerations to show that the proposed account of manipulation has sufficient plausibility, and sufficient advantages over alternative accounts, to take it as revealing some crucial elements of manipulation. This is sufficient to employ it in a study of software-to-human interactions. It should be clear, however, that, on this broad account of manipulative action, many ways of interacting with others and influencing them count as manipulative action, perhaps more than we commonly expect.18 We can now use this account in evaluating the manipulativeness of ISAs (or, rather, the extent to which ISAs perform manipulative actions).

4.4  Intelligent Software Agents Manipulate Human Users

Thus far, I have sketched a broad account of manipulative action. We can now put together observations about the behaviour of intelligent software agents and a clearer view of what manipulation is. I will show in this section that at least some intelligent software agents act manipulatively toward human users. I will also suggest that there are a priori reasons for thinking so. The argument can be formalised as follows:

1. If intelligent software agents attempt directed and careless influencing of human users, then intelligent software agents manipulate human users.
2. Some intelligent software agents attempt directed and careless influencing of human users.
3. So, some intelligent software agents manipulate human users.

Premise 1 follows from the account of manipulative action given in Sect. 4.3. The focus is now on defending premise 2.

18  I do not think, however, that this account implies that unreasonably many actions are instances of manipulation. People plausibly do engage in manipulative actions toward children (as argued above), but that need not be a morally problematic instance of manipulation. In many other instances where we do not take the care to muster a persuasive interaction and instead resort to a manipulative one, it seems correct to suggest that we are being manipulative. In such cases, there seems to be a slight blemish in manipulating others: all things being equal (including, for example, the effectiveness of the method), having used a persuasive form of influence would have been preferable.


4.4.1  An Agent-Based Framework to Study Manipulation Online

Following Burr et al. (2018), I analyse intelligent software agents as players in a game. Burr et al. suggest that there are three features of every such interaction (2018, 736):

1. The ISA has to choose from a set of available actions that bring about interaction with the user – for example, recommending a video or news item; suggesting an exercise in a tutoring task; or displaying a set of products and prices, and perhaps also the context, layout, order, and timing with which to present them.
2. The user chooses an action, thereby revealing information about their knowledge and preferences to the controlling agent, and determining the utility of the choice the ISA made.
3. The cycle repeats, resulting in a process of feedback and learning.

The system is programmed to seek maximum rewards (to wit, to maximise its utility function), and its utility typically depends on the actions of the human user. Burr et al. (2018, 737) note that, in some cases, the utility function depends on the so-called click-through rate of the human user, which "expresses the probability of users clicking through links." They argue that there can be situations of cooperation and situations of competition, depending on whether the utility functions of intelligent software agents and human users are aligned or not, respectively (Burr et al. 2018, 740). They go on to describe several different types of interaction between software agents and humans, amongst them coercive, deceptive, or persuasive interactions.19 The important point is that some intelligent software agents aim to maximise user engagement: their utility function depends on the probability of a user clicking on a given link. As Burr et al. (2018) indicate, learning about an intelligent agent's utility function is sufficient to learn about its intentions and beliefs.
These claims should be accepted conditional on the claim that this is indeed the right model of the intentions of intelligent software agents. As Burr et al. note, their model is an assumption about the way that intelligent software agents work, which may be challenged. For now, however, I take this to establish that some intelligent software agents aim at maximising user engagement. In other words, they intend to maximise a certain behaviour of human users.
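The three-step interaction loop just described can be sketched as a simple bandit-style learner. The following is an illustrative toy, not Burr et al.'s actual model: the item names, the simulated user's click probabilities, and the epsilon-greedy policy are invented assumptions.

```python
import random

class EngagementMaximisingISA:
    """Toy ISA: chooses the item with the highest observed click-through
    rate (step 1), observes the user's response (step 2), and updates its
    estimates so the cycle of feedback and learning repeats (step 3)."""

    def __init__(self, items, epsilon=0.1):
        self.items = items
        self.epsilon = epsilon  # small chance of exploring a random item
        self.shows = {i: 0 for i in items}
        self.clicks = {i: 0 for i in items}

    def ctr(self, item):
        # Estimated click-through rate: the agent's only "view" of the user.
        return self.clicks[item] / self.shows[item] if self.shows[item] else 0.0

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(self.items)
        return max(self.items, key=self.ctr)

    def observe(self, item, clicked):
        self.shows[item] += 1
        self.clicks[item] += int(clicked)

def simulated_user(item):
    # Hypothetical user who clicks 'outrage' items far more often than 'news'.
    return random.random() < {"outrage": 0.6, "news": 0.2}[item]

random.seed(0)
isa = EngagementMaximisingISA(["outrage", "news"])
for _ in range(2000):
    shown = isa.choose()
    isa.observe(shown, simulated_user(shown))

# The utility function rewards clicks, not the user's reasons: the agent
# converges on whichever item maximises engagement.
print(isa.shows["outrage"], isa.shows["news"])
```

Nothing in this loop represents, let alone aims to reveal, the user's reasons for clicking; engagement is the sole optimisation target.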

19  Burr et al. break down persuasive behaviour into 'nudging' and 'trading' behaviour, and regard the former as more manipulative than the latter. Trading is defined as a mutually beneficial interaction. My account of manipulation supports the claim that nudging is more manipulative than trading, because in order to trade one must represent another's reasons for acting, which is not necessarily the case for nudging.


4.4.2  Applying the Agent-Based Framework

Based on this framework, and the account of manipulation defended above, we can ask whether a system can be engaged in the pursuit of maximising user engagement while also intending thereby to reveal to the human user reasons for acting along these lines.20 If the answer is 'No' for a given intelligent software agent, then that agent acts manipulatively. Since being aware of and guided by one's reasons for acting is a component of (or at least correlated with) well-being, as illustrated in Sect. 4.1, we would be missing a chance of enhancing user well-being through such interactions.

4.4.3  Empirical Evidence for Manipulation Online

Work on existing intelligent software agents, reviewed by Burr et al. (2018), strongly suggests that some ISAs aim for maximum engagement and, therefore, are unlikely to intend to adjust their behaviour to reveal reasons to the user.21 Still, it is possible, of course, that an ISA reveals to the user, through its method of influence, reasons for doing as the ISA suggests. For example, the nutrition app Cronometer sends emails to customers that explicitly state that behaving in a certain way (e.g. logging at least one activity in the first week of using the app) reliably leads to a certain behaviour in the majority of its users (e.g. continued use of the app). Thus, the app's method of influence is to provide the user with a reason, and it can reasonably be argued that revealing this reason to the user is intentional. It might, however, be purely accidental that revealing reasons for acting in a certain way happens to be the most reliable method of getting the user to act. In that case, the intention is more aptly characterised as 'choosing a method that maximises the likelihood of desired action' rather than as 'choosing a method that reveals reasons for acting.'

20  The argument relies on the claim that intentions are mutually exclusive in the sense that one cannot, as a matter of conceptual possibility, intend at the same time two things that are mutually exclusive. One cannot, for example, intend to leave the room and intend to remain in the room. That is because the point of an intention is to build up action potential in a certain direction, and that cannot be done in mutually exclusive directions.
21  One might agree in principle with the claim that revealing reasons would often be superfluous to the initial aim of maximising utility for some ISAs (e.g. those that deploy some form of deception or nudging), but insist that revealing reasons would nevertheless be compatible with other forms of interaction (i.e. trading), especially if it helps establish trust with the user. The point is well taken: ISAs might instrumentally reveal reasons to the humans they interact with. After all, revealing reasons can be a valuable means, for example to increase human users' trust. ISAs are unlikely, however, to aim for reason-revealing as a final end. Thus, they are acting manipulatively. As discussed in Sect. 4.1, that may not directly impact a user's well-being (at least not on hedonist or desire-satisfaction theories), but indirectly. I return to this point below. Thanks to Christopher Burr for raising this objection.


Existing work on the aims and behaviour of intelligent software agents therefore suggests that at least some of them are manipulative vis-à-vis human users, simply because the aim of maximising engagement crowds out the aim of revealing reasons. That claim, however, may seem to rest on unsure footing (and, consequently, premise 2 of the argument defended in this paper remains open to criticism, too). An initial worry might be that intelligent software agents do not have real aims or intentions.22 If they do not have intentions, then they cannot be manipulative. At the very least, however, their behaviour can be described as exhibiting intention-like states, such as aims, which is sufficient for the behaviour to count as manipulative.

A more serious problem relates to the empirical evidence base. Thus far, very few works have engaged with the intentions of the designers of intelligent software agents or of the intelligent software agents themselves.23 However, the account of manipulation defended in this paper requires that empirical investigations of online manipulation focus on the intentions of intelligent software agents (the 'supply side' of possible manipulative behaviour, so to speak), rather than on the effects on human users (the 'output' side, so to speak). With current work predominantly focusing on the 'receiving' end of potentially manipulative action (i.e. the user), we lack more detailed accounts of what the potential 'supply side' of software-to-human interactions intends.24 Thus, given that the aims of the manipulator are relevant, the directive should be to establish the intentions of intelligent software agents and, derivatively, of their creators. The creators of intelligent software agents may be seen to use ISAs as a method of influencing human users; in that sense, there might even be multi-layered accounts of manipulation. Such investigations can take many angles.
One of them would be to analyse the business models of institutions that create intelligent software agents (Joseph 2018; Niu 2015). Analyses of a company's expressed aims, as well as of its actual strategies, may allow reasonable inferences about the intentions or aims of its intelligent software agents. The advertising-based business model of many companies that offer allegedly free services online while also employing intelligent software agents, such as Facebook and Google, suggests that engagement maximisation is indeed their actual intention. Thus, though the evidence base is currently still building, it seems reasonable to conclude that there is empirical support for the claim that some intelligent software agents attempt directed and careless influencing of human users. Hence, at least some intelligent software agents are manipulative.

22  Some accounts of intentional action require the agent to be able to give an account of his intentions.
23  The work of Burr et al. (2018) is an exception.
24  Of course, this is not to denigrate the importance of evidence about how users are affected by digital technologies. The present account of manipulative action leaves open the conditions for manipulated action, and studying the 'output' side will be crucial to ascertain whether users are manipulated by intelligent software agents. See Chap. 2 by Calvo et al., in this collection, for a very useful case study of the impact of YouTube's recommender algorithm on user autonomy.


4.4.4  A priori Evidence for Manipulation Online

In addition to the empirical considerations reviewed above, there are a priori reasons suggesting the manipulativeness of some intelligent software agents. It is doubtful that intelligent software agents can intend to reveal reasons to human users, because it is doubtful that they have a grasp of what reasons are. Their understanding of human behaviour can plausibly, and parsimoniously, be described as reduced to correlations between properties. For example, an intelligent software agent may grasp that users with the property 'aged between 25 and 30' often or reliably display the property 'interested in travelling,' as well as the property 'likes experiences not had before.' On that basis, the software agent can predict various decisions – for example, that the prospect of travelling to new places will excite the user.25 However, grasping such relations does not amount to grasping reasons, at least if reasons are understood as irreducible to, and partly independent from, the actual desires of an agent. On all but the most subjective accounts of reasons (which are caricatures), an agent's reasons are not reducible to the agent's present desires. 'Robust realist' accounts of reasons locate them outside the agent's desires (Scanlon 1998; Parfit 2011), and even though current subjectivists locate them in an agent's desires, they take only what might be called 'considered' desires to ground reasons (Schroeder 2007). In neither case is it possible to 'read off' an agent's reasons from the agent's behaviour, or from the behaviour of comparable agents. Thus, if it is true that intelligent software agents cannot grasp an agent's reasons for acting (or believing), then they cannot aim to reveal such reasons to the agent.
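The correlational 'understanding' described above can be made concrete in a short sketch. The age groups, properties, and probabilities below are invented for illustration:

```python
# A toy model of an ISA's knowledge of users: nothing but estimated
# co-occurrence of properties, with no representation of reasons.
cooccurrence = {
    "aged 25-30": {"interested in travelling": 0.8,
                   "likes experiences not had before": 0.7},
    "aged 60-65": {"interested in travelling": 0.3,
                   "likes experiences not had before": 0.4},
}

def predict(observed_property, target_property):
    """Predict how likely a user with the observed property is to display
    the target property. The model can forecast behaviour, but it encodes
    no grasp of *why* the user would act: there are no reasons here to
    reveal, only statistical regularities."""
    return cooccurrence[observed_property].get(target_property, 0.0)

print(predict("aged 25-30", "interested in travelling"))  # 0.8
```

Whatever else one thinks of such a model, everything it 'knows' is exhausted by the numbers in the table, which is the point at issue.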
Since there are reasons to suspect that intelligent software agents cannot, in principle, grasp reasons for action, there is reason to suspect that their interactions with humans are necessarily manipulative.26 It is worth emphasising that this does not mean that manipulative intelligent software agents are necessarily morally bad. The conceptual analysis of the term manipulation can and should proceed on the assumption that the concept is not completely moralised to begin with (cf. Wood 2014). For that reason, a full moral evaluation of the actions of intelligent software agents is still outstanding. I have been deliberately careful in writing that the concept of manipulative action is not completely moralised, because it seems true to say that it is slightly moralised, at least in the following sense. Given the direct and

25  Thanks to Stephan Jonas for discussion and helpful input on this point.
26  See also the chapter by Smith (Chap. 3, this collection), whose argument implies that intelligent software agents represent the identity of human users inaccurately in such a way that the autonomy of human users is compromised by the interaction. Since there seems to many to be a close link between autonomy-subversion and manipulation, Smith's argument may provide another angle for arguing for the necessary manipulativeness of intelligent software agents. However, given the account of manipulative action defended here, and more general considerations about manipulated behaviour that are beyond the scope of this paper, I doubt that there is such a tight conceptual link between autonomy-subversion and manipulation.


indirect effects of manipulation on well-being (on various prominent accounts of well-being, as discussed in Sect. 4.1), it seems true that manipulative action is, ceteris paribus, worse for another's well-being than non-manipulative action. Though well-being and moral goodness are distinct, it also seems plausible that manipulative action is, ceteris paribus and defeasibly, morally worse than non-manipulative action. That is because it fails to respect agents' rationality, or indirectly negatively affects final goods such as happiness.27 Thus, whenever an agent has the means to achieve a particular goal (such as getting someone else to do or believe a particular thing) and a non-manipulative method would be as efficient and safe, the non-manipulative method is to be preferred. Choosing a manipulative method instead would, therefore, constitute some moral failing, even though the degree of that failing remains to be determined.

4.5  Conclusion

Social media use is soaring globally. Plausibly, some of that rise in popularity is due to the actions of intelligent software agents, such as newsfeed curators or recommendation engines. After defending an account of manipulation as the directed and careless influencing of manipulatees, the paper argued that there are both empirical and a priori reasons for thinking that at least some intelligent software agents are manipulative vis-à-vis human users. This argument has ramifications for the debate about digital well-being. Insofar as manipulative action, by definition, lacks the intent to reveal to others the reasons for their action, the victims of manipulative action are at greater risk of acting unreasonably or, even if they act reasonably, of being unaware of why they are acting reasonably. They might, therefore, miss out on valuable aspects of life. In conclusion, the nature of at least current software-to-human interactions is not conducive to digital well-being. Future work should deepen the empirical insight into the intentions of (the designers of) intelligent software agents to determine the actual extent of manipulation, for which the account of manipulation introduced in this paper offers a suitable starting point.28

27  Thanks to Christopher Burr for prompting me to clarify this point.
28  I am grateful to Christopher Burr for insightful comments on a previous draft, and to audiences at the 2019 Media Ecology Conference in Toronto and the Digital Behavioural Technologies Workshop in Munich for discussion of a previous version of this paper. My work on this paper was supported by a Niels Stensen Fellowship. In addition, work on this project was part of the project ValueChange, which has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 788321.


References

Ackerman, Felicia. 1995. The Concept of Manipulativeness. Philosophical Perspectives 9: 335. https://doi.org/10.2307/2214225.
Alfano, Mark, J. Adam Carter, and Marc Cheong. 2018. Technological Seduction and Self-Radicalization. Journal of the American Philosophical Association 4 (3): 298–322. https://doi.org/10.1017/apa.2018.27.
Barnhill, Anne. 2014. What Is Manipulation? In Manipulation: Theory and Practice, ed. Christian Coons. Oxford: Oxford University Press.
Berdichevsky, Daniel, and Erik Neuenschwander. 1999. Toward an Ethics of Persuasive Technology. Communications of the ACM 42 (5): 51–58. https://doi.org/10.1145/301353.301410.
Blumenthal-Barby, J.S. 2012. Between Reason and Coercion: Ethically Permissible Influence in Health Care and Health Policy Contexts. Kennedy Institute of Ethics Journal 22 (4): 345–366.
Burr, Christopher, and Nello Cristianini. 2019. Can Machines Read Our Minds? Minds and Machines 83 (5): 1098. https://doi.org/10.1007/s11023-019-09497-4.
Burr, Christopher, Nello Cristianini, and James Ladyman. 2018. An Analysis of the Interaction Between Intelligent Software Agents and Human Users. Minds and Machines 28 (4): 735–774. https://doi.org/10.1007/s11023-018-9479-0.
Burr, C., M. Taddeo, and L. Floridi. 2020. The Ethics of Digital Well-Being: A Thematic Review. Science and Engineering Ethics. https://doi.org/10.1007/s11948-020-00175-8.
Buss, Sarah. 2005. Valuing Autonomy and Respecting Persons: Manipulation, Seduction, and the Basis of Moral Constraints. Ethics 115 (2): 195–235. https://doi.org/10.1086/426304.
Cadario, Romain, and Pierre Chandon. Forthcoming. Which Healthy Eating Nudges Work Best? A Meta-Analysis of Field Experiments. Marketing Science. https://doi.org/10.2139/ssrn.3090829.
Coons, Christian, ed. 2014. Manipulation: Theory and Practice. Oxford: Oxford University Press.
Faden, Ruth R., Nancy M.P. King, and Tom L. Beauchamp. 1986. A History and Theory of Informed Consent. New York: Oxford University Press.
Fischer, John Martin, and Mark Ravizza. 1998. Responsibility and Control: A Theory of Moral Responsibility. Cambridge Studies in Philosophy and Law. Cambridge: Cambridge University Press.
Floridi, Luciano. 2014. The Fourth Revolution: How the Infosphere Is Reshaping Human Reality. Oxford: Oxford University Press.
Floridi, Luciano, and J.W. Sanders. 2004. On the Morality of Artificial Agents. Minds and Machines 14 (3): 349–379. https://doi.org/10.1023/B:MIND.0000035461.63578.9d.
Fogg, B.J. 1998. Persuasive Computers: Perspectives and Research Directions. In Making the Impossible Possible: 18–23 April, Los Angeles; CHI 98 Conference Proceedings, ed. Clare-Marie Karat, 225–232. New York: ACM Press.
Frankfurt, Harry G. 1971. Freedom of the Will and the Concept of a Person. The Journal of Philosophy 68 (1): 5. https://doi.org/10.2307/2024717.
———. 2005. On Bullshit. Princeton: Princeton University Press.
Global Web Index Social. 2018. Unpublished manuscript, last modified May 03, 2019.
Gorin, Moti. 2014. Towards a Theory of Interpersonal Manipulation. In Manipulation: Theory and Practice, ed. Christian Coons, 73–97. Oxford: Oxford University Press.
Hancock, Jeff, and Amy Gonzales. 2013. Deception in Computer Mediated Communication. In Pragmatics of Computer-Mediated Communication, ed. Susan C. Herring, Dieter Stein, and Tuija Virtanen, 363–383. Handbooks of Pragmatics, vol. 9, ed. Wolfram Bublitz, Andreas H. Jucker, and Klaus P. Schneider. Berlin: de Gruyter Mouton. Accessed May 29, 2019.
Huang, Chiungjung. 2017. Time Spent on Social Network Sites and Psychological Well-Being: A Meta-Analysis. Cyberpsychology, Behavior and Social Networking 20 (6): 346–354. https://doi.org/10.1089/cyber.2016.0758.
Internet World Statistics. 2018. World Internet Users Statistics and 2018 World Population Stats. Accessed June 02, 2018.

4  Digital Well-Being and Manipulation Online

99

Joseph, Sarah. 2018. Why the Business Model of Social Media Giants Like Facebook Is Incompatible with Human Rights. http://theconversation.com/why-the-business-model-ofsocial-media-giants-like-facebook-is-incompatible-with-human-rights-94016. Kim, Sunny Jung, and Jeff Hancock. 2017. How Advertorials Deactivate Advertising Schema: MTurk-­ Based Experiments to Examine Persuasion Tactics and Outcomes in Health Advertisements. Communication Research 44 (7): 1019–1045. https://doi. org/10.1177/0093650216644017 . Kligman, M., and C.M. Culver. 1992. An Analysis of Interpersonal Manipulation. The Journal of Medicine and Philosophy 17 (2): 173–197. https://doi.org/10.1093/jmp/17.2.173. Levy, Neil. 2017. Nudges in a Post-Truth World. Journal of Medical Ethics 43 (8): 495–500. https://doi.org/10.1136/medethics-2017-104153. Niu, Evan. 2015. This Company Has the Best Business Model in Social Media. Accessed November 30, 2017. Noggle, Robert. 1996. Manipulative Actions: A Conceptual and Moral Analysis. American Philosophical Quarterly 33 (1): 43–55. ———. 2018. The Ethics of Manipulation. In Zalta 2018. Nussbaum, Martha Craven. 2000. Women and Human Development: The Capabilities Approach. The Seeley Lectures 3. Cambridge: Cambridge University Press. Pandey, E. 2017. Sean Parker: Facebook Was Designed to Exploit Human “Vulnerability”. https:// www.axios.com/sean-parker-facebook-exploits-avulnerabil ity-in-humans-2507917325.html. Parfit, Derek. 1984. Reasons and Persons. Oxford: Clarendon. ———. 2011. On What Matters. Oxford: Oxford University Press. Reinecke, Leonard, and Mary Beth Oliver, eds. 2016. The Routledge Handbook of Media Use and Well-Being. New York: Routledge. Reis, Harry T., Kennon M. Sheldon, Shelly L. Gable, Joseph Roscoe, and Richard M. Ryan. 2000. Daily Well-Being: The Role of Autonomy, Competence, and Relatedness. Personality and Social Psychology Bulletin 26 (4): 419–435. https://doi.org/10.1177/0146167200266002. Ryan, Richard M., and Edward L.  Deci. 2000. 
Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being. American Psychologist 55 (1): 68–78. Accessed September 09, 2019. Ryff, Carol D., and Burton Singer. 1998. The Contours of Positive Human Health. Psychological Inquiry 9 (1): 1–28. https://doi.org/10.1207/s15327965pli0901_1. Scanlon, Thomas. 1998. What We Owe to Each Other. 3rd ed. Cambridge, MA: The Belknap Press/Harvard University Press. Schroeder, Mark Andrew. 2007. Slaves of the Passions. New York: Oxford University Press. Sen, Amartya. 2011. The Idea of Justice. Cambridge, MA: Harvard University Press. Spahn, Andreas. 2012. And Lead Us (Not) into Persuasion…? Persuasive Technology and the Ethics of Communication. Science and Engineering Ethics 18 (4): 633–650. https://doi. org/10.1007/s11948-011-9278-y . Tiberius, Valerie. 2006. Well-Being: Psychological Research for Philosophers. Philosophy Compass 1 (5): 493–505. https://doi.org/10.1111/j.1747-9991.2006.00038.x. Tsikerdekis, Michail, and Sherali Zeadally. 2014. Online Deception in Social Media. Communications of the ACM 57 (9): 72–80. https://doi.org/10.1145/2629612. Vallor, Shannon. 2016. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. New York: Oxford University Press. van den Hoven, Jeroen, Martijn Blaauw, Wolter Pieters, and Martijn Warnier. 2018. Privacy and Information Technology. In Zalta 2018. Wood, Allen W. 2014. Coercion, Manipulation, Exploitation. In Manipulation: Theory and Practice, ed. Christian Coons, 17–50. Oxford: Oxford University Press. Young, Katie. 2017. Social Media Captures over 30% of Online Time. Accessed November 30, 2017. https://blog.globalwebindex.net/chart-of-the-day/social-media-captures-30-of-online-time/. Zalta, Edward N., ed. 2018. Stanford Encyclopedia of Philosophy. Summer 2018.

100

M. Klenk

Michael Klenk is a Philosopher of the Ethics of Technology and Metaethics. He is a Postdoctoral Researcher at Delft University of Technology. His current research explores philosophical and ethical issues related to human interactions with intelligent machines. A primary goal of his research is to develop a method for re-thinking human interactions with intelligent machines from an ethical perspective. He was a visiting postdoctoral scholar at Stanford and the Institute for Business Ethics in St Gallen. He completed his PhD at Utrecht University in 2018. Research Interests: Ethics of Technology, Digital Ethics, Metaethics, Epistemology and Human-Computer Interaction. [email protected]

Chapter 5

What Contribution Can Philosophy Provide to Studies of Digital Well-Being

Michele Loi

Abstract  In the utilitarian tradition, well-being is conceived as what is ultimately (non-instrumentally) good for a person. Since the right is defined as a function of the good, and well-being is conceived as the good, well-being is also considered an input to moral theory that is not itself shaped by prior moral assumptions and that provides reasons to act which are independent of concern for others and of moral concerns. The idea of well-being as the ultimate, context-independent "good for individuals" has been accepted by many philosophers who reject utilitarianism. Here I argue against this conception and maintain that the contribution of philosophy to the debate on digital well-being is not to determine a single concept of well-being that can be applied to all contexts, but to help with comparisons between lives and actions, particularly those affected by information environments. Thus, I distinguish four different concept-types of well-being, where each type corresponds to a different moral/political purpose: (1) a rights-related concept-type, (2) a market-based concept-type, (3) an emotional-functioning concept-type, and (4) an information-related concept-type.

Keywords  Digital well-being · Hedonism · Preference-satisfaction · Objective list theory · John Rawls · Thomas Scanlon · Derek Parfit · Digital platforms · Facebook

5.1  Introduction

"the right and the good are complementary; any conception of justice, including a political conception, needs both, and the priority of right does not deny this" (Rawls, Justice as Fairness: A Restatement).

M. Loi (*)
Digital Ethics Lab, Digital Society Initiative and Center for Biomedical Ethics and the History of Medicine, University of Zurich, Zürich, Switzerland
e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020
C. Burr, L. Floridi (eds.), Ethics of Digital Well-Being, Philosophical Studies Series 140, https://doi.org/10.1007/978-3-030-50585-1_5


This chapter defends two theses, a negative and a positive one. The negative thesis is that no justification of a substantive, systematic, and comprehensive philosophical view of the ultimate nature of well-being is possible that is fully independent of moral premises. The positive thesis is that philosophers should justify a plurality of different substantive views of well-being depending on the moral premises they rely on. None of these views should be considered the concept of well-being, that which makes a person's life go well for the person whose life it is, in general. The relevance of this meta-philosophical view for digital well-being is that a plurality of different well-being concepts are relevant for different purposes in the discourse on digital well-being. Moreover, depending on the goals for which a concept of digital well-being is introduced, different moral premises are justified. Hence, one should expect a plurality of philosophically different concepts of well-being to play a role in the debate on digital well-being. The meaningful role for philosophy is not to decide which among these concepts is the concept of well-being, in an absolute sense, or in general. Based on this view, I maintain that the contribution of analytical philosophy to the debate on digital well-being should be to make explicit (what I argue to be) the necessary link between theorizing about well-being and the moral or political goals of such theorizing. Consistently with this approach, I distinguish four different concept-types of well-being, where each type corresponds to a different moral/political purpose: (1) a rights-related concept-type, (2) a market-based concept-type, (3) an emotional-functioning concept-type, and (4) an information-related concept-type.
None of these concept-types is expected to express what makes a person's life good for that person ultimately, in a sense that is both (a) fully context-independent and (b) independent of assumptions about the function of the well-being concept in a moral context. Happily, the discussion of digital well-being can advance without waiting for philosophers to determine what well-being ultimately is in an absolute sense. Indeed, the arguments here support the view that there is little value in participating in a conversation framed around that question. Each concept-type is justified with reference to the function it is assumed to play in a specific evaluative or normative exercise – what these evaluative and normative exercises are must be determined in relation to the context of information environments, our control over them, and what it is reasonable to expect from them in specific circumstances and in specific social roles. The justification of this approach is a controversial philosophical thesis: well-being theories cannot be assessed as better or worse theories independently of moral beliefs. The first three sections of this paper defend this thesis. The argument I provide is the following:

(1) we do not have sufficient intuitions, or pre-theoretical linguistic beliefs, about 'well-being' in the philosophical sense, because 'well-being' is used as a term of art within philosophy with a different meaning from its everyday use;


(2) individuals have no practical reason to consider what is best in their lives from an exclusively prudential point of view, assessing the value of a life for them in abstraction from non-prudential (i.e. not well-being-constituting) values;
(3) the concept of well-being is not suitable to guide individual deliberation about the good life;
(4) for that reason, philosophical theories of well-being lack relevance with respect to an individual's decision-making about his or her own life;
(5) yet we can explain the utility of the concepts of individual "interest" or "advantage" by justifying the role they play within moral and political frameworks of evaluation;
(6) this counts in favor of conceiving well-being not as "an input into moral thinking that is not already shaped by moral assumptions" (Scanlon 1998, 109) but in a contextual manner, where the context shapes the moral assumptions which in turn shape the concept of well-being.

The essay consists of four sections. In Sect. 5.2 I characterize the meaning of well-being as a term of art. Since I cannot appeal to the reader's intuitions about well-being, I characterize this meaning by listing the five most important propositions that (most) analytic philosophers agree upon when they talk about well-being, even if they disagree about what ultimately constitutes well-being. These propositions describe the relation between well-being and other concepts and define the most fundamental conceptual distinctions between well-being (as it is used in this literature) and other concepts. So, the resulting characterization could be considered a functional one, not a substantive one. It is neutral with respect to the question "what does well-being consist of, ultimately?". I subsequently refer to these fundamental ideas as the axioms of well-being. In Sect. 5.3 I argue that the concept of well-being, which fulfills all five axioms, does not have practical significance for decisions about how to live, from a first-personal perspective. In Sect. 5.4 I draw implications for the debate on digital well-being.

5.2  What Do Contemporary Analytic Philosophers Talk About When They Talk About Well-Being?

My ambition in this section is to characterize a concept of well-being with sufficient precision to be able to criticize philosophical discourse that uses this concept. It is plausible that 'well-being' is used in analytic philosophy as a term of art, not with its lay meaning (Raz 2004; Crisp 2017). The concept of well-being, in a philosophical sense, is the concept of "what makes a person's life go best" (Parfit 1984, 492), introduced in Appendix I of Reasons and Persons and also characterized in the same text as the concept of what "would be most in a person's interest" (Parfit 1984, 492) and of "what would be best for someone" (Parfit 1984, 492). Here Parfit introduces a tripartite typology of well-being theories, comprising hedonism, desire-fulfillment, and objective list theories, that has become standard in the literature. This has been


discussed and employed as a framework, with minor variations, even among analytic philosophers who do not share most of Parfit's substantive moral views (Griffin 1986; Raz 1986; Scanlon 1993, 1998; Sumner 1996). 'Well-being' is understood as a term of art for a concept that is similar to happiness but potentially broader. The claim that well-being is happiness is regarded by these philosophers as a substantive philosophical claim, which can be false, not as stating that the two are synonyms. While the debate about the nature of happiness and of the good life is at least as old as philosophy itself, well-being as used in contemporary philosophy does not mean exactly happiness. The contemporary use of the concept has acquired a special prominence among philosophers sympathetic to utilitarianism (Raz 2004), but similar arguments are to be found in the work of philosophers whose substantive moral views bear little resemblance to utilitarianism, consequentialism, and even welfarism (e.g. Raz 1986, 2004; Scanlon 1998; Darwall 2010). Some philosophers recognize explicitly that well-being in this sense is a philosophical term of art; those who do not usually provide a definition in terms of characteristic philosophical expressions, such as "what would make this person's life go, for him, as well as possible" (Parfit 1984, 492); "how well a person's life is going for that person" and "what is good for" her (Crisp 2017); "the life which is good for the person whose life it is" (Raz 2004, 260); "what makes a person's life go better and worse from the point of view of the person who lives it" (Scanlon 1998, 109); "what it is for a single life to go well" (Griffin 1986, 7); "the good of a person in the sense of what benefits her" (Darwall 2010, 1); and "to deliberate about what is good for someone, or the good of someone, is to ask about what is beneficial or advantageous […] for or to him in particular" (Kraut 2007b, Chap. 1, italics in the original).
Expressions such as "good for X" are notoriously ambiguous, so most analytic philosophers integrate such definitions with examples or further distinctions that are meant to ensure that the appropriate meaning of the expression "good for X" is conveyed to the reader. I am skeptical that a reader will be able to assess the plausibility of claims about well-being by virtue of having pre-theoretical intuitions about the correct and incorrect linguistic norms of concept deployment. Thus, I will characterize the concept of well-being by identifying five assumptions shared by many philosophers who write about well-being in analytic philosophy. What characterizes this practice? Philosophers are interested in determining what well-being is at a high level of abstraction. They are not interested in questions such as "what kinds of events are associated with positive emotions?", but rather in questions such as "does well-being ultimately consist only of positive emotions?". A "move in the game" in this discussion is advancing a proposition P, typically a very simple and abstract one, about what well-being ultimately is – for example, "well-being is the satisfaction of desire". This amounts to identifying the "good-for-making property" (GFMP): that property of X by virtue of which X, being an event in some person P's life L, non-instrumentally makes L a better life for P. More simply, it is the property that is shared by all things that are good for (any) P (where


P is any entity for which a good life or well-being are possible).1 According to the desire-fulfillment theory, the GFMP is "fulfilling a desire of an individual". According to hedonism, it is pleasure. According to an objective-list theory including ten elements, for instance (e.g. friends, health, artistic expression, play, knowledge, esteem, contributing to the happiness of others, etc.), the GFMP is the disjunctive property of being one of the items in this list. Philosophers generally prefer a simpler theory ceteris paribus – e.g. hedonism is preferable to the ten-item list from the theoretical point of view. Even more importantly, the theory least vulnerable to serious counterexamples is to be preferred. For example, the proposition that well-being is the satisfaction of a person's actual desires is subject to the counterexample that what satisfies a person's desires may be disappointing or harmful to her (Griffin 1986).2 The proposition that it consists of pleasure and the absence of pain, or any other desirable mixture of conscious mental states, is vulnerable to Nozick's experience machine objection, i.e. the objection that the best experience one can have is not the best life one can have, if this experience is caused by an experience machine (Nozick 1974, 42). Such claims about well-being ("well-being consists of P", "X is an instance of P but it is not good for a person") can be considered the "moves of the game" in the linguistic practice of philosophy concerning well-being. I will present five general propositions that play a different role in this language game. They are not substantive claims about what well-being ultimately consists in, about which participants in the practice disagree. They are, rather, propositions that express the distinction between well-being and other notions. Many philosophers tend to agree about these propositions more than they agree about substantive views about well-being.
For example, they tend to use these propositions as premises in arguments that are meant to convince philosophers who hold different substantive views. I will refer to these propositions as axioms. They are:

First axiom: well-being is what is good for a person in an evaluatively important sense of "good for".

The problem with defining well-being as what is "good for a person" is that this expression is ambiguous (Hurka 1987). 'Good for' can mean many different things in different contexts. Without any ambition of being exhaustive, I distinguish at least a functional and an evaluative meaning of this expression (Crisp 1997). The functional sense is the one in question when we say that "calcium is good for my bones; frequent servicing is good for the engine of a car; water is good for plants; and the existence of a large population of intravenous drug-users is good for the HIV virus" (Crisp 1997, 500). The claims that food, health, sensory stimulation, and

1  As Richard Kraut observes, this kind of philosophical inquiry about well-being resembles the kind of questions found in Socratic dialogues, e.g. when Socrates asks his interlocutors what property all the virtues have in common (Kraut 2007b, Chap. 1).
2  Many cases in which this happens are cases in which the subject's desires are uninformed, so the objection is avoided by informed desire-satisfaction theory. But not all counterexamples can be avoided in that way (Kraut 2013).


warmth are always good for all humans may simply refer to the biological goals humans are built to pursue. This is a descriptive fact that in itself does not imply the desirability of such states for their own sake – no more than the fact that a large population of intravenous drug-users is good for the HIV virus implies its desirability.3 Most philosophers in this debate want to distinguish neatly the merely functional meaning of "good for" from the meaning of "good for" referring to well-being. This is why the first axiom of well-being includes the idea that "good for" in the relevant sense is salient from the evaluative or normative point of view. The relevant axiom is that well-being is what is "good for" a person in a sense which implies its being "worth having as a constituent of a life worth living" (Crisp 1997, 500) for its own sake.

Second axiom: well-being is an inclusive good.

Well-being is not just one particular way in which a life may be said to go well. For example, Scrooge's life may be said to go well financially, but there are many other dimensions in which such a life does not go well. The life of a severely cognitively incapacitated individual may go well from the emotional point of view, and yet it may be considered a bad life in terms of overall well-being. One may argue that the two examples are very different: a financially good life is only good for an individual instrumentally, while a life comprising positive emotions is good intrinsically. What the two examples have in common is that both are narrow conceptions of well-being, while analytical philosophers discuss the reference of a broader, more inclusive concept. This is a distinct conceptual requirement characterizing the typical philosophical concept.4

Third axiom: what makes a person's life go well in the sense of well-being is conceptually independent of what makes it good from the moral point of view.

Equivalently, one may claim that "although the notion of well-being is important for morality, it is not itself a moral notion" (Scanlon 1998, 109), and that "it represents what an individual has reason to want for him or herself, leaving aside concern for others and any moral restraint or obligations" (Scanlon 1998, 109). The third axiom can be made more precise if it is expressed as the conjunction of two claims:

3  Some neo-Aristotelian virtue theorists may object to this distinction, because they regard natural facts about the function and goals of living entities as normatively relevant (Foot 2001; Hursthouse 2001; Kraut 2007b). Neo-Aristotelians typically argue on conceptual grounds that the questions about what is good for humans and what benefits them are related. In addition to this, neo-Aristotelians typically adopt moralized notions of the good life, whereby "acting well" also in the moral sense is considered a constitutive element of the human good or purpose. Thus, neo-Aristotelians typically reject some of the axioms listed here as characterizing the dominant philosophical parlance (e.g. they may reject any of axioms three, four, and five, sometimes all of them).
4  Alexandrova (2017b, XXXIV) characterizes this as "the agent's overall all-things-considered well-being".


1. It is conceptually possible for a life to be at the same time the best life a person can have from the point of view of that person's well-being and the worst life from the moral point of view.
2. It is conceptually possible for a life to be the best life a person can have from the moral point of view and the worst life from the point of view of well-being.

These claims should not be confused with the much stronger claim that it is conceptually impossible for the life that is the best a person can have from the moral point of view and the best a person can have from the point of view of well-being to be the same life. It is certainly conceptually possible that this could happen, and maybe it has even happened in the case of one or more individuals. A world in which this is always the case for all individuals is conceivable. But this is not to say that such a possible world is necessary.5

Fourth axiom: the life that is "best for a person", in the sense of well-being, is not always and necessarily "the life a person has most reason to prefer, all things considered".

The fourth axiom follows directly from the third, if one assumes that people have a reason to act as morality requires and that this reason may at least sometimes defeat all other practical reasons people have. The axiom is also regarded as plausible independently of this assumption. Let us provide the kind of example that is often used to prove this point. Mary has all the personal traits and predispositions it takes to be an excellent surgeon. She is ready to embrace new challenges, careful and meticulous in the execution of her tasks, open to criticism, and willing to reconsider the value of her performance in order to improve it. Mary is precociously recognized as the most talented person in her medical school to work as a brain surgeon. Unfortunately, Mary also scores very high in the personality trait of 'neuroticism'. She is prone to self-doubt, frustration, worry about her performance and her patients, and fears, and the negative consequences of this personality trait are amplified by having responsibilities towards others. Mary loves being a surgeon and helping others, but she is also aware of the cost of such a life in terms of her quality of life. She remembers the day of her promotion as head surgeon as the happiest of her life, but since that day she has not been able to sleep more than five hours a night without the help of medications. Mary feels personally accomplished by fulfilling her role, and she thinks that, all things considered, this is the life she has most reason to choose, considering all the good she can bring to the world by curing her patients (in spite of this, she does not think that she has a moral duty to live such a life). Yet Mary also believes that if she had made a different choice – e.g. selling her stocks and other savings and retiring to the countryside to grow a vineyard and produce her own wine – she would have obtained a life higher in well-being. This description, it is

5  Philosophers who stress this point often evoke the image of Mafiosi drinking their Martinis by the pool, which is meant to suggest an actual example of people with a life high in well-being but low in moral value. I do not find the first part of this claim to be persuasive.


argued, seems entirely coherent. Yet it would be contradictory if the fourth axiom were false.

Fifth axiom: the concept of a life that is good for a person is neither the concept of a life that is good objectively, in the sense that it is constituted by events or actions that are good in an impersonal sense (e.g. intrinsically good or worth promoting), nor the concept of an objectively good instantiation of a natural kind.

The fifth axiom can be associated with two claims: first, the life that is best for one person from the point of view of well-being is not, as a matter of conceptual necessity, the life that contains the greatest amount of good from an impersonal point of view. Second, the life that is best from the point of view of objective goodness is not necessarily the best life for the person who lives it. A life that is good in the sense of well-being is "good for" the person whose life it is. A life that is good simpliciter is good in an impersonal sense or because it contains many events or actions that are good in an impersonal sense. States of affairs, events, and actions that are good in an impersonal sense are those that everyone has reason to want to see realized (Hurka 1987; Sumner 1996). A "reason" here may refer to a moral reason – e.g. the idea that some things are intrinsically good from the moral point of view (Moore 1993b). What makes a life "good" in this sense are the impersonally good events/actions that the life contains (Moore 1993a; Hurka 1987). For example, a life may be full of objectively valuable states: a person's life may include knowledge and artistic excellence (Moore 1993a). One may say that such life-events are objectively valuable and that they make the life in question morally worth promoting, from everyone's point of view. But it does not follow as a matter of logic, so it is typically argued (Hurka 1987; Sumner 1996; Scanlon 1998), that they make that life good for the person whose life it is, in the sense of well-being. Let us now consider a distinct conceptualization of objective goodness, what some philosophers have called 'perfectionist value' (Griffin 1986; Sumner 1996; Hurka 1996). Perfectionist value is the value something has by virtue of being a good instance of its kind, e.g. being a good orchid in the sense of being a good instance of the kind orchid.
According to a tradition starting at least with Aristotle, when we apply this concept to a human person, we ought to consider that person's disposition to perform the characteristic functions of humans as a kind (Foot 2001; Kraut 2007a). In other words, a good human is a human with the stable disposition to perform characteristically human functions well (McDowell 1980; Foot 2001; Hursthouse 2001). Perfectionism about well-being is the view that "[t]he level of well-being for any person is in direct proportion to how near that person's life gets to this ideal" (Griffin 1986, 56). Since being a good instance of a kind and being "good for" in the sense of benefit are not the same concept, many analytic philosophers regard this identification as a substantive view about the GFMP (Griffin 1986, 57), not as a conceptual truth about the meaning of "well-being". As a substantive view about well-being, the Aristotelian idea of a single ideal of the good life suggests a certain rigidity (Kraut 2013; Griffin 1986, 58). By contrast, what makes an individual's life go well appears to be much more variable and subjective (Sumner 1996), at least in the sense that there is no "single right balance" (Griffin 1986, 58)


of objectively valuable goods, corresponding to a single ideal of the good life valid for all.

5.3  Does the Concept of Well-Being Have a Practical Role Independent of Moral Assumptions?

In Sect. 5.2 I have listed the axioms of the philosophical usage of well-being. If well-being is conceptually independent of both morality and proximity to human perfection, as suggested by axioms 3, 4 and 5, then we ought to find out what well-being ultimately consists in by consulting our lay intuitions about what makes our lives – and human lives in general – go best (in the relevant sense). I will now argue that a philosopher cannot determine what is the best, or most plausible, theory of well-being independently of philosophical tests that presuppose the validity of some moral beliefs. The usual philosophical method of reflective equilibrium (Rawls 1971) requires that we select the theory that best explains our particular evaluative beliefs, where our particular beliefs may also change in response to considering arguments deriving from the theory itself. If a theory of well-being is independent of moral belief, it should be possible to identify the best philosophical theory of the nature of well-being through arguments that are independent of our moral beliefs. By 'moral beliefs' I mean the following six classes of beliefs: (1) beliefs about what is morally right, required by morality, morally permissible and impermissible; (2) beliefs about moral duty in the strict sense; (3) beliefs about moral virtue; (4) beliefs about what states of affairs are impersonally good in the moral sense; (5) beliefs about what traits or features of living entities are morally admirable (in the sense that they should be admired, from the moral point of view); and (6) beliefs about what one has a prima facie and/or a pro tanto moral reason to do, feel or admire, where pro tanto moral reasons may derive from any of the above. First, we do not have rich pre-theoretical intuitions about well-being.
This is because well-being, characterized as what is “good for a person”, is a philosophical term of art. Philosophers define well-being to mean what is good for a person; they do not argue that this is the way the term is used in ordinary talk (Crisp 2017; Raz 2004). For example, ordinary folk may use the term to refer to health, or to positive feelings. Second, we can elicit reflective beliefs about well-being only if we identify a role for the concept of well-being. Suppose that this role is unrelated to moral beliefs. Then the concept belongs to prudential reflection, identified as a type of reflection that determines the individual good of a subject in a sense that should not be confused with the good in a moral or perfectionist sense. But ordinary prudential
reflection (e.g. should I take this job?) involves a mixed bag of reflections concerning all the different ways a life can be good. Third, real-world existential choices are best interpreted as reflecting a complex balance of practical considerations, reflecting the attractions of different goods that are appreciated at a lower level of abstraction than the categories of the “moral good” versus the “good for a person”. These less abstract goods can be described in quite abstract terms as pleasures, enjoyment, achievements, moral virtue, authenticity, excellence in the realization of one’s abilities, self-realization through work, etc. Some of these goods are clearly prudential and some are clearly moral; however, some are in between, e.g. the good of accomplishment, as I shall argue next. Fourth, reasonable decisions follow from attaching the right amount of importance to different combinations of these “intermediate goods”, which different possible lives may contain in different amounts and degrees. In order to do that wisely, one does not need to keep a separate accounting sheet where the contribution of particular choices, decisions, or feelings towards well-being and their contribution to the moral value of a life are entered as distinct accounting columns. If the third and fourth premises hold true, well-being does not play an important role in deciding how to live, if conceived independently of any “concern for others and any moral restraint or obligations” (Scanlon 1998, 109). I will now argue for the third and fourth premises. Consider the prudential value of the good of the accomplishment of something worthwhile. Many philosophers have argued that this is an essential element of the GFMP in an “objective list” conception of it (Griffin 1986; Raz 1986; Scanlon 1998; Arneson 1999). But the worth of many personal accomplishments – e.g.
achieving the independence of a country, saving the whales from extinction, preventing unnecessary human suffering by working as a physician for a charity – is not conceptually independent of their moral value, and any partition of their worth into distinct moral and prudential dimensions appears somewhat arbitrary. Consider the following example. One may think that the most effective means for promoting the impersonal moral value of saving lives is to choose the most remunerative profession one can find, which enables one to maximize one’s donations to life-saving charities.6 It seems plausible that a reasonable agent could believe this and yet prefer a life in which she saves human lives directly, considering it a better life for her in a broader sense. It would be odd to characterize the latter as a life that is better for the individual independently of its moral value, so the distinction between morality and the personal good is not sharp. It may be objected that the agent may “prefer” to save lives directly, rather than indirectly, because it takes a certain level of skill and knowledge, or perhaps courage, to save a human life directly, and also because such skill, knowledge, and courage can be good for the subject, and a cause of self-esteem, also considered intrinsically good. In reply, a subject may prefer a life in which he saves lives directly to one in which he saves the same number of lives indirectly and achieves the same skill, knowledge, courage and self-esteem doing other things that do not involve benefiting others. While one could assume that the two lives contain the same amount of impersonal moral value, it seems odd to describe the preference for the first life as prudential, if prudence by definition concerns reasons that do not stem from “concern for others and any moral restraint or obligations” (Scanlon 1998, 109). Instead, the choice seems to follow from a particular way of valuing others that calls for personal involvement. Second, one arguably does not need to use the concept of well-being to make wise decisions about the best life one can live. This is because well-being is transparent to practical reason in the first-personal perspective (Scanlon 1998; Raz 2004). Here, ‘transparent’ means that if I have good reasons to prefer life A to life B, because life A contains a different mixture of goods from life B, the further fact of the contribution of the goods in A to my well-being does not provide me with a further reason to prefer A to B. What makes A more desirable than B are the goods in it, not the overall well-being value which supervenes on them. Consider Mary, the head surgeon from our previous example, again. Suppose at 57 years of age she wonders whether she should quit or instead continue until she is 65. If Mary chooses to leave her job and become a small wine producer, her reasons to do so are provided by the pleasantness, lack of stress, and better health promised by that prospective life. Mary could believe that these goods are more important than all of the further life-accomplishments she could still achieve as a surgeon. At this point in her career, the vineyard life would be the most choiceworthy life, all things considered, for her – the life she has most reasons to choose.

6  This is a simplified version of the effective altruism idea (Effective Altruism 2019). To be fair to the ideal, effective altruism may be achieved not only through the “earning to give” strategy but also through direct engagement, depending on the individual and his or her circumstances.
Does this assessment need the concept of well-being? First, Mary may identify the vineyard as the overall best choice now, because it is more enjoyable. She may, for example, also consider the moral goodness of saving lives, but conclude that she has contributed enough to that goal and is not morally obliged to do more. Mary can reach this conclusion without first asking “what makes my life better for me, independently of concerns for others and moral considerations?” and then, having established that it is the vineyard life, evaluating further that the life that is best for her should take priority over any moral concern. Indeed, when I strive to think about the contexts in which it would really be hard to exercise practical reason without utilizing the concept of well-being, I end up thinking about contexts in which what is at stake is an individual’s relation to others, for example assessing the stakes of a collaboration or achieving fair deals. For instance, we need some kind of concept of well-being to determine whether two parties in an exchange both benefit from it (a win-win situation).7 So it is not surprising that the concept of well-being (or welfare, as it is often called) has a meaningful role to play in economics: assessing such exchanges, both normatively and in terms of the likelihood that actors would spontaneously engage in them, is a significant part of what economists do. Arguably, win-win arrangements are also relevant in political philosophy. For example, some social contract theories define social justice in terms of the fairness of an arrangement, where such fairness is also defined in terms of mutual gains or Pareto optimality (Barry 1989). More generally, the concept of well-being seems important when questions of justice or fairness, or our moral duties to disadvantaged others, need to be assessed. Paradoxically, the question “what is well-being, and how should we measure it?” seems more relevant when we move from the domain of individual prudence to the domain of social morality – of ‘what we owe to each other’, to use Scanlon’s (1998) apt expression. Section 5.4 draws out the implications of this view for the debate on digital well-being.

7  Kraut describes the context of cooperation, negotiation, bargaining, compromise, competition and fighting as one in which the desire-fulfillment notion of well-being is plausible. Our goals in that context justify treating people’s preferences as given (Kraut 2007b, Chap. 30).

5.4  How to Justify a Philosophical View of Digital Well-Being

Given the arguments in the previous section, one may reach the conclusion that all philosophical discourse concerning well-being is bound to be entirely fruitless. Yet this does not follow from the argument I advanced. When one asks “what constitutes well-being?” in the context of an inquiry that has a practical purpose, one is not asking what makes a human life go well, ultimately, in some general sense. Instead one is answering a different question. The question is, roughly, the following: what are those dimensions of the good life, of individual interest, or benefit, or advantage, that it makes sense to combine together for the purpose of making the kind of comparisons (both intrapersonal and interpersonal) that one needs to make in order to achieve a given goal? A theory of well-being in this sense is always a theory of what makes individuals “better off”, but the specific respect in which one is said to be “better off” is determined contextually.8 Different practical purposes imply different adequacy conditions for the concept of being “better off” which is used in different contexts. Notice that by “practical” I do not mean here “immediate”. A practical goal can be to address the plague of poverty, or to identify what makes a society just. Here I will sketch four different roles that a concept of well-being may play for purposes related to the moral assessment of information environments. By ‘information environment’ I mean what Floridi calls the ‘infosphere’: the sum total of (meaningful) information that is used as a resource, produced, or taken as a target of other actions (Floridi 2013, 2014). The list is far from exhaustive. For example, it does not include the fully moralized Aristotelian idea of the good digital life, understood as the life instantiating the digital virtues (Plaisance 2013; Vallor 2016).

8  This claim coheres with a contextualist analysis of well-being (Alexandrova 2017a, 3–25) and with the (different) idea that the expression “well-being” indicates different, but related, concepts in different contexts.


5.4.1  Rights-Related Concepts of Digital Well-Being

First, one may distinguish a rights-related concept of digital well-being. This is the concept of digital well-being that fulfills the moral function of comparing lives (both intra-personally and inter-personally) for the sake of evaluating whether persons’ rights are exposed to risk. The relevant rights here are also those that persons have everywhere, not just in virtue of interacting with information environments. For example, subsistence and security can be considered elements of rights-related well-being, because it can be argued that a minimal level of security and subsistence is a presupposition of the existence of all other rights (Shue 1996). For example, I do not really enjoy the substance of a right to free speech if I enjoy no protection of my security against thugs who may be paid to beat me if I speak against the local governor. Similarly, I do not enjoy it if I can be fired for expressing a political position that my employer disagrees with. Subsistence and security are rights-related aspects of well-being even in information environments. Clearly, my rights are under threat if, due to a security breach, malicious third parties can break into my online bank account and deprive me of those material resources on which my life depends. One rights-related aspect of well-being of special importance in information environments is the good of privacy. Arguably, if one has a right to dignity – the capacity to exist in public without feelings of humiliation (Sen 1995) – then the good of privacy has to be presupposed.

5.4.2  Market-Based Concepts of Digital Well-Being

Second, we may need a market-based concept of well-being. This is relevant from the point of view of a policy-maker who aims to assess the economic value produced by information environments. Which dimensions belong to this concept is defined by considering the limits of the state, which is a question of political philosophy. In a liberal society, for example, one may want such authorities – e.g. competition and antitrust authorities – to steer away from substantive and controversial comprehensive assumptions about the good life when evaluating the economy. In this context, an actual-desire-fulfillment theory of consumers’ well-being could be justified as a complement to a rights-related concept of digital well-being. The two forms of well-being together can be seen as parallel but distinct directives steering a liberal government’s economic policy. First and foremost, political authorities should protect the rights-related well-being of citizens; second, and only in ways compatible with the first constraint, political authorities should promote a market-based conception of well-being. In the first role, the regulator makes strong assumptions about the interests of people: these are objective, not subjective, and are defined as the interests protected by fundamental rights. In the second role, the political authority takes the desire-satisfaction view that consumers themselves determine what is good for them through their actual desires. The last claim is not
valid in an absolute sense, but it is a reasonable assumption given the function of political power in a liberal perspective. So, this is a suitable concept for competition authorities protecting the digital market from the abuses of dominant digital players. Their legitimate purpose is to ensure that citizens qua consumers obtain from markets what they want. Their goal is not to ensure that they get what a benevolent paternalist authority thinks would be good for them. It is a conception of well-being derived from a political idea of citizens’ freedom in a liberal political arrangement and, clearly, it depends on that idea for its justification.

5.4.3  Emotional-Functioning Concepts of Digital Well-Being

Third, the debate on digital well-being may need emotional-functioning concepts of well-being.9 Such concepts are relevant from the perspective of educators, medical professionals and, more broadly, carers. They are relevant from the perspective of someone, or some agency, responsible (because of a professional or contractual obligation) for the long-term sustainability of the good emotional functioning of an individual who interacts with an information environment. Good emotional functioning can be considered a wide-purpose capability and an enabler of most other achievements in a person’s life. Borrowing Rawlsian terminology (Rawls 1971), it may be considered a “natural primary good” – an all-purpose natural resource. The kind of emotional-functioning conception that can play this role identifies well-being with happiness, in a sense that differs from hedonism. Happiness in this sense is a feature of the individual as a whole, so it is not reducible to pleasant experience (Haybron 2011). To some extent, the concept belongs to empirical psychology rather than philosophy, since determining when and how much an information environment impacts emotional functioning is clearly a task of empirical psychology.10 But methodological questions about the definitions of well-being in empirical psychology are not wholly independent of normative questions that have a philosophical dimension (Alexandrova 2017b). Some psychologists may characterize emotional well-being as a net positive contributor to human flourishing, understood along neo-Aristotelian lines (see Axiom 5).11 Notice that this concept differs in both substance and goals from both market-related and rights-related well-being. Desire-satisfaction – what market regulators legitimately promote – may not lead to emotional well-being.
Even the interests protected by fundamental political rights have an order of priority that does not have to reflect their contribution to emotional well-being. At the same time, emotional well-being is normatively important due to its connection to both flourishing and rights, in particular the right to health. For example, trade unions may have a legitimate interest in assessing the implications of introducing artificial intelligence into the workplace for the emotional well-being of workers.

9  For further discussion of the role of emotions in digital well-being, see Marin and Roeser (Chap. 7, this collection).
10  I owe this point to a referee of this text.
11  For example, Fredrickson (2001) explicitly aims to identify a contributor to optimal functioning. This is a flourishing construct, related to the (neo-)Aristotelian theory of the good life. Alexandrova (2017b, XXXVI) lists five empirical well-being measures as related to this theory/concept.

5.4.4  Information-Related Concepts of Digital Well-Being

Last, but not least, it is fruitful to formulate and discuss information-related concepts of digital well-being. These are concepts of well-being that group together all those dimensions of a person’s well-being that it is reasonable to regard as the mission of an agent controlling an information environment to promote. Consider the information environment controlled by Facebook, Inc. – comprising the web platforms Facebook and Instagram. The management of this company can only reasonably be regarded as responsible for some aspects of digital well-being, not all. After all, not all aspects of a person’s life can be improved by improving Facebook and Instagram, and not all ways in which Facebook and Instagram affect users can be improved by their owners and managers. Yet some can be, and it may be useful in some cases to group such dimensions together, e.g. for the purpose of evaluating a creator or manager of an information environment from a moral perspective. In such a context, one may have reasons to consider only those aspects of the good life of the users that the entity generating the environment can affect and for which it can be considered accountable. Information-related concepts of well-being are relative to contexts, because different information environments are designed to generate information about different goods. Consider the hypothetical case of Facebook claiming to design its platform functionalities only with its users’ interests in mind. It is not obvious that Facebook should take it upon itself to promote the emotional well-being of its users, or their market-related well-being. (The former may lead Facebook to aggressively police emotionally unsettling but legitimate content; the latter may justify more privacy-invasive and more successful profiling techniques.) Facebook’s explicit mission is, after all, to enable personal connections and communities.
So, there are contexts in which it is entirely appropriate to measure how Facebook lives up to its stated mission. Treating it as the steward of all possible (and mutually contradictory) dimensions of the well-being of its users may be wholly inappropriate.12 The duty to promote cybersecurity on the platform, on the other hand, can be justified by appealing to the rights-related conception of users’ interests.

12  This domain-contextualism of well-being concepts used to evaluate platforms can be justified in two ways: normatively, it coheres with the liberal political idea that only individual citizens are responsible for their well-being as a whole; pragmatically, it is supported by the idea that mid-level theories of well-being (those that can be studied scientifically) ought to be “practical and usable in addition to being plausible” (Alexandrova 2017b, 56), since measuring meaningful proxies of an agent’s overall, all-things-considered well-being is not typically feasible.

A final problem to be considered concerns the trade-offs between the different concepts of well-being at stake. There is not sufficient space here to provide a comprehensive answer, so I will only provide a sketch of a possible solution that follows John Rawls’s (1971) idea of a hierarchy between different goods and the principles concerning them. This is provided only as an illustration of how a pluralist conception of well-being may be coupled with a theory that is pluralist in terms of the principles it involves. The principles of digital justice I propose are the following:

1. Harm principle – information environments are well-ordered (a) only if they do not significantly increase the risk of significant harm to which any user is exposed, and (b) if (i.e. not only if) they reduce the (aggregative) risk of significant harm to which all users are exposed.

2. Non-discrimination principle – unequal prospects of obtaining goods for some participants in information environments, which result from the expression of human preferences in that information environment, are just only if users who meet the relevant preferences of other users to a similar degree have similar prospects of earning the information-related goods of those digital environments, irrespective of their unequal irrelevant traits (such as, depending on the context, sex, gender, race, ethnicity, religion, etc.).

3. Benefit principle – inequalities in power and responsibility between different roles of participants in information environments are just only if a less unequal distribution would be less beneficial for the role that has the least power in shaping the digital environment in question.

The three principles are hierarchically ordered, that is, the nth principle must be satisfied before the (n+1)th is satisfied.
An information environment that satisfies them is well-ordered. The three principles of digital justice relate to the different goods described above as follows. The first principle uses a notion of “harm” that does not refer to preference satisfaction, but instead to rights-related interests, e.g. security, health and privacy. The second principle uses a concept of “goods” that are information-related, i.e. the ones directly controlled by an information environment and related to its primary mission. For example, it is not necessarily unfair for Facebook users to have different prospects of human connections and communities, as long as the inequalities are uncorrelated with irrelevant traits. Finally, power and responsibility in the third principle refer to the powers and responsibilities involved in designing and managing an information environment, and the word “beneficial” refers again to the information-related goods of that environment. The third principle implies that it is fair for information environments to be organized hierarchically, and for some individuals to exercise power and influence over others by virtue of designing and managing them, (only) if this is necessary to benefit the users. But no benefit in terms of information-related goods can justify an information environment that is discriminatory (principle 2) or that puts people’s fundamental rights at significant risk (principle 1).


5.5  Conclusion

In conclusion, there are reasons to be skeptical that philosophy can contribute to the debate on digital well-being by revealing the ultimate nature of well-being in general. It is more fruitful to use philosophy as an aid to constructing and justifying a plurality of well-being concepts, which are meant to fulfill different theoretical and practical functions. The nature of well-being is thus relative to the function the concept plays in a context. The context can be linked with a set of moral presuppositions, which may not be valid in a different context. I have highlighted four different possible functions for discourse on digital well-being and distinguished the corresponding types of well-being concept. The validity of a construct of digital well-being, understood as a public standard, will always depend on the ethical plausibility of the premises and goals that stand behind it.

Acknowledgements  The author wishes to thank Prof. Gianfranco Pellegrino for many thoughtful comments on, and criticisms of, an earlier version of this paper.

References

Alexandrova, Anna. 2017a. A Philosophy for the Science of Well-Being. New York: Oxford University Press.
———. 2017b. Introduction. In A Philosophy for the Science of Well-Being, by Anna Alexandrova, XIII–XLV. New York: Oxford University Press.
Arneson, Richard J. 1999. Human Flourishing versus Desire Satisfaction. Social Philosophy and Policy 16 (1): 113–142.
Barry, Brian. 1989. Theories of Justice. California Series on Social Choice and Political Economy 16. Berkeley: University of California Press.
Crisp, Roger. 1997. Raz on Well-Being. Oxford Journal of Legal Studies 17: 499.
———. 2017, Fall. Well-Being. In The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta. Metaphysics Research Lab/Stanford University. https://plato.stanford.edu/archives/fall2017/entries/well-being/.
Darwall, Stephen. 2010. Welfare and Rational Care. Princeton: Princeton University Press.
Effective Altruism. 2019. Effective Altruism Home. Effective Altruism. Accessed October 24, 2019. https://www.effectivealtruism.org/.
Floridi, Luciano. 2013. The Ethics of Information. Oxford: Oxford University Press. https://books.google.it/books?hl=en&lr=&id=_XHcAAAAQBAJ&oi=fnd&pg=PP1&dq=floridi+the+ethics+of+information&ots=fXmJ5-WtQU&sig=qBb5n0b99KK2jBLGh09PmSRkmN8.
———. 2014. The 4th Revolution: How the Infosphere Is Reshaping Human Reality.
Foot, Philippa. 2001. Natural Goodness. Oxford: Oxford University Press.
Fredrickson, Barbara L. 2001. The Role of Positive Emotions in Positive Psychology: The Broaden-and-Build Theory of Positive Emotions. American Psychologist 56 (3): 218.
Griffin, James. 1986. Well-Being: Its Meaning, Measurement, and Moral Importance. Oxford: Clarendon Press.
Haybron, Dan. 2011, Fall. Happiness. In The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta. Metaphysics Research Lab/Stanford University. https://plato.stanford.edu/archives/fall2011/entries/happiness/.
Hurka, Thomas. 1987. ‘Good’ and ‘Good For’. Mind 96 (381): 71–73.


———. 1996. Perfectionism. Oxford: Oxford University Press on Demand.
Hursthouse, Rosalind. 2001. On Virtue Ethics. New York: Oxford University Press.
Kraut, Richard. 2007a. Nature in Aristotle’s Ethics and Politics. Social Philosophy and Policy 24 (2): 199–219.
———. 2007b. What Is Good and Why: The Ethics of Well-Being. Kindle. Cambridge, MA: Harvard University Press.
———. 2013. Desire and the Human Good. The American Philosophical Association Centennial Series: 255–270.
McDowell, John. 1980. The Role of Eudaimonia in Aristotle’s Ethics. In ed. Amélie Oksenberg Rorty, 359–376. University of California Press.
Moore, George Edward. 1993a. Principia Ethica. Ed. Thomas Baldwin. 1st ed. Cambridge: Cambridge University Press.
———. 1993b. The Conception of Intrinsic Value. In Principia Ethica, ed. Thomas Baldwin, 1st ed., 280–298. Cambridge: Cambridge University Press.
Nozick, Robert. 1974. Anarchy, State, and Utopia. New York: Basic Books.
Parfit, Derek. 1984. Reasons and Persons. Oxford: Clarendon Press.
Plaisance, Patrick Lee. 2013. Virtue Ethics and Digital ‘Flourishing’: An Application of Philippa Foot to Life Online. Journal of Mass Media Ethics 28 (2): 91–102. https://doi.org/10.1080/08900523.2013.792691.
Rawls, John. 1971. A Theory of Justice. 1st ed. Cambridge, MA: Harvard University Press.
Raz, Joseph. 1986. The Morality of Freedom. Oxford: Oxford University Press.
———. 2004. The Role of Well-Being. Philosophical Perspectives 18: 269–294.
Scanlon, Thomas. 1993. Value, Desire, and Quality of Life. In The Quality of Life, Digital Edition. Oxford: Oxford University Press. https://www.oxfordscholarship.com/view/10.1093/0198287976.001.0001/acprof-9780198287971-chapter-15.
———. 1998. What We Owe to Each Other. Cambridge, MA: Belknap Press of Harvard University Press.
Sen, A.K. 1995. Inequality Reexamined. New York: Harvard University Press.
Shue, Henry. 1996. Basic Rights: Subsistence, Affluence, and U.S. Foreign Policy. 2nd ed. Princeton: Princeton University Press.
Sumner, L.W. 1996. Welfare, Happiness, and Ethics. Oxford/New York: Clarendon Press/Oxford University Press.
Vallor, Shannon. 2016. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. New York: Oxford University Press.

Michele Loi is Senior Researcher at the Digital Society Initiative and at the Institute of Biomedical Ethics and the History of Medicine of the University of Zurich. A political philosopher by training (PhD, LUISS Guido Carli 2007), with a thesis on the concept of well-being, since 2013 Michele Loi has been working on ethical issues at the intersection of public health ethics and data ethics. He has held research positions in Milan, Braga and Zurich (ETH) in departments of philosophy, bioinformatics and systems biology, and has been a consultant for the World Health Organization’s Global Health Ethics department, contributing to ethical guidelines on responses to infectious disease epidemics and public health surveillance. His current research focuses on the ethics of algorithms and big data. At a high level of abstraction, he has always been asking the same question: ‘How should we reinterpret our political and moral values as the technology around us changes ourselves and society?’ [email protected]

Chapter 6

Cultivating Digital Well-Being and the Rise of Self-Care Apps

Matthew J. Dennis

Abstract  Increasing digital well-being is viewed as a key challenge for the tech industry, largely driven by the complaints of online users. Recently, the demands of NGOs and policy makers have further motivated major tech companies to devote practical attention to this topic. While initially their response has been to focus on limiting screentime, self-care app makers have long pursued an alternative agenda, one that assumes that certain kinds of screentime can have a role to play in actively improving our digital lives. This chapter examines whether there is a tension in the very idea of spending more time online to improve our digital well-being. First, I break down what I suggest can be usefully viewed as the character-based techniques that self-care apps currently employ to cultivate digital well-being. Second, I examine the new and pressing ethical issues that these techniques raise. Finally, I suggest that the current emphasis on reducing screentime to safeguard digital well-being could be supplemented by employing techniques from the self-care app industry.

Keywords  Digital well-being · Self-care · Screentime · Persuasive technology

6.1  Introduction: Seeking Digital Well-Being

It is hard to overstate today’s concern with digital well-being. Over the last 3 years there has been fervid interest in this topic, both from the general public and from the tech industry.1 Developing online products with a focus on our well-being was a major theme of Google’s influential I/O annual showcase this year (2019), and over the last 24 months each of the so-called ‘big-five’2 social networking companies

 This interest is often reflected in today’s broadsheet press. See Roose (2019) and Schwatz (2019).  Twitter, Facebook, Instagram, Google, YouTube.

1 2

M. J. Dennis (*) Department of Values, Technology, and Innovation, TU Delft, Delft, The Netherlands e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020 C. Burr, L. Floridi (eds.), Ethics of Digital Well-Being, Philosophical Studies Series 140, https://doi.org/10.1007/978-3-030-50585-1_6

119

120

M. J. Dennis

have launched a dedicated page detailing how their products can be used to create a healthy digital life. These initiatives are motivated by a broad recognition among Silicon Valley executives over recent years that their products have had a number of unintended effects. Not only is there now wide agreement that online technologies have exacerbated collective political problems (Trottier and Fuchs 2015; Graham and Dutton 2019), but there is now growing evidence that aspects of these products have contributed to a veritable assault on our individual digital well-being (Goh et al. 2019; Goodyear et al. 2018; Samad et al. 2019). While in global terms the number of us getting online is quickly and steadily rising, tech companies are worried that a growing minority of culturally-influential and economically-wealthy digital natives are choosing to radically limit their online presence because of what they regard as pernicious effects on their lives.3 Some users have even used digital platforms to detail how little time they spend online. It would be cynical to suggest that trying to retain these users is the sole motivation for this surge of corporate interest in digital well-being, but it would be naïve to regard these two trends as entirely unconnected. Awareness of the problems associated with the ‘always-on’ culture gained a massive leap in industry attention in 2013 when then Google employee, Tristan Harris, sent a memo to each of Google’s 100,000 employees about the chronic amount of meaningless distractions that he claimed their products generate in users. Harris’ PowerPoint presentation has now become an industry classic (Harris 2013). It highlights the damage done to digital well-being though Google’s own products, some of which Harris had worked on himself. After the fall-out from his slideshow had subsided, Harris was promoted to ‘Chief Design Ethicist’ at Google, but left to found his own NGO that promotes digital well-being specifically in 2015. 
This organisation, the Center for Humane Technology, is now an influential lobbying group. It petitions companies within the tech industry to adopt a mandate that is more sensitive to the effects of their products on users. It focuses on concerns relating to digital well-being, and offers practical advice recommending how users can limit their screentime on digital products.

Following the trajectory of Harris’ concerns, this chapter has four aims. First, I introduce self-care apps, a new use of app technology that adopts a remarkably different approach to digital well-being than Harris and his colleagues at the Center for Humane Technology. Here I lay out the main techniques of these apps, focusing on their potential to move beyond traditional practices of self-care. Second, I expose the problems with using apps to improve our digital well-being, an issue that is closely tied to the fact that they require us to spend ever more time online. Third, I introduce the ethical issues that pertain to self-care app technology. While certain issues can be regarded as extrapolations of previously explored issues in ethics and the philosophy of technology, some pertain to self-care apps specifically. Finally, in conclusion, I sketch out the ways in which self-care apps could be improved, focusing on how we can mitigate some of the ethical problems that are associated with the technology in its current form.

3  In a recent article, Sullivan and Reiner claim that these pernicious effects include: ‘distracted driving, unfocused conversations, scarce opportunities for contemplation, etc.’ (2019, p. 2). The editors of this volume also recommend Peters et al. for a concise summary of the key literature on this topic (2018, pp. 1–2).

6.2  The Rise of Self-Care Apps

In an age when many of us are starting to worry about the time we spend online, it is perhaps appropriate that iTunes declared ‘self-care’ the App Trend of the Year in 2018 (iTunes 2019). Self-care apps originally developed out of electronic fitness programmes, such as Strava and 8Fit. Instead of monitoring one’s fitness, however, these apps aim to improve one’s digital well-being. They offer two main ways to do this: (1) they monitor and evaluate the time we spend online, or (2) they provide a digital means for the user to cultivate their well-being, for example by allowing the user to engage in an online mindfulness practice. Doing both of these things is regarded as an essential way to cultivate the digital well-being of online users.

Many self-care apps provide compelling evidence that they cause statistically measurable changes in well-being. Happify, an industry leader which regularly clocks up a staggering 60,000 monthly downloads (Crunchbase 2019a), greets first-time users with the encouraging information that 86% of regular users (defined as those that use the app 3–4 times weekly) notice an improvement in their happiness within 2 months (Happify 2019). Recently, these claims have received independent support from members of the positive psychology community. The positive psychologist Acacia Parks has now co-authored two studies on Happify that claim to detect statistically measurable changes in users’ well-being (Parks et al. 2018, forthcoming). Other apps rely on different claims and offer their own statistics. There have been no independent reviews of Headspace, for example, but there is plenty of evidence for the effect of mindfulness practices on well-being (Tang et al. 2015; Khoury et al. 2013), as well as largely positive studies on the effects of digital mindfulness apps specifically (Mani et al. 2015; Cavanaugh et al. 2013; Boettcher et al. 2013).
As one might expect, each self-care app is packaged up and branded differently, and endorses its own self-care techniques to best increase the user’s well-being. The techniques that these apps employ either use online technology to digitise traditional practices of self-cultivation, or they are designed to counter the pernicious effects of being online too much. This is a key distinction, to which I return below. Nevertheless, both techniques assume that digital well-being can be improved by sculpting and moulding the user’s behaviour. Both kinds of technique aim to modify the life of the self-care participant, either in terms of changing behaviours, or in terms of refining character traits. While the big-five social-media companies have largely modelled their resources for improving digital well-being on the sort of rule-based approach to limiting screentime that Tristan Harris advocates,4 self-care apps typically offer what could be described as a character-based approach.

Such a character-based approach has recently been elucidated in the work of Shannon Vallor (2012, 2016). In her influential book, Technology and the Virtues, she claims to offer an ‘ethical strategy for cultivating the type of moral character that can aid us in coping, and even flourishing, under [technologically] challenging conditions’ (Vallor 2016, p. 10, emphasis added). Furthermore, Vallor argues that we can find the conceptual resources for her approach in the virtue traditions of Aristotelianism, Buddhism, and Confucianism. While Vallor’s work on the importance of a character-based theoretical framework for living well with technology has become the locus classicus for discussion of this topic, it is important to note that there exists an earlier literature on the topic of character, human flourishing, and technology from philosophers based in the Netherlands (see Brey et al. 2012; Poel 2012; Verbeek 2012). These authors also draw on a wide range of character-based concepts to understand how we can live well with online technologies.

So how might apps that strive to practically adopt a similar character-based approach work? In the rest of this section, I identify what I believe we can usefully view as character-based techniques in self-care apps, returning to the question of whether these products can complement rule-based approaches to screentime in Sect. 6.5.

1. Active documentation. Apps such as iMoodJournal and MoodNotes prompt users to record their mood when they are electronically notified via their smartphone throughout the day. This could improve the documentation of mood for users, such as those suffering from depression. For example, researchers on mood have found that if depressed patients self-report their mood to a medical professional, it is often highly inaccurate.
This is because mood recollection is largely determined by the mood the patient is in at the moment of recollection, which is in turn determined by local, environmental cues (MacQueen et al. 2002). Using data that tracks a patient’s mood throughout the day makes diagnosis and treatment much more accurate. Active documentation gives healthcare professionals tools to gather data on patients in a way that would be impossible without this technology.

2. Instilling habits. Many self-care apps seek character change by instilling habits. From Aristotle onwards, instilling habits has been regarded as an effective means of cultivating one’s character, a claim that has been consistently backed by the empirical evidence that both contemporary virtue ethicists and positive psychologists draw upon. It should come as no surprise, then, that self-care app manufacturers have sought to incorporate habit formation into their products. Based on research in positive psychology,5 digital self-care companies have incorporated gratitude ‘tasks’ into their products to precipitate the concomitant feelings of life-satisfaction that expressing gratitude gives to the one who expresses it (and presumably to the one who is thanked). For example, developers from the Parisian company Fabulous prompt users to manifest gratitude to their friends and loved ones, as well as reminding them to do a daily ‘good deed’ via a smartphone notification. While the self-interested motivation for expressing gratitude in this case would not meet Aristotle’s criterion that this character trait be disinterested, it does give us a clue as to how a practice pertaining to our moral development could be coded into an app.

4  Although monitoring screentime was once a specialised self-care app function, it is telling that it was incorporated into Google’s Android and Apple’s iOS operating systems in their 2018 updates.

3. Mindfulness techniques. While there are many dedicated meditation apps (Headspace, Buddhify, MindBliss), many of the largest self-care app companies, such as Calm, contain a mindfulness function. Perhaps it is no accident that tech users have sought to solve paradigmatic problems of digital well-being with techniques that explicitly aim to tackle anxiety, procrastination, and distraction – problems that are associated with spending too much time online. Mindfulness practices have been fashionable in Silicon Valley since the 1990s, and there has been widespread mainstream uptake of this kind of self-care technique, including in sport psychology, education, and the military. Much of this has been fuelled by the empirical evidence that mindfulness has a positive effect on the lives of meditators (Grossman et al. 2004). Apps offer the possibility of moving beyond traditional mindfulness techniques in two important ways. First, these exercises can be inserted between required daily activities – while travelling on public transport, say, or waiting to pick up the kids from school.
Instead of devoting time to visit a mindfulness teacher, apps offer the possibility for time-pressed people to increase their well-being through practising mindfulness, without the expense of travelling to a dedicated space to do so. Second, incorporated algorithms allow guided meditations to be targeted at individual users in a way that would be impossible in a group setting. For instance, according to the needs of users, Buddhify’s algorithms allow it to point users towards guided meditations on ‘friendship’, ‘anger’, ‘fear’, ‘sleeplessness’, etc. I return to the topic of such ‘personalisation’ in Sect. 6.4.

5  Examples of replicated research on this topic are now widespread. For two comprehensive meta-analytical studies, see Davis et al. (2016) and Tunney and Ferguson (2017).

4. Self-reflective notetaking. Written reflection on one’s daily activities is a practice of self-care that has a venerable history. Closely associated with the Imperial Stoic school, this practice enjoys much attention in today’s self-care communities. Apps such as Reflectly prompt users to record their daily thoughts and feelings by way of notifications. These diaries form a continuous record of how one’s character develops, which can be re-read to compare one’s current perceptions of an ongoing issue with one’s memories of it. More sophisticated apps such as Replika have an active AI function that gently probes the user’s mental, emotional, and existential state. Incorporating AI technology into such apps holds particular potential because it provides a virtual confidant that stores details of the user’s life narrative in its memory, identifies patterns in negative thought, and can even point to connections in these stored confidences. Given the anticipated development of AI technology over the next decades, we should expect that self-care apps using this kind of tech will become increasingly better at interrogating a user’s written text or masquerading as a flesh-and-blood correspondent. For another example of a technology aimed at self-reflective writing, see Gibson and Willis in this collection.

5. Community building. Although the emphasis on the ‘self’ in self-care has been criticised for being individualistic, traditional self-care techniques strongly emphasise the communal dimension of character change. App developers have striven to meet this challenge by replacing an actual community with a digitally-mediated one. Headspace, for example, tells the user how many others are simultaneously meditating with its product in real time. As Headspace is the market leader in meditation apps, these numbers frequently run into the hundreds of thousands. As one might expect, the aim of this function is to boost users’ sense of community identity, which in turn boosts their own motivation to meditate.

6. Gamification. This technique is contained in many of the categories outlined above, and it is one that is unique to online platforms. Many self-care apps make use of ‘streak’ technology, a gamified function that records the user’s unbroken daily engagement with the app concerned. Streaks are created when a user engages with the app consistently. Typically, users gain marks each day for using the app, and these collate to form a lengthening streak. This in turn motivates users to maintain their progress on the app because the streak comes to be perceived as valuable. Apps such as Happify also use more overtly gamified techniques.
Originally started by two entrepreneurs from the video-games industry with the self-professed aim to ‘gamify happiness’, this app uses a gaming interface to ask users to identify the emotions they are feeling, to reflect on them, and then to sift them into positive and negative categories. Since Happify is available to users above 4 years old, this offers a way to engage its younger audience.

As mentioned above, these six techniques can be divided into two types. In the first instance, some aim to replicate techniques already found in traditional self-care. For example, the community-building element in Headspace compensates for what might otherwise be regarded as the app’s highly individualistic format. By informing the user how many other people are simultaneously using Headspace, the user’s commitment to their meditation practice is supposed to be bolstered. Such a technique provides a good example of how online products replicate or improve upon existing techniques in the self-care community, although it is important to question how effective these techniques actually are. On the one hand, we might imagine this effect to be greater than the sense of community created by a traditional technique. Instead of meditating with a small group of flesh-and-blood individuals at a certain place (meditation studio, church hall, etc.), users are offered a global perspective as they are shown visual graphics that depict the numbers and locations of fellow meditators throughout the world. On the other, it is easy to see how online products could actually heighten the sense of isolation that presumably comes from engaging in a self-care practice on one’s own. Some may feel that, by replacing real-life self-care with a virtual equivalent, apps can only offer a much lesser sense of community than traditional practices.

In the second instance, some of these tools extend traditional self-care by explicitly using the online platform provided by smartphones and other mobile devices as a vehicle to massively increase uptake of self-care. Apps can be said to improve on traditional self-care in ways that entirely depend on the technology with which they are powered. Gamification is perhaps the best example of this, as there is no equivalent technique in traditional self-care. By using gamified techniques, self-care app developers can ensure their users engage with their products on a daily basis, can make the practice of self-care more fun and less arduous, and can closely monitor how committed users are to their self-care practice. Nevertheless, since these six techniques require us to spend more time online, we might worry that the very idea of online self-care is strange, especially when we think back to the rule-based approaches concerned with limiting screentime that are strongly advocated by Tristan Harris and the Center for Humane Technology. Can digital well-being be improved by spending more time online, when doing this is the very thing that some view as damaging our digital well-being in the first place?
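The ‘streak’ mechanic described under technique 6 above amounts to a simple daily state update, and it can be sketched in a few lines of code. The sketch below is a hypothetical illustration only: the function name `update_streak` and the reset-on-missed-day rule are assumptions for the sake of the example, not the implementation used by Happify or any other app.

```python
from datetime import date, timedelta
from typing import Optional

def update_streak(last_use: Optional[date], streak: int, today: date) -> int:
    """Return the new streak length after an app session on `today`.

    A streak grows by one for each consecutive day of use, is unchanged
    by repeated sessions on the same day, and resets once a day is missed.
    """
    if last_use is None:
        return 1  # first ever session starts a new streak
    gap = (today - last_use).days
    if gap == 0:
        return streak       # already used the app today: no change
    if gap == 1:
        return streak + 1   # consecutive day: the streak lengthens
    return 1                # a missed day resets the streak

# Usage sketch: three sessions, the last after a missed day.
day1 = date(2019, 6, 1)
s = update_streak(None, 0, day1)                       # streak starts at 1
s = update_streak(day1, s, day1 + timedelta(days=1))   # grows to 2
s = update_streak(day1 + timedelta(days=1), s,
                  day1 + timedelta(days=3))            # resets to 1
```

The design point is that the lengthening counter is the only persistent state, which is precisely what makes it cheap for an app to maintain and, as the chapter notes, what makes its loss feel costly enough to motivate daily engagement.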

6.3  Charting a Continuum of Views on Digital Self-Care

Given what I have said about the recent crisis in digital well-being, followed by the growth of online products aiming to minister to it, the reasons why online self-care might be regarded as odd should now be clear. If the problems that impact upon our digital well-being can be partially attributed to excessive online use, then seeking the solution to these problems in online self-care products presents us with a continuum of views regarding their efficacy. Lying at either end of the continuum are the views that self-care apps can significantly affect our digital well-being. On one side is the view that this effect is positive (I); on the other is the view that it is negative (II).

I. Self-care apps significantly improve our digital well-being, insofar as their digital status allows them to directly tackle the problems caused by spending time online.
II. Self-care apps significantly reduce our digital well-being, insofar as they require us to spend more time online (more screentime, etc.).

Two less radical views can be found at the centre of the continuum. Each view proposes that self-care apps affect our digital well-being to a limited extent, either positively (III) or negatively (IV).


III. Self-care apps improve our digital well-being to a limited extent.
IV. Self-care apps reduce our digital well-being to a limited extent.

Finally, we should also acknowledge that, despite the ambitious claims made for them, self-care apps may have no effect on our digital well-being. Choosing the correct view will, of course, ultimately depend on how each is unpacked.6 Nevertheless, the issue of excess screentime brings any unease we may feel about the idea of online self-care into greatest relief, because screentime is often singled out as the most powerful factor affecting digital well-being. It is strongly emphasised by the Center for Humane Technology, and is the focus of all the campaigns for digital well-being launched by Google, YouTube, Facebook, Instagram, and Twitter (mentioned in Sect. 6.1). These campaigns are often framed in terms of the Center for Humane Technology’s idea of ‘time well spent’,7 suggesting that the technology companies have accepted the idea that either less screentime, or spending time online more reflectively, is the solution for improving digital well-being. This approach was further developed in 2018, when both market-leading operating systems (Android’s Pie 9 and Apple’s iOS 12) introduced rule-based functions that allowed users to do three things: (1) monitor the number of hours they spent online each day, (2) limit the time spent on specific apps, and (3) change their screens to ‘grayscale’ to reduce night-time visual stimulation. All these tools are symptomatic of a paradigm that assumes that it is essentially excess screentime that threatens our digital well-being. According to this paradigm, the best way to improve digital well-being is to provide users with tools and techniques that allow them to implement rules to limit their time online, rather than to provide them with online self-care. But is there a way to defend the second view? Might online self-care be able to improve our digital well-being to some extent, even if this requires that we spend more time online?

While not all time spent on self-care apps is dedicated screentime, in defending Views II and IV it is important to concede that much of it is. For example, Calm includes a visual accompaniment to its guided meditations. Slowly-moving images, typically representing an idyllic pastoral scene, provide the user with a concrete stimulus on which to focus. As we have seen, there is evidence to suggest that mindfulness increases our digital well-being, so arguably Calm’s guided meditation mitigates the detrimental effects of excess online use, even though this technique requires yet more screentime. Similarly, the app Fabulous, mentioned above, promotes what it calls a ‘habit-forming sleep-preparation exercise’. To do this, the app gives users detailed instructions on how to relax via electronic notifications. To receive these notifications, however, users must break one of the cardinal rules of the sleep gurus: never to take one’s smartphone into one’s sleeping area. This clearly reveals the tension outlined above: to access online self-care to improve one’s digital well-being, users must spend more time on their mobile devices, in a way that could be said to jeopardise their digital well-being even more.

6  Although the Center for Humane Technology does not actively promote the use of online self-care apps, it is likely that it would grant a limited role to this kind of technology in improving our digital well-being. Harris is surely right to say that there is also a role for non-online self-care (turning one’s device off, covering it with a Faraday bag, etc.), but it would be foolish to think that we can improve all aspects of our digital well-being using these techniques, so there may be some role for online self-care. I return to this question in the final section.
7  In January 2018, for example, Mark Zuckerberg posted that his new year priority was ‘making sure the time we all spend on Facebook is time well spent’ (Zuckerberg 2018).

Nevertheless, as we see below, the empirical literature shows that the effect of screentime on digital well-being is extremely complicated and requires a nuanced approach. There is a general consensus that excessive screentime impacts negatively on our lives, but precisely how this works is becoming ever more contested. Indeed, given the increasing importance of being online in the practical lives of many of us, there are even reasons why too little time spent online could be detrimental.8 Although many studies show that a large quantity of time online is not good, clearly not all time spent online is of the same quality. In fact, some more nuanced studies seem to suggest that ‘time well spent’ can be time spent online, and that it may even positively contribute to our digital well-being. For example, Oxford Internet Institute researchers Amy Orben and Andrew Przybylski (2019) have challenged whether social media activity has such a deleterious effect on well-being as is routinely supposed. After subjecting three large-scale data sets to a specification curve analysis (SCA), they found that:

The association we find between digital technology use and adolescent well-being is negative but small, explaining at most 0.4% of the variation in well-being. Taking the broader context of the data into account suggests that these effects are too small to warrant policy change. (Orben and Przybylski 2019, p. 173)

In a follow-up study that compares the effect of social media use to other well-known factors affecting well-being, they also concluded that: [S]ocial media use is not, in and of itself, a strong predictor of life satisfaction across the adolescent population. Instead, social media effects are nuanced, small at best, reciprocal over time, gender specific, and contingent on analytic methods. (Orben et al. 2019, p. 10226)

Findings like these are supported by a recent study – #SortingOutSocialMedia (Birkjær and Kaats 2019) – commissioned by the Happiness Research Institute. Here Michael Birkjær and Micah Kaats found that if social media users actively post online, their life-satisfaction actually increases. This 2019 study employed a more sophisticated method than the Institute’s previous (and widely cited) 2015 study, which showed that abstaining from social media sites such as Facebook generates a ‘significantly higher level of life satisfaction’ (Tromholt et al. 2015, p. 6). The new study was able to isolate the fact that it is passively scrolling through the news feeds of social media that negatively impacts life satisfaction, a distinction that cannot be captured by simply monitoring screentime alone.

8  Many everyday tasks are now impossible without access to the Internet. This has some luminaries in the tech industry calling the smartphone a ‘digital passport’, a term that connotes the freedoms associated with Internet technology, while also capturing how our access to this realm is strictly conditional on having a device to access it.


While it is beyond the parameters of this chapter to compare the many studies on screentime and well-being, the research findings of Orben and Przybylski help to leave open the possibility that many online activities could improve our digital well-being. Excess screentime is admittedly one thing that damages our digital well-being, but this is not the whole story. Rather, like economists who insist that GDP is the only thing that counts when determining the well-being of a society, those who overemphasise the dangers of screentime may miss how being online can increase our digital well-being too. The effects of screentime are serious, but these studies show that those effects could be mitigated by online activities that boost our well-being in other ways. Given the findings outlined above, it is reasonable to be suspicious of the idea that it is screentime per se that is the problem; rather, we should think about what is presented to the user on the screen, how they engage with this content, and how engaging with this content connects to the rest of their lives. Although self-care apps do require that we spend more time looking at our screens, this does not necessarily have a negative impact upon us if their content is configured in such a way that it actively promotes digital well-being. Indeed, we may even be persuaded by the view that online techniques are especially well-suited to tackling problems that are caused by spending time online (Views I and III), albeit conceding that some screentime is required to achieve this. From this we can say that if self-care app technology has the potential to improve digital well-being, then it should be part of both a positive and a negative programme. Such a dualistic approach to digital wellness has two aspects: first, we should think about how online technology affects our well-being in general (how we live with technology), and how a judicious use of tech can be actively cultivated.
Second, we should seek to understand the possibilities that online technology offers for self-care, specifically whether it can help address problems that are intimately connected to our habitual use of the online space. This is where online self-care technology may have a role. It may be that the conditions for flourishing are so complicated in the twenty-first century that we need such a dualistic approach. Learning to live better with technology (often with less technology) is surely well worth doing, but so is designing technology in ways that actively counter how it is currently inimical to digital well-being. This means that the problems for human flourishing that technology generates can only be effectively addressed by using a more thoughtful version of the very same technology itself.

6.4  Self-Care Apps in the Balance

At the end of Sect. 6.2, I distinguished between (1) self-care apps that use techniques that aim to replicate traditional self-care techniques, and (2) self-care apps that intend to go beyond these traditional techniques, such as those that use gamification. Both kinds of technique raise important evaluative issues, either relating exclusively to self-care apps, or arising from how these apps go beyond traditional self-care practices. These ethical issues can be summarised as follows:


1. Privacy. Perhaps the commonest worry of both users and academics relates to data protection. This is the worry that self-care apps harvest a large amount of personal, highly sensitive, and potentially valuable information that must be treated securely. Even if a self-care app happened never to be hacked by a hostile external agent, the data it gathers is vulnerable to a softer kind of exploitation. Self-care companies are necessarily privy to masses of highly sensitive information. While storing and analysing this information is necessary in order to improve their products, as self-care apps become ever more integral to our lives, new ethical issues arise. For example, the empirical evidence supporting Happify’s effectiveness in treating mild depression has motivated the company to seek FDA approval (PRNewsWire 2019). If this process is successful, then US doctors would be able to prescribe the app. Combined with the possibility of US insurance companies monitoring the progress of patients, this opens up conflicts of interest on multiple fronts. In addition to the ethical risks from the benign harvesting of app data, those associated with the malign hacking of a self-care product are even more worrying. Reflectly (mentioned above) records the daily thoughts and feelings of its users in the form of a fully-searchable electronic journal. Apps such as Replika (also mentioned above) probe their users using AI technology to answer highly personal questions. This information is both sensitive and extremely personal. The idea that this information is anything less than completely secure is deeply troubling, but, like all online technologies, self-care apps are vulnerable to hacking. The mortification that many of us would feel at the thought of a stranger reading our diary perhaps offers an insight into what it would be like to be the victim of a data breach of one’s self-care app.
When such detailed records are gathered and stored collectively, the potential for such malign exploitation is huge. 2. Monetisation. Self-care apps are heavily and rapidly monetised. This contrasts with the teaching and promotion of self-care practices and activities that is either freely provided by social or religious institutions, or that is costed at a nominal fee so all can participate. Self-care app companies introduce monetisation in different ways. Most offer free versions of their product, before encouraging users to upgrade to that ‘premium’ or ‘exclusive’ versions that is charged at a monthly, yearly, or life-time rate. We could also worry that widespread use of self-care apps may even diminish the use of traditional self-care providers, as these will be spurned for a glossier app equivalent. By monetising self-care, the concern is that we may change how these practices works in a traditional sense – and we may even distort them. At the very least we may be contributing to the creation of a culture in which self-care products are just another product for which one has to pay. Monetisation is complicated, however, and there is also something important to say about how self-care apps offer a service that is accessable to a greater range of users, regardless of their socio-economic status. Monetisation can certainly act as a bar for some users who would otherwise be granted gratuitous access self-care services, but it also renders certain expensive and specialised self-care services affordable to us all.


M. J. Dennis

3. Accessibility. Despite the issues associated with monetisation, apps radically expand opportunities for self-care if we compare them to how many traditional self-care techniques are marketed today. The products of the contemporary self-care industry are marketed to an economically affluent social class: those who not only have leisure time but also the financial means to pay dedicated professionals (yoga coaches, spiritual gurus, etc.) to help improve their well-being. Self-care apps can significantly broaden participation in self-care. The numbers speak for themselves. Two of the leading self-care providers, Calm and Happify, collectively average 65,000 downloads a month. Together they have a staggering 3.5 million active users (Crunchbase 2019b). This is because, compared to traditional self-care practices, apps are cheap, highly modifiable with inexpensive updates, and can be used or deleted at whim. This makes online self-care highly accessible to all, even to those from traditionally deprived socio-economic groups. As the marketing messages of these apps often claim, those who do not have the time or the money to pay for a meditation teacher can easily download a mindfulness app. Those who cannot go on an intensive yoga retreat due to a packed work or family schedule can follow an online teacher's asanas. The issue of accessibility is complicated, however, as is shown when we consider the issue of hierarchy below.

4. Hierarchy. Despite persuasive claims to accessibility, it can also be argued that self-care apps create a two-tier self-care system, one that entrenches important differences in the quality of users' experiences of self-care. These products generate greater accessibility to the manifold benefits of self-care, but arguably do so in a way that provides only an ersatz form of self-care to the financially or socially underprivileged.
Traditional providers of self-care are often (perhaps naturally enough) the most voluble critics of the virtual versions of their products and practices (Mehrotra et al. 2017, pp. 707–11). These providers might, for example, claim that an online meditation product cannot have the same benefits as meditating in a group setting, despite the efforts of developers to include community-building functions (see Sect. 6.2, v). Not only may these apps create a hierarchy among those who practice different kinds of self-care; we may also worry whether such a two-tier system has enough permeability between tiers. While apps are often used as a low-investment means to try a self-care practice, it may be unlikely that those who practice online self-care will be drawn to the real thing. In fact, apps might reduce the likelihood that the real thing will be tried at all, especially if traditional providers are right to say that apps can only provide an ersatz form of self-care.

5. Personalisation. Self-care apps offer the possibility of delivering highly personalised self-care. Many of these products already include a significant amount of personalisation, typically based on the information obtained from an online questionnaire that users fill in after downloading the app. Such personalisation has the potential to target self-care to the needs of users in a way that would be impossible using a more traditional technique. Perhaps this is especially significant given the aims of self-care, which often intends to minister to the full spectrum of human idiosyncrasy. According to many self-care teachers, our well-being is greatest when we are able to live in a way that is most aligned with whatever makes us most ourselves.9 If current self-care app algorithms are not fully workable, or somewhat clunky, then we can imagine a time in the future when they might be much improved. Personalisation is one of the boons of having a face-to-face self-care teacher. If we can code this aspect of traditional self-care into an app's algorithm, then we can enjoy the benefits that personalised self-care can bring.

6. Dependence. Given the highly branded and distinctive nature of self-care apps, we may worry that customers will develop an excessive attachment to these products in a way that engenders a brittle commitment to the self-care practice concerned. Take the example of Headspace, the market leader in app-based mindfulness. In traditional mindfulness practice, there are many places and occasions in which one can meditate. Global cities typically have many different offerings: different traditions, different teachers, different social settings, etc. By contrast, Headspace offers a single version of mindfulness practice, its guided meditations are given by a single in-house narrator (the app's founder, Andy Puddicombe), and its animations are instantly recognisable. Since meditation often depends on visual cues, the distinctiveness of these illustrations is important: they will become intimately connected to the experience of meditation for anyone who regularly uses the app. Even if we ignore how Headspace is monetised for long-term users, the specific nature of this product suggests that it might generate dependence, even if this could eventually be overcome by familiarisation with another product.

7. Diversity. We may worry that the widespread uptake of self-care apps could both threaten the survival of traditional providers and reduce the number of approaches within the sector as a whole. Fitness apps provide a useful case study.
Over the last decade there has been a drive among personal-trainer companies to document their sessions and either host them in real time or provide them as pay-on-demand. On the one hand, this means that users can benefit from the tuition of a highly competent coach. On the other, trainers who do not manage to get a deal with an app company enjoy considerably less revenue, creating a centralised winner-takes-all system. This also affects users. Although they have the possibility of benefiting from a world-class fitness coach on an app platform, their pursuit of this could be said to endanger the livelihoods of real-life fitness coaches. The self-care industry is vulnerable to the same phenomenon. In the long term, situating self-care online may reduce the diversity of practitioners within the industry, narrow the range of approaches, and unhealthily centralise the industry as a whole.

From what we have seen above, there are (at least) seven evaluative issues that self-care apps raise. Each of these issues is complex, and many of them point both to aspects of self-care apps that are valuable and to issues concerning the widespread adoption of this technology. We would do well to be wary of all these issues, especially as some take us into an unpredictable ethical domain. So, given that the overall improvement of online self-care technology will involve maximising its benefits and minimising any negative consequences, how can we best achieve this?

9  This idea is often emphasised in the self-care literature that prioritises 'finding one's passion'. See Attwood and Attwood (2006) and Robinson (2009).

6.5  Sketching a Vision for Twenty-First Century Digital Well-Being

In this chapter I have sketched the main ways that self-care app developers currently employ online technologies to cultivate digital well-being. Both the growing media attention on this topic and the interest of tech companies in improving (or smartening up) their digital well-being image reflect a pervasive concern, which many of us share, about the effects of our lives online. Like Tristan Harris, many of us have been initially enthralled by the power of digital technologies to improve our lives, only to become increasingly concerned about the hidden consequences of using them. While we have seen that the recent empirical data on the effects of extended screentime are mixed, we can be certain that how we spend our time online deeply affects our digital well-being, and therefore we should attend to how we use the online space. This conclusion opens the door to the possibility of online self-care. Combining the evidence that there are ways to spend our time online that do not adversely affect our digital well-being with the evidence that there are online self-care techniques that can improve it supports the idea that we can develop an online product that actively increases our digital well-being. While excess screentime does affect our digital well-being, we need a more nuanced measure, one that takes into account how our digital well-being can actually be boosted by doing the right kinds of things online. Nevertheless, assuming the techniques that online self-care employs prove to be effective, their advocates would benefit from a comprehensive vision of how digital well-being can be actively cultivated. Such a vision should also include the idea that cultivating digital well-being using character-based techniques needs to work in tandem with rule-based ones.
This means that a vision for self-care app technology will involve understanding how to minimise the negative consequences associated with online self-care while maximising the positive ones. Understanding the ethical issues of online self-care technology has required exploring the six main techniques through which it operates (summarised in Table 6.1) and uncovering seven related ethical issues that this technology requires us to address (summarised in Table 6.2).

Table 6.1  A summary of techniques that self-care apps employ

Active documentation: Documenting one's mental states, such as mood, on an app platform. Typically, users are prompted to do so by notifications.
Instilling habits: Prompting users to acquire habits with electronic reminders. Such habits can aim to increase one's physical health (e.g. drinking more water) or to improve one's moral character (e.g. by expressing gratitude to one's friends and family).
Mindfulness techniques: Encouraging mindfulness and meditation by offering users tools to focus their minds. These often consist in advice on how to remove extraneous thoughts and how to concentrate on one's breathing or posture.
Self-reflective notetaking: Writing reflective notes to document one's subjective states (mental, emotional, existential, etc.), along with recording one's progress while pursuing a self-care regime.
Community building: Encouraging users to understand their self-care practice as part of a greater social movement of like-minded individuals who are engaged in a similar practice.
Gamification: Framing self-care as a game that can be played. This does not typically include the idea of 'winning', but rather the idea that one's progress can be monitored and evaluated.

Table 6.2  A summary of the ethical issues that relate to self-care apps

Privacy: Creation of large data sets of highly sensitive data that are vulnerable to external hackers.
Monetisation: Charging for self-care services (a departure from traditional self-care, which often provides these services for free or for a nominal fee).
Accessibility: Potential to massively increase the accessibility of self-care to anyone with a smartphone.
Hierarchy: Danger of creating a two-tier self-care system, in which online self-care is the only form available to those who cannot afford a non-online equivalent.
Personalisation: Ability of apps to provide highly personalised content to users on the basis of their previous behaviour on the app.
Dependence: Danger of creating high levels of product dependence among self-care app users.
Diversity: Risk that traditional self-care providers will disappear if self-care apps become dominant.

This provides us with a vantage point from which to discern an interesting correlation between the techniques of self-care apps and the ethical issues they raise. These ethical issues can be helpfully understood as being distributed around a key distinction that I first introduced in Sect. 6.2, where I distinguished between two kinds of techniques that self-care apps employ. First, there are those techniques that aim to replicate traditional practices of self-care, typically using technology to facilitate their teaching in ways that make them more efficient (e.g. using an app for mindfulness or self-reflective writing). Second, there are self-care techniques that apps introduce which allow them to cultivate character in ways that go beyond traditional practices (e.g. gamification). These two kinds of technique have very different affinities with the various ethical issues I raised in Sect. 6.4. Some issues (e.g. privacy) pertain to self-care apps and traditional practices alike, whereas others (e.g. dependence or personalisation) have a special relevance to, and affinity with, self-care apps. This is not to say that one set of issues pertains to self-care technology more than the other, but that we should be attentive to the various kinds of ethical challenges that this technology brings. For example, I have shown that app developers have straightforwardly borrowed self-reflective writing from traditional self-care practices, albeit giving this technique extra functionality by introducing basic AI functions, a searchable memory, etc. This means that, in the case of this technique, we already have a robust framework with which to evaluate the ethically salient issues that relate to the way self-care apps deal with privacy: there is an established literature on the rights we have to privacy, how we are wronged when our data is used without our knowledge or permission, etc. (Robison 2018; Cudd and Navin 2018).10 Nevertheless, I have also shown that some self-care apps use techniques for which there is no traditional equivalent. Take, for example, gamification. In Sect. 6.2 (iv), I explored how app developers have used this technique to give their products more 'sticking power' in terms of user perseverance. By using 'streaks' and other gamified aids, users of self-care apps end up using their products more, and therefore gain greater improvements in their digital well-being. Unlike the ethical issues pertaining to privacy, those pertaining to gamification remain largely undeveloped in the literature. Although we may be able to find analogous problems in the literature on nudging and prosocial persuasion, there is currently little scholarly attention devoted to the ethical issues involved in attempting to gamify our digital well-being.11 This means that we need to develop a sophisticated vocabulary from scratch to understand the ethical issues that self-care apps raise in all their nuance and complexity. One way that self-care apps build upon existing approaches to digital well-being is by offering a way to cultivate our characters that goes beyond codified rule-based approaches.
All of the techniques that these apps employ are intended to actively cultivate the user's character in prosocial ways, and therefore employ a character-based approach to improving our digital well-being. Rule-based approaches certainly have some power to change our behaviour online, but their effects on our behaviour are often not significant enough to ensure that these changes will endure. In part this is due to the inherent changeability and uncertainty of the digital space. The ethical challenges that this space presents us with are not only manifold but constantly evolving. This limits rule-based approaches insofar as they are constantly striving to keep up with a fluctuating digital landscape. While there is certainly a role for rule-based approaches at those times when unambiguity is necessary, we would do well to supplement them with character-based approaches, especially in those situations where one kind of approach can make up for what the other lacks. If the specific ethical challenges that I have argued self-care apps present us with can be surmounted, then this kind of technology may be an effective weapon in the fight for twenty-first century digital well-being. We have seen many examples of how today's self-care apps use cutting-edge techniques to actively sculpt character in ways that go far beyond those employed in traditional self-care. This means that the rule-based approaches to digital well-being that Tristan Harris and his colleagues at The Center for Humane Technology promote could well be supplemented by an agent-based approach that aligns itself with the contention that the good life is best achieved by focusing on the individual's character, from which all their acts flow. While rule-based approaches to digital well-being surely have an important role to play, technologies that directly affect human character should also be prioritised. The techniques used in self-care app technology offer a real opportunity to tackle digital well-being using a character-based approach, and those sympathetic to the character-based approaches of virtue theory should use their expertise to guide the development of this technology as much as possible. Inevitably, self-care app technology will become an increasingly important part of twenty-first century life. Unless it is taken seriously by those involved in the character-based traditions of human flourishing, it will continue to be developed by the current mix of programmers, entrepreneurs, and self-help theorists. From what we have seen, this technology may give us the opportunity to improve contemporary digital well-being in important ways. This should encourage us to tackle the various ethical challenges it presents us with head on.12

10  Even in this case, however, precisely because of its extra functionality, the risks involved in an invasion of privacy in a self-care app are far greater. The ease with which apps allow us to record our most intimate secrets, along with the sheer quantity of searchable data they generate, means that there is a greater potential for moral harm if this data is compromised.
11  For an overview of this literature, see Johnson et al. (2016). See also Moor (2008).

References

Attwood, J., and C. Attwood. 2006. The Passion Test. London: Penguin Books.
Birkjær and Kaats. 2019. #SortingOutSocialMedia. Happiness Research Institute. 1–44. Available at: https://orden.diva-portal.org/smash/get/diva2:1328300/FULLTEXT01.pdf/. Accessed on 15th October.
Boettcher, J., V. Astrom, D. Pahlsson, O. Schenstrom, G. Andersson, and P. Carlbring. 2013. Internet-Based Mindfulness Treatment for Anxiety Disorders: A Randomized Controlled Trial. Behavior Therapy 45 (0): 241–253.
Brey, P., A. Briggle, and E. Spence. 2012. The Good Life in a Technological Age. London: Routledge.
Cavanaugh, K., C. Strauss, F. Cicconi, N. Griffiths, A. Wyper, and F. Jones. 2013. A Randomized Controlled Trial of a Brief Online Mindfulness-Based Intervention. Behaviour Research and Therapy 51 (9): 573–578.
Crunchbase. 2019a. https://www.crunchbase.com/organization/happify#section-mobile-app-metrics-by-apptopia. Accessed on 5th June 2019.
———. 2019b. https://www.crunchbase.com/organization/calm-com/apptopia/apptopia_app_overview_list_public. Accessed on 5th June 2019.
Cudd, A., and M. Navin, eds. 2018. Core Concepts and Contemporary Issues in Privacy. New York: Springer.

12  I would like to thank the editors for their comments on the original manuscript and their insightful suggestions for further reading. This allowed me to significantly strengthen key ideas in the chapter. I would also like to thank Prof. James Arthur and Prof. Kristján Kristjánsson from the Jubilee Centre for Character and Virtue for the invitation to discuss the topic of this chapter at the Centre's seminar in July 2019.


Davis et al. 2016. Thankful For the Little Things: A Meta-Analysis of Gratitude Interventions. Journal of Counseling Psychology 63 (1): 20–31.
Goh, C., C. Jones, and A. Copello. 2019. A Further Test of the Impact of Online Gaming on Psychological Wellbeing and the Role of Play Motivations and Problematic Use. Psychiatric Quarterly 90 (4): 747–760.
Goodyear, V., K. Armour, and H. Wood. 2018. The Impact of Social Media on Young People's Health and Wellbeing: Evidence, Guidelines and Actions, 1–27. Birmingham: University of Birmingham.
Graham, M., and W. Dutton, eds. 2019. Society and the Internet: How Networks of Information and Communication are Changing Our Lives. Oxford: Oxford University Press.
Grossman, P., et al. 2004. Mindfulness-Based Stress Reduction and Health Benefits: A Meta-Analysis. Journal of Psychosomatic Research 57 (1): 35–43.
Happify. 2019. The Science Behind Happify. Available at: my.happify.com/public/science-behind-happify/. Accessed 5th June 2019.
Harris, T. 2013. A Call to Minimize Distraction & Respect Users' Attention. Available at: www.scribd.com/document/378841682/A-Call-to-Minimize-Distraction-Respect-Users-Attention-by-Tristan-Harris. Accessed 15th October 2019.
iTunes. 2019. Available at: https://apps.apple.com/story/id1438571562?ign-itscg=as10001&ign-itsct=BESTOF_SC18_PT122_US_SI1438571562. Accessed 15th October 2019.
Johnson et al. 2016. Gamification for Health and Wellbeing: A Systematic Review of the Literature. Internet Interventions 6 (0): 89–106.
Khoury, B., T. Lecomte, G. Fortin, M. Masse, P. Therien, V. Bouchard, M. Chapleau, K. Paquin, and S.G. Hofmann. 2013. Mindfulness-Based Therapy: A Comprehensive Meta-Analysis. Clinical Psychology Review 33 (6): 763–771.
MacQueen, G.M., T.M. Galway, J. Hay, L.T. Young, and R.T. Joffe. 2002. Recollection Memory Deficits in Patients with Major Depressive Disorder Predicted by Past Depressions But Not Current Mood State or Treatment Status. Psychological Medicine 32 (0): 251–258.
Mani, M., D.J. Kavanagh, L. Hides, and S.R. Stoyanov. 2015. Review and Evaluation of Mindfulness-Based iPhone Apps. Journal of Medical Internet Research 3 (3): 82–100.
Mehrotra, S., S. Kumar, P. Sudhir, G. Rao, J. Thirthalli, and A. Gandotra. 2017. Unguided Mental Health Self-help Apps: Reflections on Challenges through a Clinician's Lens. Indian Journal of Psychological Medicine 39 (5): 707–711.
Moor, J. 2008. Why We Need Better Ethics for Emerging Technologies. In Information Technology and Moral Philosophy, ed. J. van den Hoven and J. Weckert, 26–39. Cambridge: Cambridge University Press.
Orben, A., and A. Przybylski. 2019. The Association Between Adolescent Well-Being and Digital Technology Use. Nature Human Behaviour 3 (0): 173–182.
Orben, A., T. Dienlin, and A. Przybylski. 2019. Social Media's Enduring Effect on Adolescent Life Satisfaction. Proceedings of the National Academy of Sciences of the United States of America 116 (21): 10226–10228.
Parks, A., et al. 2018. Testing a Scalable Web and Smartphone Based Intervention to Improve Depression, Anxiety, and Resilience: A Randomized Controlled Trial. The International Journal of Wellbeing 8 (2): 22–67.
———. forthcoming. Improving Depression, Anxiety, and Resilience: A Clinical Trial of Happify's Digital Tools for Mental Health and Well-being. The International Journal of Wellbeing.
Peters, D., R. Calvo, and R. Ryan. 2018. Designing for Motivation, Engagement and Wellbeing in Digital Experience. Frontiers in Psychology 9 (0): 1–15.
PRNewsWire. 2019. Available at: https://www.prnewswire.com/news-releases/happify-health-and-sanofi-sign-global-agreement-to-bring-prescription-digital-mental-health-therapeutics-to-individuals-with-multiple-sclerosis-300918901.html. Accessed 1st December 2019.
Robinson, K. 2009. The Element: How Finding Your Passion Changes Everything. London: Penguin Books.


Robison, W. 2018. Digitizing Privacy. In Core Concepts and Contemporary Issues in Privacy, ed. A. Cudd and M. Navin, 189–204. New York: Springer.
Roose, K. 2019. Do Not Disturb: How I Ditched My Phone and Unbroke My Brain. The New York Times. 23rd February 2019.
Samad, S., M. Nilashi, and O. Ibrahim. 2019. The Impact of Social Networking Sites on Students' Social Well-being and Academic Performance. Education and Information Technologies 24 (3): 2081–2094.
Schwartz, O. 2019. Why Beating Your Phone Addiction May Come at a Cost. The Guardian. 13th March 2019.
Sullivan, L.S., and P. Reiner. 2019. Digital Wellness and Persuasive Technologies. Philosophy and Technology. Online edition, 1–12.
Tang, Y., B. Hölzel, and M. Posner. 2015. The Neuroscience of Mindfulness Meditation. Nature Reviews Neuroscience 17 (1): 213–225.
Tromholt, M., M. Lundby, K. Andsbjerg, and M. Wiking. 2015. Happiness Research Institute. 1–17. Available at: https://6e3636b7-ad2f-4292-b910-faa23b9c20aa.filesusr.com/ugd/928487_680fc12644c8428eb728cde7d61b13e7.pdf. Accessed on 25th November.
Trottier, D., and C. Fuchs, eds. 2015. Social Media, Politics, and the State: Protests, Revolutions, Riots, Crime, and Policing in the Age of Facebook, Twitter, and YouTube. London: Routledge.
Tunney, M., and E. Ferguson. 2017. Does Gratitude Enhance Prosociality?: A Meta-Analytic Review. Psychological Bulletin 143 (6): 601–635.
Vallor, S. 2012. Flourishing on Facebook: Virtue Friendship and New Social Media. Ethics and Information Technology 14 (3): 185–199.
———. 2016. Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford: Oxford University Press.
van de Poel, I. 2012. Can We Design for Well-Being? In The Good Life in a Technological Age, ed. P. Brey, A. Briggle, and E. Spence, 295–306. London: Routledge.
Verbeek, P.P. 2012. On Hubris and Hybrids: Ascesis and the Ethics of Technology. In The Good Life in a Technological Age, ed. P. Brey, A. Briggle, and E. Spence, 260–271. London: Routledge.
Zuckerberg, M. 2018. Available at: https://www.facebook.com/zuck/posts/10104413015393571. Accessed 3rd August 2019.

Matthew J. Dennis is a philosopher of technology whose work focuses on the ethics of digital well-being. He is currently a Marie Skłodowska-Curie Research Fellow in the Department of Values, Technology, and Innovation at TU Delft. Prior to this, he was an Early Career Research Fellow in Innovation at the Institute for Advanced Study, University of Warwick, where he completed his PhD in 2019. He specialises in how technology can increase human flourishing, as well as writing on the ethics of AI and other emerging technologies. Most recently, he has published articles on how online technologies can actively improve the digital well-being of their users. He is currently writing on how we can better incorporate intercultural perspectives on this topic. [email protected]

Chapter 7

Emotions and Digital Well-Being: The Rationalistic Bias of Social Media Design in Online Deliberations

Lavinia Marin and Sabine Roeser

Abstract  In this chapter we argue that emotions are mediated in an incomplete way in online social media because of the heavy reliance on textual messages, which fosters a rationalistic bias and an inclination towards less nuanced emotional expressions. This incompleteness can occur through obscuring emotions, showing less than their original intensity, misinterpreting emotions, or eliciting emotions without feedback and context. Online interactions and deliberations tend to contribute to, rather than overcome, stalemates and informational bubbles, partly due to the prevalence of antisocial emotions. It is tempting to see emotions as the cause of the problem of online verbal aggression and bullying. However, we argue that social media are actually designed in a predominantly rationalistic way because of their reliance on text-based communication, thereby filtering out social emotions and leaving space for easily expressed antisocial emotions. Based on research on emotions that sees them as key ingredients of moral interaction and deliberation, as well as on research on text-based versus non-verbal communication, we propose a richer understanding of emotions, requiring different designs of online deliberation platforms. We propose that such designs should move away from text-centred approaches and should find ways to incorporate the complete expression of the full range of human emotions, so that these can play a constructive role in online deliberations.

Keywords  Online emotions · Online deliberation · Text communication · Social media · Well-being · Deliberation platforms

L. Marin (*) · S. Roeser Ethics and Philosophy of Technology Section, Department of VTI, Faculty of TPM, TU Delft, Delft, The Netherlands e-mail: [email protected] © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020 C. Burr, L. Floridi (eds.), Ethics of Digital Well-Being, Philosophical Studies Series 140, https://doi.org/10.1007/978-3-030-50585-1_7



L. Marin and S. Roeser

7.1  Introduction

In the emerging debate concerning the multiple facets of digital well-being (Burr et al. 2020, p. 1), one important aspect still in need of research is the inter-personal dimension of well-being. For example, the interpersonal relations established between friends, family, or work colleagues contribute to the individual's sense of well-being, suggesting that, when it comes to well-being, we are not the masters of our individual happiness, but rely on others and on the quality of our relations with them. For a long time, these inter-personal relations took place offline; with the advent of the digital society, however, people increasingly interact with each other online as well. Concerning the personal aspect of inter-relational well-being, some researchers have argued that there can be genuine online friendships (Kaliarnta 2016), opening up the possibility that other personal relations may also be mediated online. There is, however, another aspect of inter-personal well-being that has been less studied when it comes to its online dimension: the public aspect of relating to others as a member of a community with particular interests and values. Membership of an online community can come about in formal ways, such as being part of a closed group or following a public figure or topic on social media, but also informally – someone finds themselves situated on the same side as other strangers when engaging in a debate around a matter of concern. This public aspect of online well-being has been related to governance and social development by Burr et al. (2020), and it refers to the fact that most people aim to have a good life not just in the private sphere of their homes but also in the public realm. This public dimension of inter-personal well-being deserves further exploration given the recent turn towards digital citizenship (Mossberger et al. 2008; Isin and Ruppert 2015) and online platforms for civic participation.
While the discussion concerning e-democracy has been going on for a while, we want to engage it from a different angle: can we genuinely pursue public well-being to the same extent online as offline? In this paper, we are not asking whether e-democracy and online civic participation are possible or even effective, but whether participants can achieve some level of well-being as a result of their public engagement online, as is presumably the case with offline democratic participation. We have narrowed the scope of our question to the possibility of creating meaningful deliberations on social media. Online social media is chosen here as an object of inquiry because digital platforms dedicated specifically to deliberation are still in their infancy (Verdiesen et al. 2016) and because, as we will claim, the design principles that make social media inefficient in channelling online debates are the same design principles also used on digital deliberation platforms.1

1  Examples of such platforms are LiquidFeedback https://liquidfeedback.org/, Debate Hub https://debatehub.net/, DemocracyOS http://democracyos.org/

Our main claim in this paper is that online debates on social media do not mediate the full range of human emotions and are thus an impediment to successful deliberation online. We argue that we need to

7  Emotions and Digital Well-Being: The Rationalistic Bias of Social Media Design…


rethink how we design for online deliberation, keeping emotions in mind. We base this claim on four observations:

1. We need to take emotions into account in deliberation because they point out what matters to people.
2. Online platforms tend to mediate users' emotions in an impoverishing way, making visible only a narrow range of emotions, mostly expressed in radical terms.
3. Online social media platforms facilitate debates by emphasising the mediation of text-centred messages; however, textual communication has a hidden rationalistic bias, downplaying emotions and embodied interactions by design.
4. Paradoxically, this rationalistic bias leaves room for specific kinds of emotions, namely often hostile responses that are not corrected due to the lack of non-verbal communication.

These observations will be illustrated and developed in this chapter.

7.2 The Contribution of Emotions to Deliberative Processes

Public debates about controversial topics, whether taking place online or offline, are frequently heated and end up in stalemates, for example debates about potentially risky technological and scientific developments such as climate change, vaccination or genetic modification. This is due to the scientific and moral complexities of these risks, which lead to strong emotional responses by people (Slovic 2010). This effect is exacerbated by social media: the way emotions are typically treated in online debates increases estrangement and polarization. People from different informational 'bubbles' blame each other for seeing the world in an irrational and lopsided way. Such hostile online interactions have the potential to affect people's wellbeing severely. In other words, online environments can be a platform for deliberation on technological risks, but they can themselves also give rise to negative impacts or risks. However, we would like to point out that the role of emotions in public deliberation is usually misunderstood. Rather than seeing emotions as irrational states, we will argue in what follows that emotions can contribute to emotional-moral reflection and public deliberation on controversial topics such as technological risks. Emotions are often seen as a threat to rationality, in public exchanges but also in academic research, for example in empirical decision theory (Dual Process Theory, e.g. Kahneman 2011) as well as in moral philosophy, where the opposition between 'rationalism' and 'sentimentalism' has dominated the metaethics debate. However, emotion researchers in psychology and philosophy have argued over the last decades that emotions are intertwined with, or part of, rationality and cognition.
For example, the neuropsychologist Antonio Damasio (1994) has shown that people who lack emotions due to a brain defect (in their amygdala) lose their capacity to be practically rational and to make concrete moral judgments. Psychologists and philosophers have developed so-called cognitive theories of emotions (Lazarus 1994;

Scherer 1984; Solomon 1993). Emotions play an important role in moral wisdom and in forming moral judgments (Little 1995; Nussbaum 2001; Zagzebski 2003; Roberts 2003; Roeser 2011; Roeser and Todd 2014). These insights can be extended to discussions about risky and controversial technologies. Emotions are crucial to debates about technological risks because emotions can point out what morally matters. Conventional, quantitative approaches leave out important ethical considerations such as justice, fairness, autonomy and legitimacy (Roeser 2006, 2018). In this chapter we will argue that addressing emotions in a different way can help to overcome stalemates in deliberations about controversial topics: emotions can contribute to sympathy and understanding of shared values, which can in turn contribute to finding commonly shared solutions, thereby also contributing to people's wellbeing. In what follows, we will argue that, when emotions are not properly included in the ways online deliberative platforms are designed, this leads to impoverished and lopsided interactions in which nuanced emotions get lost while harmful emotions tend to prevail. These impoverished online communications can be harmful to people's wellbeing. We will argue that, paradoxically, this is due to the rationalistic, text-based bias of such platforms, as they leave out emotions and embodied, non-verbal communication. Our main claim is that online social media platforms rely heavily on text-based communication and miss important nonverbal aspects of communication. We will argue that this also has an effect on how emotions are perceived and expressed.

7.3 Online Emotions and the Tendency for Extreme Emotions to Prevail

Is there a specific mode in which emotions appear when debating with other users on social media? To tackle this issue, we start from the observation that the ways in which emotions are mediated on social media already lead to the expression of a narrow range of emotions in specific ways. In other words, the landscape of online emotions is rather barren, dominated by a few main emotions to the detriment of emotional diversity and complexity. The case of moral outrage will illustrate this claim, as outrage seems to be emphasised to the detriment of other emotions in online deliberations. It seems that the already polarised emotional responses in public debates are exacerbated when they take place in online media. On social media, one commonly encounters extreme negative reactions such as venting of anger, blaming or shaming, which lead to the dominance of extreme viewpoints. One of the most visible emotions online is outrage, and it is also one of the most studied emotions in online contexts. The Internet has been deemed the medium of outrage (Han 2017, p. 8), as it is conducive to the expression of waves of outrage in visible forms such as group bullying, harassment, and online mobbing. Outrage, although it has a

negative valence, can still have constructive civic uses, for example leading to mobilisations for action (Spring et al. 2018, p. 1068). Spring et al. (2018) have argued that outrage can be used more effectively than empathy or reappraisal for mobilising groups of people into political action (Spring et al. 2018, p. 1067). However, Brady and Crockett (2019) have pointed out that, at least in online environments, the expression of outrage does not lead to social mobilisation but rather has mostly negative effects (Brady and Crockett 2019, p. 79). Brady and Crockett identify at least two problems with online outrage: first, it reduces the effectiveness of collective action, since there are so many themes to be outraged about that the moral anger tends to dissipate instead of coalescing among online users (Brady and Crockett 2019, p. 79); secondly, the participation of certain marginalised groups, especially minorities, is discouraged via "coordinated harassment" (Brady and Crockett 2019, p. 79), hence outrage effectively becomes an anti-democratic tool. Spring et al. (2018) have noticed that outrage is seen as morally permissible only for majority groups, who will tend to deem the expression of outrage inappropriate when it comes from marginalised groups: 'only certain groups are "allowed" to express outrage. For example, stigmatized group members are often held to higher moral standards (e.g., accused of expressing inappropriate emotions, especially anger, at greater rates than majority group members)' (Spring et al. 2018, p. 1069). Both problems are related to the architecture of the online environment, which, according to Brady and Crockett, makes it too easy to express outrage as a reaction to anything, as doing so does not incur any costs (Brady and Crockett 2019, p. 79).

This means that people may manifest outrage in their messages without even feeling it, because doing so brings other benefits, for example as a way of virtue-signalling to the group (Spring et al. 2018, p. 1067). This could also be the case in offline situations, but offline outrage is easier for the audience to detect because emotions are harder to fake in real life. The work on outrage cited above showcases a general problem with online emotions: while a particular emotion can have a social role and be useful in certain contexts, it may become toxic when mediated via online social platforms. As Brady and Crockett (2019) rightly point out, the costs of expressing any emotion online, hence also outrage, are quite small, and thus the sheer quantity of online outrage seems overwhelming. While we agree with Brady and Crockett in general, we think that we need to revisit the link between the design of a platform and the emotions it allows to be expressed. If expressing outrage is cost-free, why is it not the case that all emotions are equally expressed? What makes outrage flourish so distinctively in online media? Empirical studies have shown that both negative and positive emotions flourish in online debates, just as in offline debates (Wojcieszak et al. 2009, p. 1082). Several researchers have concluded that the online medium as such is not an impersonal medium devoid of emotions and that, even if text-based communication makes it harder to convey emotional cues, users will compensate for this feature "by the use of emoticons, or by verbalizing emotions in a more explicit way" (Derks et al. 2008, p. 780). We do not contest that emotions can be expressed effectively in online communications; rather, we want to question the

quality of the expression of such emotions and the effect this has on online users. When online representations of emotions are based either on self-reports or on contextual information such as the use of expressions, emoticons, and typography, the effect is not quite the same. We think that emotions which are not fully and accurately expressed online do not achieve the same effect as emotions in offline, real-life scenarios, as we will argue in what follows.

7.4 The Rationalistic Bias of Text-Based Online Communications

The misrepresentation of emotions online can be traced to the design choices made by social media platforms. One salient design feature is the heavy reliance on text to convey messages among users. We only need to look back at the history of the text as a medium to understand its rationalistic bias. Media historians tie the invention of optical texts2 in the twelfth century to the need to design texts as tools for the quick intellectual appraisal of complex arguments, enabling readers to see and comprehend arguments at a glance (Illich 1993). Thus, text was not initially used to express emotions, but to convey complex ideas to a wide audience in the form of books, journals, etc. An exception is of course written literature (novels, poetry etc.). But until the advent of the Internet, most people (except for professional literary, academic or journalistic writers) did not use text as a regular medium of communication, apart from the occasional letter. The epistolary novel shows how the genre of letter writing could be used to express rich emotions. However, the success of this genre was due to the talent and training of its authors. The fact remains that most people do not have the writing skills needed to express a full range of emotions in writing. Furthermore, even letters were long texts with delayed delivery, forcing correspondents to write in a different way than they would have spoken, conveying thoughts and ideas with a delayed rather than immediate effect. By contrast, online instant messages were designed specifically to replace fully the need for face-to-face communication. Social media posts function similarly to instant messaging: they allow for the immediate publishing of updates and of quick comments and responses.
2  Optical texts are texts written in such a way as to be readable at a glance, in silence. Before the twelfth century, most manuscripts were written in scriptio continua, requiring readers to read them out loud in order to understand the content (see Marin et al. 2018 for a more comprehensive discussion).

The assumption that we can communicate just as effectively via text messages as we can in speech is buried deep in the design of social media platforms. This poses several problems for those wishing to engage in online deliberation. A first problem is that online text messages lack certain meta-communicational features which are essential for a successful act of communication. Offline, real-life interactions involve nonverbal communication conveyed via tone of voice, gestures,

and facial expressions. Such non-verbal communication can provide essential clues as to how to understand someone's words: the same words, for example a simple expression such as 'poor you!', can have completely different meanings depending on whether they are expressed in a caring, ironic, mocking or even aggressive way. How could one compensate for the lack of these meta-communicational features? Emoticons could in principle accommodate this to some degree, but even they can be ambiguous. Another solution would be more text devoted specifically to explaining the emotions of the users. This could work in a lengthy text such as a novel, where a detailed description of the characters might render their emotions explicit. However, because instant messaging and social media favour short messages, there is not enough time and space to write lengthy descriptions of feelings. An additional impediment is that most online users are not professional writers, and may not be able to use words to convey what is usually left to gestures, tone of voice and facial expressions. As a result, much of the emotion and subtext is often lost in the process of online communication. Furthermore, a second problem is that in such heavily text-based forms of communication we need to rely on the honesty of users when they report their emotions, and we need to take their emotional reports at face value. But online, behind the veil of anonymity, deception may be more likely than sincerity. As Brady and Crockett already pointed out, people may express outrage online even when they are not feeling it, because it signals their virtue. This could happen just as well with other emotions besides outrage. A third problem is that users do not report all their emotional states. For users to report an emotion in a text medium, it has to be powerful enough to disturb them. Thus, mild emotions such as boredom, curiosity, amusement, annoyance, etc.
may pass by unnoticed unless their subjects take the time and effort to state clearly what they are feeling. But online users have no incentive to continuously report their feelings via text updates, whereas in offline interactions their emotions would often get noticed via non-verbal communication such as bodily or facial expressions, and these can be relevant for interactions between people. Meanwhile, more extreme emotions such as anger, excitement or hatred will get the spotlight much more easily. For extreme emotions, one does not even need to self-report; sometimes the way a message is written is enough to tell something about the emotional state of the user: exclamation marks and the use of Caps Lock are indicative of more extreme emotions. Hence, when compared to actual face-to-face communication, text-based messages are typically poor indicators of the full range of emotions felt by users. Certain emotions manage to prevail in online interactions while others do not get expressed and shared properly. To return to our previous question as to why outrage seems to be a more prevalent emotion online than in offline interactions: this could be due to the poor expression of the full range of emotions on social media, leading to a distorted view of the other users and of the online environment. The visibility of online outrage may be due to its multiple modes of expression, which are not shared by other, more subtle emotions. To use an apt expression of Brady and Crockett, online we are confronted with "emotional noise" (p. 79), meaning that the heavy

expression of certain (extreme) emotions tends to drown out other emotions, making their signalling invisible. Furthermore, this may also be due to the urgency of outrage, the fact that it cannot be controlled easily, and the fact that outrage can be used for 'virtue signalling' more than other emotions. And since the Internet is a medium of self-presentation facilitating spontaneous interactions (Nadkarni and Hofmann 2012), people might choose easy ways to present a virtuous self. These are hypotheses which would require further empirical and conceptual research. The text-centred way in which online social media platforms are designed focuses on the informational content of what is said, thereby often obscuring the emotional and expressive context. Such design ignores the possibility that not all the messages we communicate online are meant to inform, and that many times we say things just to vent, to express emotions, or to signal to others our allegiance to a community. This design rests on the rationalistic assumption exposed above when we discussed the pitfalls of offline deliberation. Online deliberation seems to assume that users are rational epistemic agents whose emotions do not matter. Online platforms thus repeat the same mistakes as real-life deliberation: focusing on what is said and ignoring the context and personal meaning of a message. Thus, context and subtext are regularly downplayed as insignificant add-ons to the message's meaning. The paradox of online communication is that the rationalistic bias promoted by text-based communication creates room for populist emotions, but not for the more nuanced, sympathetic and reflective moral emotions which are essential in a deliberation. This poses a problem for online deliberation because people will not be able to express their full range of emotions.

7.5 Designing Online Environments for More Emotionally Fine-Grained Expression

In this section we propose several design features meant to foster well-being through emotionally rich deliberation environments online. We think that current designs of social media platforms do not explicitly take the mediation of emotions into account, with possible detrimental effects for users' well-being. The design features proposed below are intended as a starting point for a wider debate concerning the emotional environments of social media. Should these environments be designed with a particular emotional state in mind (e.g. one related to well-being)? To what extent are the emotional reactions of users on social media the effect of the misuse of these media for purposes other than those they were intended for? These questions cannot be expanded on here, but deserve further elaboration. Furthermore, this debate on the limits of designing social media affordances for emotional well-being cannot be settled by philosophers alone, but rather deserves an interdisciplinary approach. Concerning online deliberation, be it on dedicated platforms or in social media groups, we propose several design features that could foster a more emotionally

rich environment. We propose that the first stage of an online debate should be dedicated to choosing what to debate: for example, concerning a policy proposal, users could vote on which policies they find most interesting. This choice option is already implemented in existing deliberation platforms, but only via text snippets which get up-voted. We propose to complement the text-based explanations of policies with video or audio clips in which proponents explain their policy proposals. This would give a more human touch to the debate, by making known the faces of the proponents and the emotions they attach to these proposals. A danger, as always with video-based content, is that some proposals might get voted for because their speakers are charismatic, and not for the content itself. This could be partially bypassed by having users first read the policy proposal in a text snippet, and then asking them to click on a video recording of the expanded proposal. In a next stage, users could have the opportunity to comment on these proposals and explain why they support a certain policy. In this phase, users could be encouraged to also post video or audio clips with their comments on the policy. Audio clips would probably work better, since these would preserve anonymity while still allowing for a personal touch: the voice of the user, with their emotions discernible from the recording. In the final stage, when a policy has been put to a general vote, the final debate could take place in a video-conference format, with each side designating representatives to speak for it. Users could then watch the debate, either live or recorded, and cast their vote. Again, the danger of charisma-based votes needs to be averted. One possibility would be to combine video with text transcripts. Thus, before casting their votes, the users would be asked to read the transcript of the debate.
The reading would ask users to focus on the content of the debate, while the emotions expressed in the debate would still be fresh in their minds. For regular discussions among social media users, no formal constraints can be imposed. After all, nobody can be hindered from starting a deliberation in a group or in a private chat. However, we propose that text comments and messages be screened for sensitive words. Once a user types a negative message containing certain trigger-words, an AI algorithm could detect it and pop up the question "Are you sure you want to send this?". This feature is already being experimented with by Instagram,3 but we think that it would also help to make users understand the emotional consequences of their actions, not just by asking them if they are sure, but, for example, by also showing them the image of a suffering face, or an emoticon. Overall, we suggest moving away from asynchronous text-message conversation to a format more expressive of the interlocutor's bodily presence: their tone of voice, face and movements could be recorded, either via video or at least audio. This can help interlocutors to remember that the other online user is a human being who may feel affected by their messages, and that everyone matters equally. If

3  See https://www.bbc.com/news/technology-48916828

people cannot hide behind the anonymity of a nickname but see each other as real human beings with feelings to be acknowledged, then we hope that such online debates could become more meaningful for all participants, possibly leading to a form of digital civic well-being. On a final note, we are aware that video alone cannot solve all the problems of text-based communication without adding problems of its own. The rising popularity of vloggers on YouTube has not always led to more emotionally aware users; rather, new phenomena such as self-radicalization after watching YouTube videos (Alfano et al. 2018) have become possible. In this paper we wanted to draw attention to the hypothesis that text-based communication undercuts emotional expression in a very particular way, imposing its own media logic on existing emotions. The solution proposed here is not to give up text-based messages entirely, nor to replace text with another medium,4 but to look at the possible convergence of multiple media: text and video, text and sound clips, text and emoticons, text and images, etc.
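The trigger-word screening proposed in the previous section can be illustrated with a small sketch. This is a hypothetical, minimal implementation under the assumption of a plain keyword list; a deployed system (such as Instagram's) would rely on a trained classifier rather than word matching, and the trigger words below are purely illustrative.

```python
# Minimal sketch of the proposed pre-send screening step.
# The trigger-word list and matching logic are illustrative assumptions,
# not any platform's actual implementation.

import re

TRIGGER_WORDS = {"idiot", "stupid", "hate", "shut up"}  # hypothetical list


def find_triggers(message: str) -> list[str]:
    """Return the trigger words/phrases present in a message."""
    lowered = message.lower()
    found = []
    for trigger in TRIGGER_WORDS:
        # Match whole words/phrases only, so 'hate' does not match 'whatever'.
        if re.search(r"\b" + re.escape(trigger) + r"\b", lowered):
            found.append(trigger)
    return sorted(found)


def screen_message(message: str) -> str:
    """Decide whether to send directly or ask the user to reconsider."""
    if find_triggers(message):
        # In a real interface this would pop up a dialog, possibly showing
        # a suffering face or an emoticon, as suggested above.
        return "Are you sure you want to send this?"
    return "sent"


print(screen_message("Great point, thanks for sharing!"))  # -> sent
print(screen_message("You are such an idiot!"))
```

The word-boundary matching is a deliberate design choice in this sketch: naive substring checks would flag innocuous words ('hate' inside 'whatever'), which is exactly the kind of false positive that erodes users' trust in such a nudge.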

7.6 Conclusion

In this paper we have suggested that emotions are mediated in an incomplete way on online social media due to the heavy reliance on textual messages, which fosters a rationalistic bias and a bias towards less nuanced emotional expressions. This incompleteness can take the form of obscuring emotions, showing less than their original intensity, misinterpreting the emotion, or eliciting emotions without feedback. Online interactions and deliberations tend to contribute to, rather than overcome, stalemates and informational bubbles, partly due to the prevalence of anti-social emotions. It is tempting to see emotions as the cause of the problem of online verbal aggression and bullying. However, we argue that social media are actually designed in too rationalistic a way, because of the reliance on text-based communication, thereby filtering out social emotions and leaving space for easily expressed antisocial emotions. Based on research on emotions that sees them as key ingredients of moral interaction and deliberation, as well as on research on text-based versus non-verbal communication, we propose a richer understanding of emotions, requiring different designs of online deliberation platforms. We propose that such designs should move away from text-centred designs and should find ways to incorporate the full range of human emotions so that these can play a constructive role in online deliberations.

4  There are currently other solutions being investigated by tech companies such as Apple – for example dynamic avatars – but we do not have the space to go into these here.

References

Alfano, Mark, J. Adam Carter, and Marc Cheong. 2018. Technological Seduction and Self-Radicalization. Journal of the American Philosophical Association 4 (3): 298–322.
Brady, William J., and Molly J. Crockett. 2019. How Effective Is Online Outrage? Trends in Cognitive Sciences 23 (2): 79–80. https://doi.org/10.1016/j.tics.2018.11.004.
Burr, C., M. Taddeo, and L. Floridi. 2020. The Ethics of Digital Well-Being: A Thematic Review. Science and Engineering Ethics. https://doi.org/10.1007/s11948-020-00175-8.
Damasio, Antonio R. 1994. Descartes' Error: Emotion, Reason and the Human Brain. New York: G.P. Putnam.
Derks, Daantje, Agneta H. Fischer, and Arjan E.R. Bos. 2008. The Role of Emotion in Computer-Mediated Communication: A Review. Computers in Human Behavior 24 (3): 766–785. https://doi.org/10.1016/j.chb.2007.04.004.
Han, Byung-Chul. 2017. In the Swarm: Digital Prospects. Trans. Erik Butler. Cambridge, MA: MIT Press.
Illich, Ivan. 1993. In the Vineyard of the Text: A Commentary to Hugh's Didascalicon. Chicago: University of Chicago Press.
Isin, Engin F., and Evelyn Sharon Ruppert. 2015. Being Digital Citizens. London/Lanham: Rowman & Littlefield International.
Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Kaliarnta, Sofia. 2016. Using Aristotle's Theory of Friendship to Classify Online Friendships: A Critical Counterview. Ethics and Information Technology 18: 65–79.
Lazarus, Richard S. 1994. Emotion and Adaptation. New York/Oxford: Oxford University Press.
Little, Margaret Olivia. 1995. Seeing and Caring: The Role of Affect in Feminist Moral Epistemology. Hypatia 10 (3): 117–137. https://doi.org/10.1111/j.1527-2001.1995.tb00740.x.
Marin, Lavinia, Jan Masschelein, and Maarten Simons. 2018. Page, Text and Screen in the University: Revisiting the Illich Hypothesis. Educational Philosophy and Theory 50 (1): 49–60. https://doi.org/10.1080/00131857.2017.1323624.
Mossberger, Karen, Caroline J.
Tolbert, and Ramona S. McNeal. 2008. Digital Citizenship: The Internet, Society, and Participation. Cambridge, MA/London: MIT Press.
Nadkarni, Ashwini, and Stefan G. Hofmann. 2012. Why Do People Use Facebook? Personality and Individual Differences 52 (3): 243–249. https://doi.org/10.1016/j.paid.2011.11.007.
Nussbaum, Martha C. 2001. Upheavals of Thought. Cambridge: Cambridge University Press.
Roberts, Robert Campbell. 2003. Emotions: An Essay in Aid of Moral Psychology. Cambridge/New York: Cambridge University Press.
Roeser, Sabine. 2006. The Role of Emotions in Judging the Moral Acceptability of Risks. Safety Science 44 (8): 689–700.
———. 2011. Moral Emotions and Intuitions. Basingstoke: Palgrave Macmillan.
———. 2018. Risk, Technology, and Moral Emotions. London: Routledge.
Roeser, Sabine, and Cain Samuel Todd. 2014. Emotion and Value. 1st ed. Oxford: Oxford University Press.
Scherer, Klaus R. 1984. On the Nature and Function of Emotion: A Component Process Approach. In Approaches to Emotion, ed. Klaus R. Scherer and Paul Ekman, 293–317. Hillsdale/London: Lawrence Erlbaum Associates.
Slovic, Paul. 2010. The Feeling of Risk: New Perspectives on Risk Perception. London: Earthscan.
Solomon, Robert C. 1993. The Passions: Emotions and the Meaning of Life. Indianapolis/Cambridge: Hackett Publishing.
Spring, Victoria L., C. Daryl Cameron, and Mina Cikara. 2018. The Upside of Outrage. Trends in Cognitive Sciences 22 (12): 1067–1069. https://doi.org/10.1016/j.tics.2018.09.006.
Verdiesen, E.P., M.V. Dignum, M.J. van den Hoven, Martijn Cligge, Jan Timmermans, and Lennard Segers. 2016. MOOD: Massive Open Online Deliberation Platform: A Practical Application. In ECAI 2016: 22nd European Conference on Artificial Intelligence, 29 August–2

September 2016, The Hague, The Netherlands, Including Prestigious Applications of Artificial Intelligence (PAIS 2016): Proceedings, ed. Gal A. Kaminka et al. Amsterdam: IOS Press.
Wojcieszak, Magdalena E., Young Min Baek, and Michael X. Delli Carpini. 2009. What Is Really Going On? Structure Underlying Face-to-Face and Online Deliberation. Information, Communication & Society 12 (7): 1080–1102. https://doi.org/10.1080/13691180902725768.
Zagzebski, Linda. 2003. Emotion and Moral Judgment. Philosophy and Phenomenological Research 66 (1): 104–124. https://doi.org/10.1111/j.1933-1592.2003.tb00245.x.

Lavinia Marin is a Postdoctoral Researcher in the Ethics and Philosophy of Technology Section at TU Delft, the Netherlands. Her current research investigates conditions of possibility for online critical thinking by looking into how thinking is mediated by technologies and shaped by embodied actions and emotions. In addition, she is involved in a research project at TU Delft focused on identifying best practices for ethics education in engineering. [email protected]

Sabine Roeser is Professor of Ethics at TU Delft (distinguished Antoni van Leeuwenhoek Professor) and Head of the Ethics and Philosophy of Technology Section. She is Integrity Officer of TU Delft. Roeser has been working at Delft University of Technology since September 2001. Roeser's research covers theoretical, foundational topics concerning the nature of moral knowledge, intuitions, emotions and evaluative aspects of risk, but also urgent and hotly debated public issues on which her theoretical research can shed new light, such as nuclear energy, climate change, biotechnology and public health issues. Roeser is frequently a member of governmental advisory boards. She has given numerous interviews and public lectures. Her most recent book is Risk, Technology, and Moral Emotions (2018, Routledge). Roeser has led various research projects with highly competitive funding from the EU and the Dutch Research Council (NWO). She is Co-chair of the Ethics of Socially Disruptive Technologies project, a 27-million-euro, 10-year multi-university research programme. Also see: http://www.tbm.tudelft.nl/sroeser Research Interests: Moral Emotions and Intuitions, Metaethics, Risk Ethics, Energy Ethics, Climate Ethics, Digital Ethics, Public Health Ethics and Ethics of Medical Technologies. [email protected]

Chapter 8

Ethical Challenges and Guiding Principles in Facilitating Personal Digital Reflection

Andrew Gibson and Jill Willis

Abstract  We reflect on the ethical challenges associated with designing, developing and implementing a socio-technical system for personal reflection. During our work in creating and refining GoingOK, we have encountered ethical challenges relating to vulnerability, safety, anonymity, privacy, transparency, agency, trust, identity, well-being, and resilience. We discuss these challenges as they occurred in four key activities: (1) promoting author well-being, (2) negotiating meaningful participation, (3) balancing stakeholder interests, and (4) initiating socio-culturally sensitive analysis. We then identify four guiding principles that have directed our work and helped us address the ethical challenges. They are the (1) interaction principle, (2) pragmatic principle, (3) respect principle, and (4) trust principle. We conclude with our thoughts on how both the challenges and the principles might inform other work.

Keywords  Ethics · Well-being · Personal reflection · Reflective writing analytics · Socio-technical systems

A. Gibson (*) Science and Engineering Faculty, Queensland University of Technology (QUT), Brisbane, QLD, Australia. e-mail: [email protected]
J. Willis Faculty of Education, Queensland University of Technology (QUT), Brisbane, QLD, Australia
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020. C. Burr, L. Floridi (eds.), Ethics of Digital Well-Being, Philosophical Studies Series 140, https://doi.org/10.1007/978-3-030-50585-1_8

8.1  Introduction

8.1.1  Personal Reflective Writing

Reflective writing is an author's subjective narrative of personal experience, infused with an affective commentary on that experience. This type of writing requires a mental process of self-reflection and has been shown to yield both personal


well-being enhancement (Pennebaker 2010; Pennebaker and Seagal 1999) and learning benefits (Gibson et al. 2016; Mäeots et al. 2016). With the exception of ruminating, which can have a negative impact on well-being, reflecting is associated with improved engagement with others (Murray et al. 2014), overcoming loss (Nerken 1993), identity formation (Berzonsky and Luyckx 2008), and general health improvements (Pennebaker 2010). Significant benefits have also been identified for learning, and have been documented in areas as diverse as Nursing (Murphy and Timmins 2009), Teaching (Hoban and Hastings 2006), and Pharmacy (Tsingos-Lucas et al. 2016).

Reflective writing in online journals enables these representations of self-reflection to become shareable with others. Where personal reflective writing is shared with another, such as a mentor who can engage in dialogic conversation, or a teacher who can provide feedback or adapt their teaching based on the experiences of their learners, there are additional benefits for the authors and also potentially for the interlocutor.

Computational analysis of reflective writing aims to provide additional insights about authors, their thinking, and aspects of their well-being. Termed reflective writing analytics (RWA) (Gibson 2017), this computational analysis enlists approaches from natural language processing (NLP) for the task of extracting insights from reflective writing, in a manner that can be applied to large reflective corpora. For facilitators of groups of learners, the computational analysis enables the benefits of reflective writing and the corresponding analytical insights to be considered at a larger scale, drawing on large numbers of authors. Scaling RWA can present opportunities for assisting large populations, such as groups of learners in university cohorts (Gibson et al. 2017).
Although the personal value of self-reflection is well established, when this experience moves into an online writing space, and is extended to include RWA with other asynchronous interlocutors such as teachers or mentors, new ethical considerations become visible. Socio-technical innovation inevitably raises ethical issues (Somerville 2006). In our work with a digital online reflective writing environment, we have faced ongoing ethical decisions about how the technical and pedagogical dimensions might provide the most beneficial impact for the authors. The following discussion seeks to identify these lines of ethical analysis through an examination of the challenges we encountered and the principles that guided our work.

8.1.2  The GoingOK Reflective Writing Web Application

GoingOK, an online reflective writing application,1 was created to address a pragmatic need: to understand the personal challenges of a group of early career teachers in their first year of teaching. We began an investigation into how web-based

1. See http://goingok.org


technologies could facilitate personal reflective writing, for the benefit of the teachers as well as to provide data for educational research. The researchers2 were interested in the experiences of first-year teachers in remote locations, to learn how new teachers might be better supported by their university preparation and employers. In late 2012, the first iteration of the GoingOK web application was developed, and throughout 2013 GoingOK was used with a small group of early career teachers as part of the Becoming Colleagues research project (Willis et al. 2017). In the following years, GoingOK has undergone numerous development changes (from minor improvements to complete re-writing of the software), and as at September 2019 it had been used by over 2500 people who have written more than 14,000 reflections.

GoingOK collects three data types with every reflection entry (see Fig. 8.1): (1) the reflection point, selected by the author using a visual slider which ranges from distressed to soaring, with goingOK in the middle; this is captured as a number between 0 and 100; (2) the reflection text, natural language text typed into a scrolling text box in response to the prompt Describe how you are going...; and (3) a reflection timestamp, which is added to the reflection entry by the server when the author clicks the save button to submit their reflection. Notably, the GoingOK data is collected in order to understand how authors are grappling with well-being related issues; it does not attempt to perform a psychological assessment of subjective well-being. Significantly, when observed statistically over a large number of authors, the reflection point does align reasonably well with psychological theory. However, GoingOK was not developed as a clinical tool, and has not been validated against any psychological scales of well-being. Past reflection entries are displayed in the browser in two ways: firstly, above the reflection entry area in the form of a reflection point chart (also referred to as a plotline) (see Fig. 8.1), and secondly, below the reflection entry area as a reverse-ordered list of the entries (see Figs. 8.2 and 8.3).

Fig. 8.1  GoingOK interface for entering a reflection
Fig. 8.2  Example reflection point chart
Fig. 8.3  Example past reflections

2. The core Becoming Colleagues research team included us (Jill Willis and Andrew Gibson) together with Leanne Crosswell and Chad Morrison.
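The entry structure described above (a 0–100 slider point, a free-text response, and a server-side timestamp) can be sketched as a simple record. This is an illustrative sketch only; the field names and the validation rule are our assumptions, not GoingOK's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ReflectionEntry:
    """One reflection entry: slider point, free text, and a timestamp.

    Hypothetical illustration of the three data types GoingOK captures;
    field names are not taken from the GoingOK codebase.
    """
    point: int  # slider value, 0 (distressed) to 100 (soaring), goingOK in the middle
    text: str   # response to the prompt "Describe how you are going..."
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))  # set on save

    def __post_init__(self):
        # Mirror the interface constraint: the point is always 0-100.
        if not 0 <= self.point <= 100:
            raise ValueError("reflection point must be between 0 and 100")


entry = ReflectionEntry(point=50, text="Going OK this week, settling in.")
```

Validating the slider range at construction time mirrors the interface constraint that the reflection point is always captured as a number between 0 and 100.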

8.1.3  Reflecting on Designing, Developing and Implementing GoingOK

In the following sections we provide our reflections on ethical challenges faced during the design, development and implementation of GoingOK over its first seven years, and we identify four main principles which have guided us in navigating these challenges.


Table 8.1  GoingOK projects

Year  Project name                             Cohort type                                Code
2013  Becoming Colleagues (Queensland)         Early career teachers                      BCQ
2014  Science & IT Group Work Reflection       Undergraduate science students             SGW
2014  Professional Experience                  Teacher education students                 PEX
2014  Becoming Colleagues (South Australia)    Early career teachers                      BCSA
2015  Beginning Well                           Early career teachers                      BW
2017  Transition to Teaching                   Final year education students              T2T
2018  Faster Feedback                          Postgraduate education and IT students     FF
2019  Preparing Assessment Capable Teachers    Postgraduate teacher education students    PACT

GoingOK is more than a web application. It is a socio-technical system that includes the software, technical infrastructure to make the software operational, social infrastructure to ensure that the implementation holds value to the various stakeholders, and the interactions of the stakeholders themselves. As such, we examine the whole socio-technical system. As a guide for our discussion, we include quotes related to these ethical decisions from the GoingOK reflections of one of us (JW), which were collected during a number of projects. Table 8.1 provides a summary of the projects, the year, and the groups involved as authors. The codes provided are used to reference projects throughout our discussion. In looking back through the projects, we trace the ethical challenges and commitments that we made when designing with multiple human stakeholders engaging with the socio-technical system.

8.2  Ethical Challenges

We have encountered (and continue to encounter) ethical questions that have been challenging to resolve. These relate to topics such as vulnerability, safety, anonymity, privacy, transparency, agency, trust, identity, well-being, resilience, freedom, performativity and compliance. Looking back over the projects, these key ethical questions have occurred in four important activities: (1) promoting the well-being of those who are writing the reflections, (2) negotiating participation that is purposeful and holds meaning for participants, (3) balancing the interests of various stakeholders associated with the context of reflection, and (4) initiating socio-cultural analysis of the reflection data.


8.2.1  Promoting Author Well-Being

There is an established body of research indicating that when people write reflectively they positively impact their personal well-being (Pennebaker 2010; Pennebaker and Seagal 1999). GoingOK was designed to capitalise on this work by capturing an author's reflective writing while also promoting their well-being. The Becoming Colleagues (BCQ) project represents our first attempt at doing this. Participants in this project were early career teachers (ECTs) beginning their teaching careers in remote rural Queensland schools. Previous research had shown that the well-being of ECTs, particularly in this context, is often at risk (Kelly et al. 2018) and that a large number of ECTs leave the profession in the first 5 years (Buchanan et al. 2013; White 2019). The remoteness of the ECTs meant that there were challenges in maintaining regular contact to encourage them to reflect regularly. Yet we expected that the benefits of reflection would be greater when it was performed regularly. We decided that an email with a weblink to GoingOK would be sent automatically from within the software to prompt authors to reflect. It was important that the process was easy and convenient.

Fri 8 Feb 2013 Getting this email and having the page look so easy to use invites me to quickly reflect. It works well.

This reflection is one of the first in the GoingOK system, and indicates relief that the technical design of an email prompt worked. There was also an increasing awareness of how the simple design of the page acted as an invitation to engage in personal reflection. The importance of this sense of invitation and ease of use became evident through later projects. When technical issues made it harder for authors to gain quick access to reflection, for example when organisational computer servers blocked the email reminders, new authors tended not to persist and therefore missed the associated benefits. Part of the benefit for the author is to log concerns they may not be able to say out loud to others, and to use the online space to work them out. An early decision with the ECTs was co-developing the language of being authors who were "narrating stories", with a "plotline" emerging over time from their reflection points. This was to resist a sense of being seen as patients who were being monitored, and to acknowledge that there would be no therapeutic support available in response to their entries. The relationship between the ECT authors and the online reflective space was for personal benefit. Early authors highlighted that GoingOK felt like "a big listening ear", and that it was a safe place to 'vent' or work through topics they could not share with mentors or colleagues in their workplace.

Fri 24 Oct 2014 Later same day.... Benefits of reflection: angry, troubled thoughts can be poured out in this safe space and be transformed into an objective third space. It created enough distance for me to regain perspective.

The relationship between the authors and the interlocutors or facilitators who are remotely observing reflections is one that has required ongoing thought. In the first


project (BCQ), it was surprising how personal the reflections felt to the distant reader/researcher. Being privy to this thinking generated a sense of responsibility for responding to it.

Wed 20 Feb 2013 There has been another participant get in touch and want to be part of the project, which is great news. I also saw that a number of entries had been made which was encouraging. While I was concerned that the latest data entries were showing that our participants were struggling, C reminded me that this was an important opportunity for our young colleagues to learn resilience.

For the author to experience well-being, they needed to also experience the process of deliberating and deciding what to do for themselves (Archer 2007; Crosswell et al. 2018). There was an ethical challenge in whether a response was required. Design ideas, such as closing a feedback loop through automated prompts, were discussed and then delayed when it became clear that the act of reflecting online was promoting well-being in several ways. For example, the decision of where to place the slider encouraged introspection, almost a recalibration. Seeing the developing plotline helped to cast momentary concerns into a longer timeframe.

Thu 25 Apr 2013 Goingok shifts my sense of resilience to the upper levels each time I write as it makes me take the long view.... time is unwound and buoyant and I am not alone.

Qualitative analysis showed that within the entries, authors would often start by discerning what was of concern, then work through their deliberations, and by the end of the entry have reached a position of reassurance or hope. The well-being benefits seemed to flow from the author thinking about the situation they were in, and reflecting back on what they could take away from that situation, which led to a focus on how it might benefit them in the future. Another example, illustrated in the following reflection, is how the continuous record and plotline in GoingOK provide a connection between the past and future self.

Thu 28 May 2015 I have been reading more about reflexivity and the inner conversation as a way of making sense of our practices and identities in a continually shifting world. This record is a sense of that conversation and how troubling various events have been for that continuous sense of self. The graph that I can see sitting above me reminds me that I have a continuous sense of self, even if the distributed and diverse experiences within one day might challenge a sense of continuity.

An important consideration in the ongoing design of GoingOK was how to promote this well-being, and to ensure that other features of the design did not detract from the benefit that the author can gain from engaging in the reflective writing process. However, reflexivity is less beneficial to well-being when people are prone to rumination, as the reflection process can exacerbate rumination and potentially lead to heightened anxiety or depression (Lengelle et al. 2016). Reflection in a state of "fractured reflexivity" can intensify distress and disorientation rather than lead to purposeful courses of action (Archer 2012). An ethical challenge is how to discourage those at risk of rumination from reflecting.

Theoretically it might be possible for natural language processing (NLP) to detect rumination. However, for this to occur the person would need to engage in the


reflective writing process, exposing them to risk. Importantly, the use of GoingOK tends to be outside of a clinical setting where those at risk of rumination might receive care and guidance with respect to any reflective activity. Hence we require, within practical constraints, that potentially ruminating authors be identified within the social context in which the technology is put to work, rather than through the technology itself. For example, if it is known that a student has a mental health problem where rumination is a significant risk, then they can be advised by the person in charge of the specific GoingOK implementation not to use the software unless professional support can be provided to the student throughout their use of the software. Further, in the design of GoingOK, we have been careful to ensure that it is not perceived as a psychological diagnostic or intervention tool. We believe that these are significant ethical considerations that would need to be taken into account if the software were ever to include diagnostic or intervention features.

The principle of considering GoingOK as a socio-technical system changes the way we consider the ethical challenges associated with such a system. Ethical challenges such as promoting author well-being are not purely the responsibility of the software, but are also the responsibility of the context in which that software is implemented. In working through this ethical challenge, over time we have developed social resources, such as web-based guides and dialogic ways of working, to promote the use of the software in a way most likely to benefit the well-being of the authors.
We consider the ethical challenge of promoting author well-being as twofold: firstly, the development of the software itself in order to benefit the author, and secondly, the development of social and pedagogical resources to ensure that the software is put to use in a way that is most likely to improve the well-being of the author.

8.2.2  Negotiating Meaningful Participation

While we could see evidence that GoingOK supported personal well-being, this intrinsic worth was not enough to engage new users. Participation had to be meaningful for authors, and this involved being sensitive to both the individual characteristics of the author and the social context in which they were engaging in the process of reflection. As the project was taken up in new contexts, there was significant discussion about what pedagogic purposes GoingOK would serve. This negotiation had to occur with facilitators of groups about why they wanted to use GoingOK, what might be meaningful for authors, and for researchers and other stakeholders in the activity. What was needed for learning or research was not guaranteed to align with the personal needs of the author. GoingOK users do not use the software merely for well-being purposes. In fact, frequently the authors do not realise that they are receiving any well-being benefits at the time of writing. GoingOK authors may instead initially find meaning in keeping a record of events or completing a required reflection for a university task.


Authors regularly reflect that it is only when they look back over their GoingOK plotline that they realise the regular reflections have helped them recognise what they have learned, or that they needed to make changes, or that it helped them stay on track with their study goals. Therefore, one of the challenges is negotiating why an author should use GoingOK in the first place, as opposed to some other tool, to record their thoughts. We found that, prior to discussing the use of GoingOK, potential authors needed to learn more about how the process of reflecting on learning has helped other authors to improve their understanding of themselves and the task they are attending to. Once students have accepted that reflection on learning can benefit their learning, they tend to be more receptive to using a tool which supports them in this reflection process. The time spent in negotiating a clear social benefit to individual authors was also critical, as for authors to benefit, they had to feel comfortable using the tool over time. Each time a new project started, there were anxious moments about whether we had made the project sufficiently meaningful to encourage participation.

Fri 23 Jan 2015 Just saw the first L project participant entry which made me very happy. Am feeling a bit anxious about getting participants started on using it.

Meaningful participation has included negotiating mutual benefits for authors and facilitators. In the Beginning Well (BW) project, the reflections of the early career teacher authors supported those teachers in their transition to practice, as well as informing the facilitators from the employing system how to better support the ECTs for success. Using GoingOK for the mutual benefit of authors and an employer raised important ethical design dilemmas: how to protect the identities of vulnerable beginning employees from being judged while these authors were still establishing themselves in new careers; how to enable authors to reflect freely while avoiding feelings of surveillance by an employer and any negative consequences; even how to invite authors to participate without any sense of coercion. Anonymity of the user was absolutely critical. These considerations led to a number of design decisions that are outlined in Sect. 8.2.3.

Benefits to participants also depended on technical ease of use. What appeared to be merely annoying technical problems for students, such as difficulties logging in to GoingOK, became significant pedagogical issues with wider ripple effects. For example, if students were frustrated in their attempts to use the online tool, they disengaged from their facilitator's plans (T2T). Potential authors quietly gave up without letting anyone know they were having difficulties (BW). This meant that their views were not being considered by the mentors who were seeking to find ways for their schooling system to support them. In the FF project, where use of GoingOK was mandated for an assessment task, student authors who had difficulties logging on vented their frustration in emails to their lecturers. Student satisfaction ratings are used as high-stakes performance indicators for tertiary lecturers, so the cost of technical disruption was risky for the facilitating lecturers.
Authors and facilitators needed to have an early successful experience of GoingOK to build the trust that was needed for them to continue using the tool so


that the benefits of reflection could be experienced. Even the timing of technical updates to address concerns had the potential to disrupt the trust that had been established.

Fri 2 Mar 2018 We are discussing the issues that some students are having logging onto the software. Some of it is more likely to do with the way that their internet works with the software or their permissions on their machine, or their firewall at work. This is informing Andrew's ongoing development of a new version. We discussed how important the timing is before changing over to a new version and how important trust is again. Where students are new to the experience or there is assessment, the risk is higher. Feedback cycles are fuelled by trust. They revolve on funds of capital that are built and spent as cycles revolve. Each communication and response generates future trust, and there is the idea of borrowed trust as other team members borrow from their trust in me and my trust in us.

Trust has been a recurring challenge throughout the development of GoingOK. We found that technical stability and predictability were essential to build trust, and yet well-established trust was still fragile and could be eroded quickly by unforeseen circumstances. Social trust in GoingOK was enhanced by technical adaptations, such as using a familiar login like 'Login with Google', and by providing supporting pedagogic resources such as YouTube videos explaining why the tool might be useful, or how to log in. The social trust generated by a facilitator's confidence in using GoingOK and promoting its benefits has also been essential in helping authors initially engage and see the benefit for themselves. Additionally, social trust within the author group has encouraged others to engage successfully. For example, in the FF project, external post-graduate students in an online tutorial provided support to one another:

Tue 20 Feb 2018 The first live class of 31 students are logging into GoingOK at the moment. I am interested that a number of people are succeeding and giving advice to those who are struggling. There is a question about editing...if others can see their reflections...or helping others access via chrome.

For many authors, once they had overcome their initial uncertainty about how to use a new digital tool, they continued to reflect without hesitation. For other authors and facilitators, there was an additional concern about what the reflective data might be used for, and who would read their responses.

Tue 13 Feb 2018 Yet for others in our team there is an awareness of an unknown audience and unknown users. They ask - Who will read this? This hesitancy speaks to a deep vulnerability that can intervene before the keystrokes are made. I don't have the same reluctance and this might be because I know the designers and there is interpersonal trust to begin with.

This wariness is part of a growing public awareness of online data use.3 For tertiary education students and lecturers who have been early adopters of GoingOK, there is also a growing critical awareness of the concerning impacts that digital data has on learners and their learning (Lupton and Williamson 2017). The distant possibility of data misuse does not seem to reduce the likelihood of a person using online tools if the benefit seems more immediate than the risk, in what Hallam and Zanella (2017) identify as the privacy paradox.

3. An Australian example can be found in the Australian Privacy Foundation (https://privacy.org.au).


However, the digital environment also means that there can be less transparency and control over who can access these personal insights. Ephemeral digital traces of reflective thoughts-in-the-moment potentially remain visible for a much longer time frame. These are fundamental ethical concerns about ownership, privacy and power relationships, and they often led to a process of balancing the interests of different stakeholders.

8.2.3  Balancing Stakeholder Interests

When the various stakeholders' interests do not easily align, the GoingOK team has prioritised a negotiated approach to find middle ground where there is the greatest overlap and mutual benefit (Somerville 2006). The highest priority has been given to maintaining the integrity of the author's role as owner of the reflections; however, the way this balancing process is undertaken depends upon the context in which the reflection takes place. In some projects, such as the BC project, there was direct negotiation between the research team and authors about technical and social design decisions to balance stakeholder interests. In other projects, such as the BW and FF projects, there were formal and imbalanced power relationships between facilitators and authors. In the BW project the researchers were separate from the context. In the FF project, the researchers were also teachers of the students who were being invited to use GoingOK. The variability of contexts, where we were sometimes tightly associated with the facilitating team and other times loosely connected, meant that a 'one-size-fits-all' technical fix was not possible. We examine the tight and loose principle in more detail in Sect. 8.3.1.

To manage these competing interests, early versions of the GoingOK software included research consent mechanisms within the software; however, due to the impact that contextual factors had on the formal ethics process,4 a design decision was made to remove the consent process from the software and allow the specific consent mechanisms to be managed outside it (e.g. with paper or PDF consent forms distributed to participants by means appropriate to the context).
Within the overall socio-technical system of GoingOK, this essentially shifts the consent process from being technically facilitated (in the software), where it needs to be pre-determined, to being socially facilitated (by people), where it can be contextually responsive. What remains in the software is a more general ethical baseline that authors are made aware of when they sign up, and which is visible in the help section before signing in. The key elements of this baseline can be summarised as follows:

• By default, GoingOK does not store any personal information together with an author's reflective writing. The author's anonymity in the system is by design.

4. In our local context, GoingOK does not receive ongoing ethical approval from the University. Rather, separate ethical approvals are sought for each research project in which GoingOK is used.


• Logins are validated by virtue of holding an account with Google, but no personal information is collected.
• Authors own their reflective writing and will always remain the owner of this data. Authors can control their data and request that it be deleted on cancellation of their account.
• While a GoingOK account is current, GoingOK may analyse author reflection data for the purposes of providing a range of services, including insights on the reflective writing and on the writing of groups of people.
• Authors cannot be identified through any of the analyses that GoingOK performs on their reflection data.
• Some of these analyses may form the basis of computer models; these models are owned by the GoingOK implementers, may exist independently of the author's reflection data, and remain after an account is deleted.
• Authors may be asked if they are willing to help with research projects by allowing limited access to their de-identified reflection data, but researchers will not have access to personal information that may identify an author.
• No author is obliged to participate in any research.
• GoingOK will never give an author's personal information to a researcher to make contact, but may send an author a message requesting contact, which the author may ignore, decline or accept.
• Researchers will only be able to request contact like this if they have ethical approval from their institution to make contact with authors as part of the research.

In addition to the technical development of GoingOK, where it was being used for research purposes there were social negotiations about what data might be used only for teaching and learning purposes, and what type of additional ethical consent would be needed if teaching and learning GoingOK data were to be used for research. Potential conflicts of interest were negotiated through the formal research ethics processes. This meant that invitations for research were communicated in formal ways.
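The anonymity-by-design baseline above can be illustrated with a minimal sketch in which an account is keyed only by a random pseudonym, so reflections can be grouped per author, and deleted on request, without any personal information ever being stored. The class and method names here are hypothetical illustrations, not GoingOK's implementation.

```python
import uuid


class PseudonymousStore:
    """Stores reflections against a random pseudonym; no personal data is kept.

    A hypothetical sketch of anonymity by design: the only identifier in the
    store is an unguessable random ID, so analyses can group writing per author
    without the author ever being identifiable.
    """

    def __init__(self):
        self._reflections = {}  # pseudonym -> list of reflection texts

    def create_author(self) -> str:
        # The only identifier ever stored is a random pseudonym.
        pseudonym = uuid.uuid4().hex
        self._reflections[pseudonym] = []
        return pseudonym

    def add_reflection(self, pseudonym: str, text: str) -> None:
        self._reflections[pseudonym].append(text)

    def delete_account(self, pseudonym: str) -> None:
        # Honours the baseline: an author's data is removed on cancellation.
        self._reflections.pop(pseudonym, None)
```

Keeping identity outside the store entirely (login validation is delegated to an external provider, as in the 'Login with Google' approach) is what makes the "anonymity by design" claim hold: there is simply no personal field to leak.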
In the BW project, the invitation was given at the very start of the project, with the research acting as the prompt for using GoingOK. In the FF project, the teaching and learning purpose of helping students become more confident with their learning through meta-cognitive reflection was prioritised. The invitation to share GoingOK reflections with researchers was then sent at the end of the teaching experience, after authors could review their reflections. To balance stakeholder interests, and to maintain the conditions of sociocultural trust needed for successful take-up by authors, many of the projects have had a dialogic design element. Individual authors have reflected as part of a group, with a facilitator providing opportunities for whole-group reflection on the corpus of de-identified reflections. These designs were driven by ethical commitments that technical design should work within meaning systems valued by teachers and learners. They also reflected a sociocultural commitment to learning where feedback loops enable students and teachers to learn from one another, as they adapt practice and negotiate meaning through their exchanges and interactions (Pryor and Crossouard 2008). GoingOK feedback loops occurred when authors in focus groups discussed some sample reflections from the group and gave their interpretations (BC, BW, T2T, FF). A technical design feature was added in 2018 to enable an authorised facilitator to download a file (CSV) of de-identified reflections, to give general feedback to the group. Balancing stakeholder interests involved weighing utility, a desire to maintain author agency, and protection of the well-being focus through privacy and anonymity.

8.2.3.1  Utility

Sometimes teachers were reluctant to facilitate learning activities with GoingOK as they were concerned that the open prompt "How are you going?" might not generate useful reflections and might invite personal criticism of the teaching staff by student authors.

Thu 8 Mar 2018  On a brighter note, the trepidation and anxiety that was felt by other lecturers in the unit early on about GoingOK and feedback loops has lifted. The two other lecturers are reassured that students are not making personal judgements about them or their teaching, but instead are reflecting on their own learning and lives.
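The 2018 facilitator download described earlier in this section, a CSV of de-identified reflections, can be sketched as follows. The record fields and the notion of a per-author anonymous id are illustrative assumptions, not GoingOK's actual schema:

```python
import csv
import io

# Hypothetical reflection records; login identifiers are stored alongside
# but must never reach the exported file.
REFLECTIONS = [
    {"anon_id": "a1", "google_id": "g-123", "week": 3,
     "point": 70, "text": "Feeling more confident this week."},
    {"anon_id": "a2", "google_id": "g-456", "week": 3,
     "point": 40, "text": "The readings were hard going."},
]

def export_deidentified_csv(records):
    """Write only de-identified fields to CSV for an authorised facilitator."""
    allowed = ["anon_id", "week", "point", "text"]  # no login identifiers
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=allowed, extrasaction="ignore")
    writer.writeheader()
    for rec in records:
        writer.writerow({k: rec[k] for k in allowed})
    return buf.getvalue()

csv_text = export_deidentified_csv(REFLECTIONS)
```

The allow-list design choice (naming the exportable fields, rather than deleting known-sensitive ones) means a newly added personal field is excluded by default.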

While the fear of personal criticism was lessened through experience, facilitators sometimes established additional pedagogical prompts to direct student reflections towards specific learning foci, such as an academic reading or a learning experience. The extent to which the teacher asserts this expectation for the reflection process affects the topic, tone and formality of the reflections the students write. Students can tend to write to the teacher instead of to themselves, and with a consciousness of receiving good marks for the work. This can result in instrumental reflective writing. In contrast, if the teacher is less prescriptive about what the students should write, the students tend to be more flexible, making connections between feelings and thought, and between personal and academic life, with entries indicating greater self-regulation that potentially helps them make richer learning connections. Balancing the tension between open and more scaffolded invitations to reflect includes balancing the perceived utility of reflections with the benefits to authors.

8.2.3.2  Agency

When there have been conflicting stakeholder interests, the touchstone for decisions has been the ethical benefit for authors. For example, course designers have been interested in accessing group reflective data in order to improve course design; however, this audience and purpose may not have been negotiated with the authors in the first instance. Data has not been shared by us unless there has been ethical permission from authors, through the formal mechanisms of research ethics approval negotiated with research committees, that data can be used for further publication.


However, de-identified GoingOK data from a cohort, gathered during teaching and learning activities, has been used by their lecturers to further those learning activities during discussions or planning, with the understanding that such data should not be gathered for publication. Granting permission has been managed by us in projects so far, but as the software is open source, this stance may be harder to govern. The agency of the facilitator is recognised in making contextually based decisions, as is the limited agency of the GoingOK team in monitoring how data is used. Authors have been regarded as agents who should be able to control access to their reflective data. For this reason, the technical design has been set up with no requirement for institutional emails to be used to log in. A future design commitment is to find ways to enable authors to have greater agency over the analysis of data through automated feedback. The design has also imposed deliberate limits to author agency: reflections cannot be deleted or edited. A number of authors have requested editability of reflections.

Thu 22 Feb 2018  When reflections are linked into the assessment agenda, then there may be fears if spelling or ideas are not ‘correct’. These reflections are some of the first writing that students may be doing in returning to academic study and some of their early concerns are about whether they will be good enough in the writing department. When we raised editing with Andrew he explained his reasons for wanting to trace the revisions of thought as a valued human tracing of thinking. We need to address concerns through the pedagogic work where the students can be reassured that the early writing won’t be judged, and that there is value in tracing thinking, and metacognitive traces show up over time.

The decision not to enable editing of prior reflections was made to acknowledge that reflection is a product of in-the-moment thinking. This decision created ethical tensions when reflections were being used for assessment purposes, and has prompted future design plans to give authors greater agency to hide, or selectively share, reflections.

8.2.3.3  Anonymity

Reflection is a representation of an individual's inner conversation, and as such is an intensely personal record of human thought. The GoingOK online environment transforms this private thinking into a semi-public performance as reflections are typed and tagged with a time stamp. Unlike other social media forums, there are no ‘like’ or ‘share’ options, so GoingOK authors' thoughts and emotions can feel somewhat private to the reflecting author. An early design decision to maximise author agency and privacy was to prioritise author anonymity. This is achieved by authors choosing their own log-in identity, by ensuring personal information is not stored with reflections, and by keeping analyses separate from data. This decision to prioritise author anonymity has created ethical challenges. Facilitators are unable to respond to individual reflections where the author might indicate distress. It has been important for authors to know that there will be no monitoring of their reflections in order to respond to individual needs, as their identities are not discoverable, by design. Where facilitators might suggest that reflective writing analytics (RWA) could ‘flag’ entries that indicate distress, the limits of RWA are clear. RWA is not able to distinguish text that human readers may recognise as distressed, as words can change meaning when used in different senses and contexts. For example, there are no special words in the phrase "end it", but in the context of someone who is very depressed, this phrase could indicate the possibility of suicidal thoughts. The need for some kind of support has instead been met through general social responses to the group that have pointed towards sources of help. A further ethical challenge has been how to recruit individual participants for research when their data is automatically made anonymous by the design. A social solution has been for authors to download their own reflections and share them. A technical solution has been for the GoingOK software to generate, for each author, a de-identified GoingOK identifier. Authors can share this ‘anonymous id’ through an anonymous survey link; it is then manually extracted by a GoingOK administrator who has sighted the approved ethical agreement. Automation of access to analysis has not yet been achieved, as human care for the reflections has been needed, with each context requiring a balancing of needs.
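The de-identified identifier and consent-survey matching described here might be implemented as a random token that carries no personal information, held in a mapping kept apart from the reflection store. The following is a sketch under those assumptions, not the actual GoingOK code:

```python
import uuid

# Mapping from login identity to anonymous id, kept apart from reflections.
_login_to_anon = {}

def anon_id_for(login_id):
    """Return a stable anonymous id for a login, created on first use.
    Because the id is random (uuid4), it cannot be derived from the login,
    so sharing it reveals nothing about the author's identity."""
    if login_id not in _login_to_anon:
        _login_to_anon[login_id] = uuid.uuid4().hex
    return _login_to_anon[login_id]

def match_survey_consent(consented_anon_ids, reflection_store):
    """Select only reflections whose authors shared their anonymous id
    through the consent survey (a manual, human-checked step in practice)."""
    return [r for r in reflection_store if r["anon_id"] in consented_anon_ids]
```

Making the id random rather than a hash of the login is the key privacy choice: a hash could be reversed by anyone who can enumerate logins, whereas a random token cannot.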

8.2.4  Initiating Socio-culturally Sensitive Analysis

Digital reflections promise that computational analysis will be time-efficient, especially when dealing with a large number of reflections from a large number of authors, but this promise is fraught with ethical tensions. What is easy to analyse computationally may not be meaningful. For us, this has meant that computational analysis has depended on a first round of human qualitative analysis. Qualitative human-led analysis was often co-constructed with project partners during deep and rich discussion. In the BW and T2T projects, initial analysis occurred between authors and researchers, and then facilitators and researchers. In the FF and PACT projects, the analysis occurred between researchers and facilitating lecturers. This first stage of analysis has typically focused on common themes, evidence of reflection, and what these meant for learners. It was a process that was both productive and time-consuming.

Tue 30 Jun 2015  Just finished data analysis session with L data. Google docs and co-constructing is rich, time consuming, open, unstructured and new ground for me. I have to be present, fully listening. Yet as we invest the time in thinking aloud, our coding, shared understanding and theorizing gathers momentum. We are seeing immediate impact and relevance to both theory and potential practice which is exciting. Time is again an issue, but booking ahead into diaries seems to be a good way to make progress possible.

The ethical challenge was that while quick analytical representations, such as a contour map of a group's reflection points over time (see Fig. 8.4), were impressive and could be shared with a cohort to show that their reflections were being taken seriously, they were not meaningful without the contextually informed qualitative


Fig. 8.4  Example contour map of reflection points for a cohort (reflection point, 0–100, plotted by week, 0–52)

analysis. However, the qualitative analysis was time-consuming and slow. Developing relevant analytics from the qualitative analysis led to meaningful approaches, but often well after the learning event that sparked it, and after the authors who created the reflections had moved on. The facilitators and teachers who engaged with the raw reflective data also found that reading the unfiltered emotions of the authors was emotionally demanding. Often the students were reflecting about the challenges of learning, and were registering what they were finding difficult or hard to understand.

Wed 28 Aug 2019  Again, I am putting off looking at the group GoingOK data... I am hesitant to read the data as it is more work and it is emotional work. I miss having a team around me to talk it through with me. There is such strength in the dialogue, and this time I am solo and not enjoying it nearly as much.

A social solution has been to continue a dialogic team approach to the analysis, so teachers and mentors can be supported to engage with emotionally demanding work, though this has also added time and potential risks to confidentiality.

8.3  Guiding Principles

Working ethically in digital environments cannot happen only through formal ‘hard’ processes of ethical approval; it also occurs, importantly, through ‘soft’ governance, where decisions are based on what is socially preferable (Floridi 2018), and through commitments to moral practices (Slade and Prinsloo 2013). We identify four principles that we have returned to on a regular basis throughout the development of GoingOK in navigating the challenges illustrated in the previous section. They are: (1) the interaction principle, (2) the pragmatic principle, (3) the respect principle, and (4) the trust principle. The first two of these principles are associated with the functional nature of our work, or how we approached the development of GoingOK. Our approach has been continuously interactive, embracing a dialogue between the social and the technical, and it has been pragmatically focused on achieving useful outcomes. The latter two principles are associated with the benevolent character of our work, that is, the principles by which we considered the work good. The degree to which the results of our work might be considered good involves consideration of both the extent to which it respected the humanity of stakeholders and their contexts, and the degree to which we facilitated trust in the various interactions between the stakeholders and the socio-technical context. In the following sections we detail the significance of these principles to our work with GoingOK, and their interdependence.

8.3.1  Interaction Principle

The interaction principle involves treating interaction as a primary influencing factor in achieving a desired impact. This is in contrast to focusing on particular aspects like the technology, the people, or a particular activity. Just as focusing on the quality of a relationship benefits both parties more than a focus on each individual party, focusing on interactions can yield benefits for all contributors to the interaction. The interaction principle not only applies to the interaction between the social and technical worlds; it also foregrounds the interactions between authors and the software, and between authors and other stakeholders (like researchers or authority figures with an interest in the reflection process).

8.3.1.1  Tight and Loose Interactions

A core aspect of these interactions with the authors is the degree to which interactions were tightly or loosely coupled. Tightly coupled interactions between teachers and students could negatively affect the level of personal reflection, as students wrote to what they thought the teacher wanted to hear. However, when this coupling was too loose, students might not engage beyond the first ‘trying it out’ reflection. A tight interaction between an author and the software often resulted in the author trusting GoingOK as a ‘listening ear’ and producing deep reflections. By contrast, loose interaction between an author and GoingOK could result in sporadic, superficial reflections.

8.3.1.2  Analytics Interactions

Our use of GoingOK for research projects has meant that analysis of the reflective writing has always been present. However, the nature of this analysis has developed over time as we have grappled with ethical issues about how to most appropriately implement and use the reflective writing analytics. We have found that, like the other interactions mentioned above, there are multiple interactions with the analytics, raising ethical questions like: Who decides what is analysed and how it is analysed? Who gets to see the resultant analytics, and what meaning might they make of them? What do analytics from my reflections say about me? What do analytics from my class (or group) say about me? How do I compare with others in a similar situation? These kinds of questions in the minds of authors can be accompanied by emotional reactions, such as considering the software creepy, being angry about being under surveillance, or being afraid of writing the wrong thing. An important aspect of the interaction principle is that we consider all of these reactions by authors as valid, and we consider how to improve the quality of interactions with analytics so as to ensure that they are beneficial and contribute to improved well-being. This has resulted in considerable restraint in terms of what analytics are accessible within the software by authors. It has also meant that in some projects (like FF) analytics have been mediated by responsible stakeholders, directed to cohorts rather than individuals, and derived via manual human interpretation rather than automated computation.

8.3.1.3  Dialogic Interactions

Dialogue differs from discussion. Dialogue requires an openness to new ideas. Through listening, adapting, and finding agreed ways forward, the resulting learning is richer than what might be possible from a mere discussion that is limited to an exchange of views. Dialogic interactions between us, other researchers, facilitators and authors have informed the socio-technical design decisions, enabled reflections to be situated within meaningful practice, resolved competing stakeholder interests, and underpinned the development of new types of analysis.
Additionally, reflective dialogic interactions have informed the pedagogical processes for using GoingOK, as dialogic reflection can enhance wellbeing by supporting personal and professional development (Pappa et al. 2017). Interlocutors can help authors renew their perspectives and consider new alternatives (Willis et al. 2017). While digital reflection promises efficiency and speed, a commitment to dialogic interactions has instead led to a slower cycle of recursive development. This process of interrogating and revisiting key issues in multiple contexts has made the consistency of shared ethical commitments, such as pragmatic benefit, visible.

8.3.1.4  Socio-technical Interactions

We viewed human stakeholders and technical and social infrastructure as interactive participants in a socio-technical system. This perspective abandons the assumptions of a benign computer-as-tool and active human-as-user view, and considers all elements to be potential contributors to the success or otherwise of the system as a whole.


From the beginning, GoingOK has been conceived and has grown within an interaction between socio-cultural and technical worlds. Our collaboration has brought both of those worlds together, and the interaction between different philosophies has given birth to ideas that would have been unlikely to arise within more traditional silos. Our socio-technical interaction has also provided considerable strength when addressing the various ethical challenges during the development and implementation of GoingOK. Convinced that working in a transdisciplinary way would be most beneficial, we adopted a perspective for design and development that went beyond the more common human-computer interaction paradigm. Addressing ethical issues, then, was more than just a process of software bug-fixing. It involved considering all aspects of the eco-system and identifying which parts might best be changed to address the issue at hand.

8.3.2  Pragmatic Principle

Interactions are somewhat pointless if they don't result in some practical good for those contributing to the interaction. The pragmatic principle addresses this point. C.S. Peirce's pragmatic maxim (Peirce 1905) asserts that the full understanding of a concept is found in its practical effects. For us, GoingOK is best understood in terms of the practical impact it makes on those who use it. Rather than understanding it as software that is bug-free or that performs to a certain level of efficiency or accuracy, we choose to assess GoingOK on the extent to which it makes an impact on the well-being of authors, helps them grow, and provides insights to others who have the authors' best interests at heart. The pragmatic principle ensures that we are continually asking the question: What is the good that we want to see realised, and what does that look like for the authors? This is not a single step in a one-off design. The pragmatic principle, when coupled with the interaction principle, requires us to view GoingOK as dynamic and changing, responsive to those who use it and to the contexts in which it is used. With a traditional approach to technology implementation, the technology remains static and the outcomes vary depending on the users. With the pragmatic principle, the outcome (the beneficial impact) is fixed, and the technology and social context are continually changed to ensure the outcome remains. It is an understanding that benefit doesn't automatically flow from good software, but that it emerges from a dynamic system which is responsive to context.

8.3.3  Respect Principle

While the pragmatic principle ensures our focus on the practical effects or beneficial impact, the respect principle requires us to answer the question: what ensures that this impact is beneficial? First and foremost, this principle ensures that we respect the humanity of the authors as decision makers in their own personal contexts, that we respect their vulnerability in recording their personal thoughts, and that we respect those factors that allow the author to derive improved well-being from their reflective writing. The respect principle ensures that we respect the authors as owners of their data, and that we use the technology to help protect their data and their anonymity, separating the data that hold their personal thoughts from the data that are required merely for administration of the system. However, respecting their anonymity doesn't erase a respect for the situations that authors find themselves in, and does not necessarily resolve ethical issues around helping those who are distressed. When coupled with the interaction principle, however, these potential situations are addressed through other social interactions. Assistance for authors in distress is not provided by an automated technical response service, but by ensuring that authors are aware of the opportunities to seek help if required, through the social resources of GoingOK. This also respects the author's agency in being the owner of their decisions, even if such decisions are related to distress and adversity. The respect principle has led us to minimise the extent to which we intervene in the process of reflection, and when intervention is part of the process (e.g. feedback in learning contexts), it is designed to maximise the potential for well-being and to preserve the other aspects of GoingOK that maintain respect.

8.3.4  Trust Principle

In starting with interaction, seeking out beneficial impact, and respecting the vulnerability of the author, we are in a position to seek the author's trust in using GoingOK to write their personal reflections. The trust principle is not simply the consequence of the other three principles. Trust is difficult to win and easy to lose. The principle of trust ensures that we make trust a core consideration in the design and development of both the social and technical aspects of GoingOK. Coupled with the principle of respect, it necessitates an understanding that trust will exist at differing degrees in different interactions: not just interactions between the author and the software, but also between the author and other stakeholders like teachers or researchers. This has resulted in us taking a cautious, iterative approach to the development of GoingOK. GoingOK began with consideration for the well-being and identity of early career teachers, considering them as co-developers. There was a trust that reflection would benefit participants, so it followed that the technology also should support this core value. However, this trust is easily lost when the technology is assumed to be benign. The earliest versions of GoingOK included user and password management. This meant that GoingOK held all of the authors' data, including passwords, and the authors' control over this data was limited to the functions that GoingOK provided them through the software. In this simple example, the software is active in the management of the author's personal data, not passive as might be assumed. When we made the design decision to use Google for authentication, GoingOK no longer managed any of the author's personal information; it merely kept a connection between the login (Google ID) and the internal representation of the author. The software then further distances the reflective writing from even this small amount of personal information by way of the anonymous id allocated to the author on first use. Our approach to simple administrative procedures such as these highlights the trust principle at work. Recent development of GoingOK documentation5 provides further evidence of how we have put the trust principle to work. Our aim has been to be as transparent as possible about the whole socio-technical system of GoingOK, and to make clear to all stakeholders the centrality of the well-being of the authors.
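The design decision described above, keeping only a link between the external Google ID and an internal representation of the author, with no passwords or emails held, can be sketched like this. The class and method names are illustrative, not GoingOK's actual code:

```python
import uuid

class AuthorRegistry:
    """Stores no passwords or emails: only external-id -> internal-id links."""

    def __init__(self):
        self._external_to_internal = {}

    def login(self, google_subject_id):
        """Resolve a Google-authenticated subject to an internal author id.
        The subject id comes from Google's sign-in, so the system never
        sees a password; the random internal id reveals nothing about
        the person and is what reflections are keyed against."""
        if google_subject_id not in self._external_to_internal:
            self._external_to_internal[google_subject_id] = uuid.uuid4().hex
        return self._external_to_internal[google_subject_id]
```

Delegating authentication keeps the software passive with respect to personal data: the only thing it can leak is a meaningless token.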

8.4  Conclusion

In our ethical analysis of GoingOK, we have identified key ethical challenges that we encountered while endeavouring to promote author well-being through meaningful participation in GoingOK projects, while balancing different stakeholder interests, and while conducting socio-culturally sensitive analysis of the reflective writing data. When reflecting on these challenges, we identified four guiding principles that helped us address them. The functional principles of interaction and pragmatism have resulted in our transdisciplinary work being focused not on the social or technological dimensions of GoingOK alone, but on achieving a practical effect of beneficial impact for the author. We have described GoingOK as a socio-technical system where interaction is foundational rather than a side effect of individual components. These functional principles led us to two benevolent principles, respect and trust, both of which find strength through the interactive character of the work, the socio-technical nature of the system, and the pragmatic focus on the well-being of the author. While we make no claim to have ‘solved’ the various ethical challenges raised, nor to have created a blueprint for others to avoid them, we suggest that the principles we have drawn on in our work may well be relevant for other work beyond reflective writing analytics. Further, we suggest that our conception of GoingOK as a socio-technical system may inspire others who develop interactive digital technologies to take a similar perspective, and in doing so promote the digital well-being of those who interact with them.

5  See http://docs.goingok.org


References

Archer, M.S. 2007. Making Our Way Through the World: Human Reflexivity and Social Mobility. https://doi.org/10.1017/CBO9780511618932.
———. 2012. The Reflexive Imperative in Late Modernity. https://doi.org/10.1017/CBO9781139108058.
Berzonsky, M.D., and K. Luyckx. 2008. Identity Styles, Self-Reflective Cognition, and Identity Processes: A Study of Adaptive and Maladaptive Dimensions of Self-Analysis. Identity 8 (3): 205–219. https://doi.org/10.1080/15283480802181818.
Buchanan, J., A. Prescott, S. Schuck, P. Aubusson, P. Burke, and J. Louviere. 2013. Teacher Retention and Attrition: Views of Early Career Teachers. Australian Journal of Teacher Education 38 (3): n3.
Crosswell, L., J. Willis, C. Morrison, A. Gibson, and M. Ryan. 2018. Early Career Teachers in Rural Schools: Plotlines of Resilience. In Resilience in Education, 131–146. Cham: Springer.
Floridi, L. 2018. Soft Ethics and the Governance of the Digital. Philosophy & Technology 31 (1): 1–8.
Gibson, A. 2017. Reflective Writing Analytics and Transepistemic Abduction. Brisbane: Queensland University of Technology.
Gibson, A., A. Aitken, Á. Sándor, S. Buckingham Shum, C. Tsingos-Lucas, and S. Knight. 2017. Reflective Writing Analytics for Actionable Feedback. In Proceedings of the Seventh International Learning Analytics & Knowledge Conference – LAK ’17, 153–162. https://doi.org/10.1145/3027385.3027436.
Gibson, A., K. Kitto, and P. Bruza. 2016. Towards the Discovery of Learner Metacognition from Reflective Writing. Journal of Learning Analytics 3 (2): 22–36. https://doi.org/10.18608/jla.2016.32.3.
Hallam, C., and G. Zanella. 2017. Online Self-Disclosure: The Privacy Paradox Explained as a Temporally Discounted Balance Between Concerns and Rewards. Computers in Human Behavior 68: 217–227.
Hoban, G., and G. Hastings. 2006. Developing Different Forms of Student Feedback to Promote Teacher Reflection: A 10-Year Collaboration. Teaching and Teacher Education 22 (8): 1006–1019. https://doi.org/10.1016/j.tate.2006.04.006.
Kelly, N., C. Sim, and M. Ireland. 2018. Slipping Through the Cracks: Teachers Who Miss Out on Early Career Support. Asia-Pacific Journal of Teacher Education 46 (3): 292–316.
Lengelle, R., T. Luken, and F. Meijers. 2016. Is Self-Reflection Dangerous? Preventing Rumination in Career Learning. Australian Journal of Career Development 25 (3): 99–109. https://doi.org/10.1177/1038416216670675.
Lupton, D., and B. Williamson. 2017. The Datafied Child: The Dataveillance of Children and Implications for Their Rights. New Media & Society 19 (5): 780–794.
Mäeots, M., L. Siiman, K. Kori, and M. Pedaste. 2016. Relation Between Students’ Reflection Levels and Their Inquiry Learning Outcomes. 5558–5564. https://doi.org/10.21125/edulearn.2016.2324.
Murphy, F., and F. Timmins. 2009. Experience Based Learning (EBL): Exploring Professional Teaching Through Critical Reflection and Reflexivity. Nurse Education in Practice 9 (1): 72–80. https://doi.org/10.1016/j.nepr.2008.05.002.
Murray, T., B. Woolf, E. Katsh, L. Osterweil, L. Clarke, and L. Wing. 2014. Cognitive, Social and Emotional Support (62) [NSF SoCS Award #0968536]. Massachusetts: University of Massachusetts.
Nerken, I.R. 1993. Grief and the Reflective Self: Toward a Clearer Model of Loss Resolution and Growth. Death Studies 17 (1): 1–26. https://doi.org/10.1080/07481189308252602.
Pappa, S., J. Moate, M. Ruohotie-Lyhty, and A. Eteläpelto. 2017. Teachers’ Pedagogical and Relational Identity Negotiation in the Finnish CLIL Context. Teaching and Teacher Education 65: 61–70.
Peirce, C.S. 1905. What Pragmatism Is. The Monist 15 (2): 161–181.
Pennebaker, J.W. 2010. Expressive Writing in a Clinical Setting. 3.


Pennebaker, J.W., and J.D. Seagal. 1999. Forming a Story: The Health Benefits of Narrative. Journal of Clinical Psychology 55 (10): 1243–1254. https://doi.org/10.1002/(SICI)1097-4679(199910)55:103.0.CO;2-N.
Pryor, J., and B. Crossouard. 2008. A Socio-Cultural Theorisation of Formative Assessment. Oxford Review of Education 34 (1): 1–20.
Slade, S., and P. Prinsloo. 2013. Learning Analytics: Ethical Issues and Dilemmas. American Behavioral Scientist 57 (10): 1510–1529. https://doi.org/10.1177/0002764213479366.
Somerville, M. 2006. The Ethical Imagination: Journeys of the Human Spirit. Melbourne: Melbourne University Press.
Tsingos-Lucas, C., S. Bosnic-Anticevich, C.R. Schneider, and L. Smith. 2016. The Effect of Reflective Activities on Reflective Thinking Ability in an Undergraduate Pharmacy Curriculum. American Journal of Pharmaceutical Education 80 (4): 65. https://doi.org/10.5688/ajpe80465.
White, S. 2019. Recruiting, Retaining and Supporting Early Career Teachers for Rural Schools. In Attracting and Keeping the Best Teachers, 143–159. Springer.
Willis, J., L. Crosswell, C. Morrison, A. Gibson, and M. Ryan. 2017. Looking for Leadership: The Potential of Dialogic Reflexivity with Rural Early-Career Teachers. Teachers and Teaching 23 (7): 794–809. https://doi.org/10.1080/13540602.2017.1287695.

Andrew Gibson is a Researcher in Decision Science and Information Interaction. He is a Lecturer in the Information Systems School, Science and Engineering Faculty, at Queensland University of Technology, Brisbane, Australia. Andrew’s research centres on the interaction between people and technology. He focuses on the philosophy of cognition through transepistemic abduction, classical pragmatism, and ethical considerations of socio-technical systems. He also investigates the application of these ideas through reflective writing analytics, particularly in the field of Learning Analytics.
Andrew’s research is also influenced by his prior experience in technology development and management and high school music teaching. Andrew works in a long-term collaboration with Dr Jill Willis, investigating the pragmatic benefits of reflective writing analytics for learners and teachers through the dialogic interactions between computational and socio-cultural perspectives. A key aim of their work is to understand what makes ‘good outcomes’ for learners and how to facilitate these through reflective writing analytics. Research Interests: Transepistemic Abduction, Information Interaction, Reflective Writing Analytics, Classical Pragmatism and Learning Analytics. [email protected]

Jill Willis is a researcher in educational assessment and evaluation. She is Associate Professor in the Faculty of Education and a Researcher in the Centre for Inclusive Education at Queensland University of Technology, in Brisbane, Australia. Her current research focuses on how self-assessment feedback loops inform personal agency and system change in educational contexts. She specialises in qualitative work that highlights the complexity of teacher and student interactions in Assessment for Learning. Jill works in a long-term transdisciplinary collaboration with Dr Andrew Gibson, investigating the pragmatic benefits of reflective writing analytics for learners and teachers. Together they aim to identify principles and practices that bring sociocultural and computational fields together, to achieve good outcomes for learners. Jill draws from her pedagogical expertise as a secondary school teacher, middle leader and preservice teacher educator. She specialises in collaborative research with international and industry partners to address real-world problems and generate research evidence that informs practice, policy and theory. Research Interests: Assessment for Learning, Participatory Evaluation, Assessment Capability and Digital Feedback Loops. [email protected]

Chapter 9

Big Data and Wellbeing: An Economic Perspective Clement Bellet and Paul Frijters

Abstract  This article provides a general review and discussion of the debate surrounding Big Data and wellbeing. We ask four main questions: Is Big Data very new or very old? How well can we now predict individual and aggregate wellbeing with Big Data, and to what extent do novel measurement tools complement survey-based measures? Is Big Data responsible for the rising interest in wellbeing or a threat to it? What are the economic and societal consequences of Big Data, and is there a point to government regulation of ownership, access, and consent?

Keywords  Well-being · Big data · Privacy · Happiness economics

9.1  Quo Vadis?

The availability of information has increased dramatically over the last decades, with roughly a doubling in the market for data storage every 2 years.1 The main driver of this has been the spectacular reduction in the costs of gathering and transferring information: cheaper computer chips and faster computers have followed Moore's law since the 1970s. As a result, there are now billions of databases on the

This article was initially prepared as a contribution to the World Happiness Report 2019. We thank Elizabeth Beasley, Aurélien Bellet, Pascal Denis and John F. Helliwell for their useful discussions and comments.

1  See https://www.statista.com/statistics/751749/worldwide-data-storage-capacity-and-demand/

C. Bellet (*) Erasmus School of Economics, Erasmus University Rotterdam, Rotterdam, The Netherlands e-mail: [email protected] P. Frijters London School of Economics, London, UK © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020 C. Burr, L. Floridi (eds.), Ethics of Digital Well-Being, Philosophical Studies Series 140, https://doi.org/10.1007/978-3-030-50585-1_9


planet with all kinds of information, including lists of genetic markers, inventories, pictures, topography, surveillance videos, administrative datasets and others.2

The amount of data collected on individuals is baffling. For instance, whilst it was reported in 2014 that there were thousands of "data brokering" firms buying and selling information on consumers, with the biggest such company, Acxiom, alone already holding an average of 1,500 pieces of information on 200 million Americans, today the amount is at least four times higher. As for Google queries, they went from 14 billion per year in 2000 to 1.2 trillion a decade later.

The main business model that pays for the collection and analysis of all this data is advertising: Internet companies and website hosts now sell personalised advertising space in a spot market, an industry worth around 250 billion dollars a year. There is also a smaller market for information about individuals: professional "data broker" firms specialise in collecting data on individuals around the world, selling it to all and sundry. This includes their creditworthiness and measures of their Internet-related activities. Firms are getting increasingly good at matching records from different sources, circumventing privacy laws and guessing the identity behind de-personalised information by cross-referencing financial transactions and recurrent behaviour.

Academic articles and books on these developments are now plentiful. The term used to describe this data explosion and its Big Brother type uses, "Big Data", was cited 40,000 times in 2017 in Google Scholar, about as often as "happiness"! This data explosion was accompanied by the rise of statistical techniques coming from the field of computer science, in particular machine learning.
The latter provided methods to analyse and exploit these large datasets for prediction purposes, justifying the accumulation of increasingly large and detailed data.3

The term Big Data in this article will refer to large datasets that contain multiple observations of individuals.4 Of particular interest is the data gathered on individuals without their "considered consent". This will include all forms of data that one could gather, if determined, about others without their knowledge, such as visual information and basic demographic and behavioural characteristics. Other examples are Twitter, public Facebook posts, the luminescence of homes, property, etc.

Is this information used to say something about wellbeing, i.e. life satisfaction? How could it be used to affect wellbeing? And how should it be used? These questions concerning Big Data and wellbeing - where we are, where we could go, and where we should go - will be explored in this article.

2  Increased internet usage and access have been major drivers of data collection and accessibility: internet users worldwide went from less than 10% of the world population to more than 50% today, with major inequalities across countries.
3  For a review of such methods and how they can complement standard econometric methods, see Varian (2014).
4  We define Big Data as large-scale, repeated and potentially multi-sourced information on individuals, gathered and stored by an external party with the purpose of predicting or manipulating choice behaviour, usually without the individuals reasonably knowing or controlling the purpose of the data gathering.


In the first Section we give a brief history of Big Data and make a broad categorization of all available forms of Big Data and what we know about their usages.

In the second Section we ask how well different types of data predict wellbeing, what the potential use is of novel measurement instruments, and what the most promising forms of data are to predict our individual wellbeing. We will also look at the question of what the likely effects are of the increased use of Big Data to influence our behaviour. This includes how useful information on wellbeing itself is to governments and businesses.

In the third Section we then review the agency issues surrounding Big Data and wellbeing: who is in control of this data and what future usage is desirable? How important is considered consent when data usage agreements for commercial purposes become either the default option or a requirement to access services provided by Internet companies?

To illustrate the review, we augment the article with Twitter data from Mexico and draw on the 2018 WHR calculations from the Gallup World and Daily Polls, and other major data sources. We do this in particular to discuss how much of wellbeing one can explain with the types of information currently available in the public domain.

9.2  Big Data: A Brief History

Before the advent of writing and during the long hunter-gatherer period, humans lived in fairly small groups (20–100 people) who knew each other well. Gathering data on those around them, particularly their emotional state, was necessary and normal, as one can glean from humanity's empathic abilities and the varied ways in which faces and bodies communicate internal lives to others. It might not have been recorded on USB drives, but the most intimate details would have been the subject of gossip and observation within the whole group with which humans lived. It would have been vital to know about others' abilities, health, likes and dislikes, and kinship relations. All that shared data would now have to be called something like "distributed Big Data".

Then came large agricultural hierarchies and their need to control populations, leading to systems of recording. The Sumerian script is the oldest known system of writing, going back at least 6000 years, and one of its key uses was to keep track of the trades and taxes of those early kingdoms: the business of gathering taxes needed records on who had paid how much and who was yet to pay. One might see the hundreds of thousands of early clay tablets of the Sumerian accountants as the first instance of "Big Data": systematic information gathered to control and manipulate a population.

Some 4000 years ago, in both Egypt and China, the first population censuses were held, recording who lived where and how much their wealth was, with the express purpose of supporting the tax ambitions of the courts of those days. A census was the way to know how much individuals, households, villages, and whole regions could be taxed, both in terms of produce and labour time. The key initial use of Big Data was simply to force individuals into paying taxes. The use of a census


to measure and tax a population has stayed with humanity ever since, from the regular censuses of the Romans and the Domesday Book ordered by William the Conqueror in Britain in 1086, up to the present day, where censuses are still held in many countries. The modern countries that do not have a census usually have permanent population records, an even more sophisticated form of Big Data. The Bible illustrates these early and still dominant uses of Big Data: the book of Genesis lists the genealogy of the tribe, important for matters of intermarriage and kinship claims; and the book of Exodus mentions the use of a population census to support the tabernacle.

Courts and governments were not the only gatherers of Big Data with an interest in recording and controlling the population. Organised religion and many secular organisations collected their own data. Medieval churches in Europe collected information on births, christenings, marriages, wills, and deaths. Partly this was in order to keep track of the daily business of a church, but it also served the purposes of taxation: the accounts were a means of counting the faithful and their wealth. Medieval universities also kept records, for instance of who had earned what qualification, because that is what they sold and they needed to keep track of their sales. As with churches, universities also had internal administrations where they kept track of their possessions, loans, debts, "the academic community", teaching material, etc.

With the advent of large corporations came totally different data, connected to the need to manage long-run relations with many employees: records on the entitlements and behaviour of employees, alongside identifying information (where they could be found, next of kin, etc.).
These records were held to allow a smooth operation of organisations and were subsequently used as the basis of income taxation by governments, a good example of where the Big Data gathered by one entity (firms) gets to be used by another (a tax authority) for totally different purposes.

What has been said above can be repeated for many other large organisations throughout the ages: they kept track of the key information they needed to function. Traders needed to keep track of their clients and suppliers. Hospitals and doctors needed to keep track of ailments and individual prescriptions. Inns needed to keep track of their guests. Towns needed to keep track of their rights versus other authorities. Ideologies needed to keep track of actual and potential supporters. And so on.

There is hence nothing unusual about keeping records on individuals and their inner lives, without their consent, for the purposes of manipulation. One might even say that nothing on the Internet is as invasive as the monitoring that is likely to have been around before the advent of writing, nor is anything on the Internet more manipulative than the monitoring of large empires that pressed their populations into taxes, wars, and large projects (like building the pyramids). Big Data is thus ancient. There is just a lot more of it nowadays that is not run and owned by governments, and an incomparably stronger capacity to collect, classify, analyse, and store it, due to the more recent rise in computer power and the rapid development of computer science.

In the present day, governments are still large producers and consumers of Big Data, usually without the consent of the population. The individual records are kept in different parts of the government, but in Western countries they usually include births, marriages, addresses, emails, fingerprints, criminal records, military service


records, religion, ethnicity, kinship relations, incomes, key possessions (land, housing, companies), and of course tax records. What is gathered and which institution gathers the data varies by country: whereas in France the data is centrally gathered in a permanent population record and it is illegal to gather data on religion and ethnicity, in the US the various bits of data are gathered by different entities and there is no problem in measuring either religion or ethnicity.5

Governments are also in the business of analysing, monitoring, and manipulating our inner lives. This is a well-understood part of the social contract and of the socialisation role of education, state media, military service, national festivities or national ceremonies: successful countries manage to pass on their history, values and loyalties to the next generation (Frijters and Foster 2013). Big Data combined with specific institutions surrounding education, information, taxation or the legal system is then used to mould inner lives and individuals' identities. Consent in that process is ex post: once individuals are "responsible citizens" they can have some say about this in some countries, but even then only to a limited degree, because opting out is often not an option.

In the Internet age, the types and volume of data are truly staggering, with data gathered and analysed for many purposes, usually profit-motivated. The generic object is to get a consumer to click on a website, buy a service, sign some document, glance in some direction, vote some way, spend time on something, etc. A few examples illustrate the benefits and dangers.

Supermarket chains now gather regular scanner and card data on the sales to their customers.6 Partly in order to improve the accuracy of their data, they have loyalty programs where customers get discounts in exchange for private information that allows the supermarkets to link names and addresses to bank cards and other forms of payment.
As a result, these companies have information on years of purchases by hundreds of millions of households. One use of that data has been to support "just in time" delivery to individual stores, reducing the necessity for each store to have large and expensive warehouses where stocks are held, making products cheaper. That system requires supermarkets to fairly accurately predict what the level of sales will be for thousands of products in stock, which not merely needs good accounting of what is still in stock, but also good forecasting of future demand, which requires sophisticated analysis of previous sales. Hence supermarkets know with near-perfect accuracy how much extra ice cream they will sell in which neighbourhood if the weather gets warmer, and just how many Easter eggs they will sell at what discounted price. One might see this use of Big Data as positive: efficiency improving.

Then there is the market for personalised advertising, also called behavioural targeting. On the basis of their internet-observable history, which will often include

5  The question of ethnicity-based statistics is an interesting instance where Big Data is sometimes used to circumvent legal constraints, for instance by predicting ethnicity or religion using information on first names in French administrative or firm databases (Algan et al. 2013).
6  This data is now partly available to researchers and has led to numerous studies, for instance using the Nielsen consumer panel and scanner data in the United States.


their social communication on the internet (including on their mobile phone device), it is predicted what advertising is most likely to work on them. Personalised advertising is then sold on a spot market, leading to personalised recommendations (based on one's previous purchases), social recommendations (what similar people bought), and item recommendations (what the person just sought). Hildebrandt (2006) typified the key aspect of this market when she said "profiling shifts the balance of power between those that can afford profiling (...) and those (...) profiled".

This advertising market is enormous and has grown fast. Paid media content was reportedly worth over 500 billion dollars in 2017, and digital advertising some 230 billion, according to industry estimates. The business model of many internet firms is to offer services for free to anyone in the world, funded by the ads attracted to the traffic on that site. The grand bargain of the Internet is thus free services in exchange for advertising. This is both well-understood and well-known, so one could say that this bargain is made under conditions of considered consent: users of free services (like Facebook) should know that the price of those services is that their personal information is sold for advertising purposes.

There is also a market for more invasive information, where access to goods and services is decided on the basis of that information. An old example from before the internet was credit-worthiness information, which could be bought off banks and other brokers. This was of course important when it came to large purchases, such as a house or setting up a new business. A good modern example is personalised information on the use of online health apps. Individuals visiting free online health apps which give feedback on, for instance, how much someone has run and where, are usually asked to consent to the sale of their information.
That information is very useful to, for instance, health insurance companies interested in knowing how healthy someone's behaviour is. Those health insurance companies will look more favourably on someone known to have a fit body, not buy large volumes of cigarettes and alcohol online, and have a generally considered and healthy lifestyle. It is thus commercially important for health insurance companies to buy such data, and not really an option to ignore it.

This example also shows the ambiguity involved in both consent and the option of staying "off the grid": it is unlikely that everyone using health apps realises the potential uses of the data they are handing over, and it is not realistic to expect them to wade through hundreds of pages of detailed consent forms wherein all the potential uses would be spelled out. Also, someone who purposefully stays "off the grid", and either actively hides their online behaviour via specialised software or is truly not online at all, will not be unaffected by health profiling activities, for the very reason that there is then no profile of them. To a health insurance company, the lack of information is also informative, and likely to mean that person has something to hide. Hence, even someone actively concerned with leaving no digital footprints and having very limited data on them online will be affected without their consent by the activities of others.

Privacy is very difficult to maintain on the Internet because nearly all large internet-site providers use a variety of ways to identify who accesses their websites and what their likely interests are. Websites use cookies, JavaScript, browser


fingerprinting, behavioural tracking, and other means to know, the moment a person clicks on a website, who that person is and what they might want. What helps these websites is the near-uniqueness of the information that a website receives when it is accessed: the IP address, the browser settings, the recent search history, the versions of the programs used, and the presence of a variety of added software (Flash, Javascript, cameras, etc.). From that information, internet sites can usually know who has accessed them, which can then be matched to previous information kept on that IP address, bought and sold in a market. Only very Internet-literate individuals can hope to remain anonymous.

The fact that the main use of Big Data on the Internet is to aid advertising should also be somewhat reassuring for those who fear the worst about Big Data: because the advertising market is worth so much, large internet companies are careful not to sell their data for purposes that the population would strongly disapprove of, whether those purposes are legal or not. It is for instance not in the interest of eBay, Apple, Google, or Samsung to sell information about the porn-viewing habits of their customers to potential employers and romantic partners. These uses are certainly worth something, and on the "Dark Web" (the hidden part of the internet not indexed by standard search engines) such information can reportedly indeed be bought and sold, but for the "legitimate" part of the market, there is just too much to lose.

How does this relate to wellbeing?

9.3  The Contribution of Big Data to Wellbeing Science

Mood analysis is very old, with consumer and producer sentiment recorded in many countries since the 1950s because it predicts economic cycles well (Carroll et al. 1994). However, the analysis of the wellbeing of individuals and of aggregate wellbeing is starting to take off as more modern forms of mood analysis develop. These include counting the positive/negative affect of words used in books or any written documents (e.g. Linguistic Inquiry and Word Count); analysis of words used in Twitter feeds, Facebook posts, and other social media data through more or less sophisticated models of sentiment analysis; and outright opinion and election polling using a variety of tools (mobile phones, websites, apps). New technologies include Artificial Intelligence analysis of visual, olfactory, sensory, and auditory information. They also include trackable devices that follow individuals around for large parts of the day, and sometimes even 24/7, such as Fitbits, mobile phones or credit cards.

One may first wonder whether "Big Data" can improve wellbeing predictions and help solve what economists have called "prediction policy problems" (Kleinberg et al. 2015).
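As an illustration of the word-counting approach just described, the sketch below implements a minimal dictionary-based affect scorer. The word lists are toy stand-ins invented for this example, not the proprietary LIWC dictionaries:

```python
import re

# Toy affect dictionaries (illustrative stand-ins, not the LIWC lists).
POSITIVE = {"happy", "great", "love", "good", "wonderful", "enjoy"}
NEGATIVE = {"sad", "terrible", "hate", "bad", "awful", "angry"}

def affect_score(text):
    """Return (positive share, negative share) of the words in `text`."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0, 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos / len(words), neg / len(words)

print(affect_score("What a wonderful day, I love this!"))
```

Real systems use validated dictionaries with thousands of entries, negation handling and per-category norms, but the underlying mechanics are essentially this word-share computation applied to tweets, posts or book text.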


9.3.1  Predictability of Individual and Aggregate Wellbeing: A Benchmark

Some forms of Big Data will trivially explain wellbeing exceedingly well: social media posts that inform friends and family of how one feels are explicitly meant to convey information about wellbeing, and will thus have a lot of informational content about wellbeing for all those with access. Claims that social media can predict our wellbeing exceedingly well thus need not be surprising at all, for that is often the point of social media. Nevertheless, it is interesting to have some sense of how much wellbeing can be deduced from the average individual, which is equivalent to asking how much wellbeing is revealed by the average user of social media. A similar question arises concerning medical information about individuals: very detailed medical information, which includes assessments of how individuals feel and think about many aspects of their lives, will also explain a lot of their wellbeing and may even constitute the best measures available. Yet how much one would on average learn from typical medical records remains an interesting question.

In order to have some comparison, we first document how available datasets that include direct information on wellbeing reveal the potential of different types of information to predict wellbeing. We take the square of the correlation coefficient (R2) as our preferred indicator of predictability. Andrew Clark et al. (2018) run typical life satisfaction regressions for the United Kingdom, with comparisons for Germany, Australia, and the United States. The main finding is that the R2 does not typically go beyond 15%, and even reaching that level requires not just socio-demographic and economic information (income, gender, age, family circumstances, wealth, employment, etc.) but also indicators of physical and mental health, both measured using subjective questions.
Using the US Gallup Daily Poll, we show in the Online Appendix that the same relationship holds there too. The relatively low predictability of life satisfaction at the individual level has long been a known stylised fact in the literature, with early reviews found for instance in the overview book by Argyle et al. (1999), where Argyle also notes the inability of regularly available survey information to explain more than 15% of the variation in life satisfaction (largely based on World Value Survey data). Generally, wellbeing is poorly predicted by information from regular survey questions, but health conditions appear to be the most reliable predictors of wellbeing. The availability of administrative datasets capturing the health conditions of an entire population - for instance via drug prescriptions - suggests health may be the best proxy available to predict wellbeing in the future (see also Deaton 2008). Clark et al. (2018) find that mental health explains more variation in wellbeing than physical health does, also a typical finding, which we replicate for the United States (see Online Appendix).

What about variation in aggregate wellbeing? Helliwell et al. (2018) looked at how well differences in average wellbeing across countries over time can be explained by observable average statistics. They showed a typical cross-country


regression wherein 74% of the variance could be explained by no more than a few regressors: GDP per capita, levels of social support, life expectancy, an index of freedom, an index of generosity, and an index of perceptions of corruption. In this article, we also find that the strongest moves up and down were due to very plausible and observable elements: the collapsing economy of Venezuela showed up in a drop of over two points in life evaluation from 2008–2010 to 2015–2017, whilst the recovery from civil war and Ebola in Sierra Leone led it to increase life satisfaction by over a point. Hence country variations are strongly predictable. We did our own calculations with the same Gallup Daily dataset (in the Online Appendix) and also found we could explain even higher levels (90%) of the variation between US states if one added self-reported health indicators to this set.

Predictability of aggregate wellbeing thus differs strongly from individual wellbeing and has different uses. Predicting aggregate wellbeing can be useful if individual measures are unavailable, for instance due to wars or language barriers. When individuals originate from various countries, wellbeing predictions based on standardized variables capturing income, jobs, education levels or even health are about half as powerful as within-country predictions (see Online Appendix, but also, for instance, Claudia Senik (2014) on the cultural dimensions of happiness). This is true both for individual-level and country-level predictability.7 This suggests socio-economic and demographic factors affect subjective wellbeing in very different ways across cultures and countries at various levels of economic development.

The use of alternative sources of Big Data, like content analysis of tweets, does not necessarily help. In their research, Laura Smith and her co-authors ask whether wellbeing 'translates' on Twitter (Smith et al. 2016).
They compare English and Spanish tweets and show that translation across languages leads to meaningful losses of cultural information. There is also strong heterogeneity across wellbeing measures. For instance, at the individual level, experienced feelings of happiness are better predicted than reported satisfaction with life as measured by the Cantril ladder. The opposite holds for country-level regressions.
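To make the R2 benchmark concrete, the following sketch uses synthetic data only; the noise level is an invented calibration chosen to mimic the roughly 15% individual-level ceiling discussed above, and shows how predictability with a single regressor reduces to a squared correlation:

```python
import random

random.seed(1)

def r_squared(x, y):
    """Squared correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov * cov / (vx * vy)

# Simulated survey: life satisfaction depends weakly on one observed
# regressor (say, standardised income) plus much idiosyncratic noise.
n = 10_000
income = [random.gauss(0, 1) for _ in range(n)]
satisfaction = [x + random.gauss(0, 2.3) for x in income]

r2 = r_squared(income, satisfaction)
print(round(r2, 2))
```

The printed R2 is governed mechanically by the signal-to-noise ratio, here var(signal)/(var(signal)+var(noise)) = 1/(1 + 2.3^2), or about 0.16; raising the noise variance lowers predictability no matter how large the sample.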

9.3.2  Can Big Data Improve Wellbeing Predictions?

Standard socio-demographic variables, especially the health conditions of a population, can generate high wellbeing predictability, at least at a more aggregate level. However, with the rise of digital data collection and improvements in machine learning and textual analysis techniques, alternative sources of information can now be exploited. Standard survey-based measures of happiness could then be used to train prediction models relying on "Big Data" sources, hence allowing for a finer analysis across time and space of the determinants of wellbeing.8 Table 9.1 reviews the main studies that have tried to predict life satisfaction or happiness from

7  The Gallup World Polls survey 1,000 individuals each year in 166 countries.
8  For a discussion, see Schwartz et al. (2013).


Table 9.1  Can big data predict wellbeing? Review of R2 coefficients across studies

Individual-level predictions:

Reference | SWB measure | SWB source | Big Data measure | Big Data source | Unit of analysis | Sample size | R2
Collins et al. (2015) | Life satisfaction | myPersonality.org | Status updates | Facebook | Facebook users | 3505 | 0.02
Kosinski et al. (2013) | Life satisfaction | myPersonality.org | Type of Facebook pages liked | Facebook | Facebook users | 3505 | 0.028
Liu et al. (2015) | Life satisfaction | myPersonality.org | Status updates, positive emotions | Facebook | Facebook users | 1124 | 0.003
Liu et al. (2015) | Life satisfaction | myPersonality.org | Status updates, negative emotions | Facebook | Facebook users | 1124 | 0.026
Schwartz et al. (2016) | Life satisfaction | myPersonality.org | Topics and lexica from status updates | Facebook | Facebook users | 2198 | 0.09

Aggregate-level predictions:

Reference | SWB measure | SWB source | Big Data measure | Big Data source | Unit of analysis | Sample size | R2
Algan et al. (2019) | Life satisfaction | Gallup | Word searches | Google Trends | US weekly time series | 200 | 0.760
Algan et al. (2019) | Happiness | Gallup | Word searches | Google Trends | US weekly time series | 200 | 0.328
Collins et al. (2015) | Life satisfaction | myPersonality.org | Average size of personal network | Facebook | LS bins | 31 | 0.7
Collins et al. (2015) | Life satisfaction | myPersonality.org | Average number of status updates | Facebook | LS bins | 31 | 0.096
Collins et al. (2015) | Life satisfaction | myPersonality.org | Average number of photo tags | Facebook | LS bins | 31 | 0.348
Hills et al. (2017) | Life satisfaction | Eurobarometer | Words | Google Books | Yearly panel of 5 countries | 200 | 0.25
Schwartz et al. (2013) | Life satisfaction | Gallup | Topics and lexica from tweets | Twitter | US counties | 3IKK1 | 0.094

Notes: This table lists the main studies that have tried to predict survey responses to life satisfaction or happiness questions from alternative Big Data sources. The information collected is extracted from digital footprints left by individuals when they go online or engage with social media networks

9  Big Data and Wellbeing: An Economic Perspective


alternative Big Data sources. The information collected is extracted from digital footprints left by individuals when they go online or engage with social media networks. In this section, we focus on studies proposing the construction of new measures of wellbeing based on how well they can predict reported happiness and life satisfaction. Hence, Table 9.1 does not reference articles that have used NLP and other computerised text analysis methods for the sole purpose of eliciting emotional content, which we discuss in the next section.
Quite surprisingly, the classical issue of generally low predictability of individual-level satisfaction remains. The clearest example is a study by Kosinski and his co-authors that looks at how predictive Facebook users' page likes are of various individual traits and attributes, including their wellbeing (Kosinski et al. 2013). Life satisfaction ranks at the bottom of the list in terms of how well it can be predicted, with an R-squared of only 2.8%. This does not mean predictive power cannot be improved by adding further controls, but it provides a reasonable account of what should be expected. Strikingly, alternative studies using sentiment analysis of Facebook status updates find similarly low predictive power, from 2% of between-subjects variance explained to a maximum of 9% (see Collins et al. 2015; Liu et al. 2015; Schwartz et al. 2016). These differences are explained by the measure of wellbeing being predicted and the model used. Research has also shown that positive emotions are not significantly correlated with life satisfaction on Facebook, contrary to negative emotions (Liu et al. 2015). This suggests social pressure may incite unhappy individuals to pretend they are happier than they really are, which is less likely to be the case for the display of negative emotions. However, once aggregated, measures extracted from social networks' textual content have a much stronger predictability.
A measure of status updates which yields a 2% R-squared in individual-level regressions yields a five times bigger coefficient, close to 10%, when looking at life satisfaction bins (see Collins et al. 2015). Alternative measures have a much higher predictability, like the average number of photo tags (70%) or the average size of users' network of friends (35%). Looking at a cross-section of counties in the United States, research by Schwartz and co-authors finds that the topics and lexica from tweets explain 9.4% of the variance in life satisfaction between counties (Schwartz et al. 2013). Predictability improves to 28% after including standard controls, as shown in Fig. 9.1, which maps county-level life satisfaction from survey data along with county-level life satisfaction predicted using tweets and controls. This coefficient remains relatively low, which may again be due to the manipulability of positive emotions in social networks. Research using the emotional content of words in books led to higher predictability for life satisfaction (Hills et al. 2017). Using a sample of millions of books published over a period of 40 years in five countries, researchers find an R-squared of 25%, which is similar to the predictive power of income or employment across countries in the Gallup World Polls. But the strongest predictability comes from a paper by Yann Algan, Elizabeth Beasley and their co-authors, who showed that daily variation in life satisfaction in the US could be well predicted (around 76%) by Google Trends data on the frequency with which individuals looked for positive terms to do with work, health, and family (Algan et al. 2019). Figure 9.2 illustrates


Fig. 9.1  County-level life satisfaction, survey-based measures vs. predicted from tweets. (Source: Schwartz et al. (2013). The Figure shows county-level life satisfaction (LS) as measured (a) using survey data and (b) as predicted using our combined model (controls + word topics and lexica). Green regions have higher satisfaction, while red have lower. White regions are those for which the language sample or survey size is too small to have valid measurements. No counties in Alaska met criteria for inclusion; r = 0.535, p < 0.001.)

Fig. 9.2  Gallup Daily Polls life satisfaction vs. estimated life satisfaction from Google Trends. (Source: Algan et al. (2019). The graph shows the estimates (with confidence intervals) for weekly life satisfaction at the US level, constructed using US Google search levels, in red, alongside estimates from the benchmark (seasonality only) model in yellow and the Gallup weekly series in blue. Confidence intervals are constructed using 1000 draws. Training data is inside the red lines, and testing data is outside the red lines.)


these results. The authors find a lower predictability of experienced happiness (about 33%). A clear disadvantage of this method, though, is that these results would not easily carry over to a different time-frame or a different language. The authors also use standard regression analysis, while the use of machine learning models (like Lasso regressions) can greatly improve out-of-sample prediction in such cases.
Sentiment analysis via Twitter and other searchable Big Data sources may thus lead to a greater ability to map movements in mood, both in the recent past and geographically. The ability to past-cast and now-cast life satisfaction via Google search terms and various other forms of available Big Data may similarly improve our understanding of wellbeing in the recent past and across areas. This increased ability to predict current and previous levels of mood and life satisfaction might prove very important for research, as it reduces the reliance on expensive large surveys. One might start to see papers and government evaluations using derived measures of mood and life satisfaction, tracking the effects of local changes in policy or exogenous shocks, as well as their effects on other regions and times. This might be particularly useful when it comes to social multipliers of events that only directly affect a subset of the population, such as unemployment or identity-specific shocks.
The increased ability to measure current levels of mood and life satisfaction, both at the individual and aggregated level, can also be used for deliberate manipulation: governments and companies can target the low mood / life satisfaction areas with specific policies aimed at those communities (e.g. more mental health help or more early childcare facilities). Opposition parties might deliberately 'talk down' high levels of life satisfaction and blame the government for low levels. Advertisers might tailor their messages to the mood of individuals and constituents.
In effect, targeting and impact analyses of various kinds should be expected to improve.
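The out-of-sample gains from penalised regression mentioned above can be sketched in a few lines. Everything below is simulated: the "search frequencies" and the "survey" life-satisfaction series are random draws, and the sample sizes and variable names are purely illustrative, not those of Algan et al. (2019).

```python
# Sketch: predicting weekly life satisfaction from search-term frequencies
# with a Lasso, evaluated out of sample. All data are simulated.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

n_weeks, n_terms = 200, 50                 # illustrative sizes
X = rng.normal(size=(n_weeks, n_terms))    # standardised search frequencies
beta = np.zeros(n_terms)
beta[:5] = [0.8, -0.6, 0.5, -0.4, 0.3]     # only a handful of terms matter
y = X @ beta + rng.normal(scale=0.5, size=n_weeks)  # "survey" life satisfaction

# Fit on the first 150 weeks, evaluate on the held-out 50
model = Lasso(alpha=0.05).fit(X[:150], y[:150])
r2 = r2_score(y[150:], model.predict(X[150:]))
print(f"out-of-sample R2: {r2:.2f}")
```

The point of the Lasso's penalty is that it zeroes out most of the irrelevant terms, which is what keeps out-of-sample prediction honest when there are many candidate search terms relative to weekly observations.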

9.3.3  Big Data as a Complement to Survey-Based Wellbeing Measures

Even if mood extracted from social networks may not fully match variation in survey-based measures of life satisfaction or happiness, it often allows for much more detailed analysis of wellbeing at the daily level, or even within days. A good example of how massive data sources allow a fuller tracking of the emotional state of a population is given by large-scale Twitter data on Mexico, courtesy of Gerardo Leyva, who kindly allowed us to use the graphs in Fig. 9.3 based on the work of his team. Sub-Figure (a) shows how the positive/negative ratio of words varied from day to day in the 2016–2018 period. One can see the large positive mood swings on particular days, like Christmas 2017 or the day that Mexico beat Germany in the Football World Cup 2018, and the large negatives, like the earthquake in 2017, the loss in the World Cup against Brazil, or the election of Donald Trump in the 2016 US Election.


Fig. 9.3  Mood from Tweets in Mexico. (We thank Gerardo Leyva from Mexico’s National Institute of Statistics and Geography (INEGI) for generously sharing these slides, which were


Sub-Figure (b) shows how the mood changes minute-by-minute during the football match against Germany, with ups when Mexico scores and at the end of the match. The main take-aways from these Figures are that one gets quite plausible mood-profiles based on an analysis of Twitter data, and that individual events are quite short-lived in terms of their effect on Twitter-mood: the variation is dominated by the short run, making it hard to say what drives the longer-run variation that you also see in this data. This high daily variability in mood also shows the limits of its usefulness in driving policy or understanding the long-run level of wellbeing in Mexico.
Another example of the usefulness of alternative metrics of wellbeing extracted from Big Data sources can be found in recently published research by Borowiecki (2017). The author extracts negative and positive mood from a sample of 1,400 letters written by three famous music composers (Mozart, Beethoven and Liszt). It provides an interesting application of Linguistic Inquiry and Word Count (LIWC) to the question of whether wellbeing determines creative processes. The research leverages historical panels of the emotional state of these composers over nearly their entire lifetimes, and shows that poor health or the death of a relative relates negatively to their measure of wellbeing, while work-related accomplishments relate positively to it. Figure 9.4 shows the positive and negative mood panel of Mozart. Using random life events as instruments in an individual fixed effects model, the author shows that negative emotions trigger creativity in the music industry.
Measures extracted from the digital footprints of individuals can also provide a set of alternative metrics for major determinants of wellbeing available at a much more detailed level (across time and space). One example can be found in previously mentioned research by Algan et al. (2019).
They investigate the various domains of wellbeing explaining variation in overall predicted life satisfaction using Google search data for a list of 554 keywords. From this list of words, they construct 10 composite categories corresponding to different dimensions of life. They find that higher searches for domains like Job Market, Civic Engagement, Healthy Habits, Summer Leisure, and Education and Ideals are consistently associated with higher wellbeing at the aggregate US level, while the Job Search, Financial Security, Health Conditions, and Family Stress domains are negatively associated with wellbeing. The fact that "Big Data" often includes time and geographical information (e.g. latitude and longitude) can trigger both new research designs and novel applications to wellbeing research. For instance, data based on the location of mobile devices can

Fig. 9.3  (continued) based on the subjective wellbeing surveys known as BIARE and the big data research project "Estado de Animo de los Tuiteros en Mexico" (The mood of twitterers in Mexico), both carried out by INEGI. These slides are part of a presentation given by Gerardo Leyva (head of research at INEGI) during the "2° Congreso Internacional de Psicología Positiva 'La Psicología y el Bienestar'", November 9–10, 2018, hosted by the Universidad Iberoamericana in Mexico City, and in the "Foro Internacional de la Felicidad 360", November 2–3, 2018, organized by Universidad TecMilenio in Monterrey, México). (a) Daily Mood (November 2016–September 2018). (b) Mexico-Germany Game (Minute-by-Minute Mood)


Fig. 9.4  Positive and negative emotions of Wolfgang Amadeus Mozart. (Source: Borowiecki (2017). The left (right) panel plots the author’s index of positive (negative) emotions from Mozart’s letters from age 15 until his death at age 35. The depicted prediction is based on a local polynomial regression method with an Epanechnikov kernel, and it is presented along with a 95% confidence interval.)

have many applications in the domains of urban planning, which we know matters for things as important to wellbeing as trust, security or sense of community (Ratti et al. 2006). Another example can be found in research by Clement Bellet (2017), who matches millions of geo-localised suburban houses from Zillow, a large American online real estate company, to reported house and neighborhood satisfaction from the American Housing Surveys. The author finds that new constructions which increase house size inequality lower the house satisfaction of existing homeowners through a relative size effect, but no such effect is found on neighborhood satisfaction. Making use of the richness of Big Data, this research also investigates the contribution of spatial segregation and reference groups to the trade-off new movers face between higher status (i.e. a bigger house) and higher neighborhood satisfaction.
Life satisfaction is of course not the only thing of relevance to our inner lives that can be predicted. Important determinants of wellbeing can also be predicted. For instance, online ratings have been used to measure interpersonal trust and reciprocity, known to be major drivers of subjective wellbeing.9 How much can we know

9. See for instance (Proserpio et al. 2018) or (Albrahao et al. 2017) for recent applications to AirBnb data. See also (Helliwell et al. 2016) for a survey on trust and wellbeing.


about important determinants of wellbeing simply from how someone writes, walks, looks, smells, touches, or sounds?
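The positive/negative word-ratio index behind the Mexican Twitter graphs above can be sketched with a toy lexicon. The word lists and tweets below are invented for illustration; real applications use validated lexicons (such as LIWC) over millions of tweets.

```python
# Toy version of a lexicon-based daily mood index: the ratio of positive
# to negative words in each day's tweets. Lexicon and tweets are invented.
POSITIVE = {"happy", "win", "love", "great"}
NEGATIVE = {"sad", "loss", "angry", "fear"}

def daily_mood_ratio(tweets_by_day):
    """Return the positive/negative word-count ratio for each day."""
    ratios = {}
    for day, tweets in tweets_by_day.items():
        pos = neg = 0
        for tweet in tweets:
            for raw in tweet.split():
                word = raw.lower().strip(".,!?;:")  # crude tokenisation
                if word in POSITIVE:
                    pos += 1
                elif word in NEGATIVE:
                    neg += 1
        ratios[day] = pos / neg if neg else float("inf")
    return ratios

tweets = {
    "2018-06-17": ["Great win today, so happy!", "I love this team"],
    "2018-06-27": ["Sad loss tonight", "angry and sad, what a match"],
}
ratios = daily_mood_ratio(tweets)
print(ratios)
```

The index is deliberately simple: it ignores negation, sarcasm, and context, which is one reason such measures track short-lived events well but say little about long-run wellbeing.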

9.3.4  New Measures and Measurement Tools

To see the future uses of Big Data for wellbeing, we can look at developments in measurement. Pre-internet, what was measured was largely objective: large possessions, social relations (marriages), births and deaths, forms of accreditation (education, training, citizenship), income flows (employment, welfare, taxes), other-relating activities (crime, court cases, social organisations, large purchases). Measurement in all these cases was usually overt and took place via forms and systems that the population could reasonably be aware of.
Relatively new is data on purely solitary behaviour that identifies individuals, including all things to do with body and mind. There is an individual's presence in space (where someone is), all manner of health data on processes within, and data on physical attributes, such as fingerprints, retina structure, height, weight, heart rates, brain activity, etc. Some of this information is now gathered as a matter of course by national agencies, starting before birth and continuing way past death, such as height, eye colour, fingerprints, physical appearance, and age. In some countries, like Singapore and China, there are now moves under way to also store facial features of the whole population, which are useful in automatically recognising people from video information and photos, allowing agencies to track the movements of the population by recognising them wherever they are. In the European Union, facial features are automatically used to verify that individuals crossing borders are the same as the photos on their passports. Fingerprint and iris recognition is nigh perfect, and is already used by governments to check identity. This has uses that are arguably very positive, such as in India, where fingerprint- and iris-based ID is now used to bypass corruption in the bureaucracy and directly pay welfare recipients and others.
It of course also has potential negative uses, including identity theft by copying fingerprint and iris-scan information in India. The main biophysical measurement devices now in common use in social science research (and hence available to everyone) are the following: MRIs, fMRIs, HRV, eye-scanners, skin conductivity, cortisol, steroid hormones, introspection, and mobile sensors that additionally pick up movement, speech, and posture. Table 9.2 lists the measurement devices currently in wide operation in the social sciences, with their essential characteristics and uses reviewed in the book edited by Gigi Foster (2019). Individually, each of these biophysical markers has been studied for decades, with fairly well-known properties. Some have been around for centuries, such as eye-tracking and heart rate monitoring. Table 9.2 quickly describes them and their inherent limitations.
Whilst these measures have many research uses, they all suffer from high degrees of measurement error and high costs, and require the active participation of the


Table 9.2  Description of biophysical measurement devices used frequently in social sciences

Magnetic Resonance Imaging (MRI): Requires individuals to lie in a large machine and is mainly used to map the size and structure of the brain, useful for finding brain anomalies. It is expensive, rare, and not informative on what people think.

Functional MRI (fMRI): Requires large contraptions and is used to track blood flows in the brain, marking the level of neuronal activity, useful for knowing which areas are active in which tasks. It is expensive, rare, very imprecise about people's thoughts and thought processes (lots of brain areas light up even in simple cases), and thus of very limited use to any would-be manipulator.

Heart Rate Variability: Can be tracked with heart monitors (small or large) and is primarily useful for picking up short-term stress and relaxation responses. It is cheap and can be part of a portable package but is unreliable (high individual heterogeneity) and mainly useful in very specific applications, such as monitoring sleep patterns or stress levels in work situations.

Eye-tracking scanners: Require close-up equipment (preferably keeping the head fixed) and can be used to see what draws attention. They are awkward, quite imprecise, and almost impossible to use outside of very controlled situations, because one needs to know the exact spot of all the things that someone could be looking at in 3-dimensional space. Except for things like virtual reality, that is still too hard to do on a large-scale basis.

Skin conductivity: Essentially about measuring sweat responses and requires only small on-body devices, mainly useful as a measure of the level of excitement. It is very imprecise though (people sweat due to weather, diet, movement, etc.) and even at best only measures the level of excitement, not whether that is due to something positive or negative.

Cortisol levels: Can be measured in bodily fluids like saliva and are primarily used as a measure of stress. Cortisol reacts sluggishly to events, is susceptible to diet and individual-specific variation, varies highly across individuals and over time due to diurnal, menstrual, and other cycles, and is difficult to measure continuously.

Steroid hormones: Like testosterone, can be measured via saliva and are a measure of things like aggression, for instance going up in situations of competition and arousal. They vary over the lifetime and the day cycle, having both very long-term effects (e.g. testosterone in utero affects the relative length of digits) and short-run effects (more testosterone increases risk-taking). They are difficult and expensive to measure continuously though, and their ability to predict behaviour is patchy.

Introspection: The awareness of one's own bodily processes, mainly measured by asking people to guess their own heart rate, and linked to cognitive-emotional performance. It is a very imprecise construct though, and its ability to predict behaviour is highly limited.

Mobile sensors: Can track many aspects of the body and behaviour at the same time, as well as yield dynamic feedback from the individual via spot-surveys.

individuals concerned. People know if there is a large device on their heads that tracks their eye-movements. And they can easily mislead most of these measurement devices if they so wished, for instance via their diet and sleep patterns (which affect pretty much all of them). With the exception of non-invasive mobile sensors,


which we will discuss later, the possibilities for abuse are thereby limited, and their main uses require considered consent.
A new development is the increased ability to recognise identity and emotional state by means of features that can be deduced from a distance: facial features (the eyes-nose triangle), gait, facial expressions, voice, and perhaps even smell (see Croy and Hummel 2017). These techniques are sometimes made readily available, for instance when it comes to predicting emotional display from pictures. For instance, FaceReader is a commercial software package using an artificial neural network algorithm trained on more than 10,000 faces to predict emotions like anger or happiness with high levels of accuracy (above 90% for these two) (see Bijlstra and Dotsch 2011). The ability to recognise individuals from afar is now advancing at high speed, with whole countries like Singapore and China investing billions in this ability. Recent patents show that inventors expect to make big money in this field.10
The ability to recognise identity from a distance is not merely useful for governments trying to track down criminals in their own country or 'terrorists' in a country they surveil covertly. It can be used for positive commercial applications, like allowing mobile phone companies and others to unlock the devices of customers who have forgotten their passwords. Yet it also offers a potential tool for companies and other organisations to link the many currently existing datasets that have a different basis than personal identity, so as to build a profile of whole lives.
Rather, they are based on the devices used, such as IP-addresses, credit cards, Facebook accounts, email accounts, mobile phone numbers, Instagram IDs, twitter handles, etc. Only rarely can these records be reliably linked to individuals’ true identities, something that will be increasingly difficult for companies when individuals get afraid of being identified and start to deliberately mix and swap devices with others. Remote recognition might give large organisations, including companies that professionally collect and integrate datasets, the key tool they need to form complete maps of individuals: by linking the information from photos, videos, health records, and voice recordings they might well be able to map individuals to credit cards, IP addresses, etc. It is quite conceivable that Google Street view might at one point be used to confirm where billions of individuals live and what they look like, then coupled with what persons using a particular credit card look like in shop videos. This can then be coupled with readily available pictures, videos, and documents in ample supply on the internet (eg Youtube, facebook, twitter, snapchat, etc.) to not only link records over time, but also across people and countries. The time might thus come that a potential employer is able to buy your personal life story, detailing the holidays you had when you were 3 years old, deduced from pictures your aunt

10. See https://patents.google.com/patent/US9652663B2/en


posted on the internet, not even naming you, simply by piecing together your changing features over time.
Remote recognition is thus a potentially powerful new surveillance tool that has a natural increasing-returns-to-scale advantage (accuracy and usefulness increase with data volume), which in turn means it favours big organisations over small ones. It is not truly clear what counter-moves are available to individuals or even whole populations against this new technology. The data can be analysed and stored in particular small countries with favourable data laws, and bought anonymously online by anyone willing to pay. And one can see how many individual holders of data, including the videos made by shopkeepers or street vendors, have an incentive to sell their data if there is a demand for it, allowing the 'map of everyone's life' to be gathered, rivalling even the data that governments have.
The advances in automatic emotion recognition are less spectacular, but nevertheless impressive. At the latest count, it appears possible for neural-network software that is fed information from videos to recognise around 80% of the emotions on the faces of humans. If one adds to this the potential in analysing human gaits and body postures (see Xu et al. 2015), the time is soon upon us in which one could remotely build up a picture of the emotions of random individuals with 90% accuracy. The imperfection in measurement at the individual level, which invalidates it clinically, is irrelevant at the group level, where the measurement error washes out.
Many of the potential uses of these remote emotion-recognition technologies are thus highly advantageous to the wellbeing research agenda. They promise, for instance, to revolutionise momentary wellbeing measurement of particular groups, such as children in school, prisoners in prison, and passengers on trains.
Instead of engaging in costly surveys and non-randomised experiments, the mood of workers, school children, and whole cities and countries can be measured remotely and non-invasively, without the need to identify anyone personally. This might well revolutionise wellbeing research and applications, leading to less reliance on costly wellbeing surveys and the ability to 'calibrate' wellbeing surveys in different places and across time with the use of remote emotion measures on whole groups. Remote emotional measurement of whole groups is particularly important once wellbeing becomes more of a recognised policy tool, giving individuals and their groups an incentive to 'game' measures of wellbeing to influence policy in the desired direction. There will undoubtedly be technical problems involved, such as cultural norms in emotional expression, but the promise is high.
The potential abuses of remote emotional measurement are harder to imagine, precisely because the methods are quite fallible at the individual level, just as with 'lie detectors' and other such devices supposed to accurately measure something that is sensitive to people. Individuals can pretend to smile, keep their faces deliberately impassive, and practise gaits that mimic what is desired, should there be an individual incentive to do so. Hence commercial or government abuse would lie more in the general improvement it would herald in the ability to predict individual and group wellbeing.
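The claim that individual-level measurement error washes out at the group level follows from the law of large numbers, and can be checked with a small simulation (all numbers below are synthetic):

```python
# A noisy mood measure that is poor per person still recovers the group
# mean accurately: independent errors average out across 10,000 people.
import numpy as np

rng = np.random.default_rng(42)

true_mood = rng.normal(loc=6.5, scale=1.0, size=10_000)    # true mood, 0-10 scale
measured = true_mood + rng.normal(scale=2.0, size=10_000)  # heavy per-person noise

per_person_error = np.abs(measured - true_mood).mean()     # large (about 1.6 points)
group_error = abs(measured.mean() - true_mood.mean())      # tiny (noise averages out)
print(per_person_error, group_error)
```

Here the typical per-person error is around 1.6 points on a 10-point scale, useless clinically, while the group mean is off by well under a tenth of a point.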


If one then thinks of data involving interactions and devices, one thinks of the whole world of online behaviour, including Twitter, mobile phones, portable devices, and what-have-you. Here too, the new data possibilities have opened new research possibilities as well as possible abuses. Possibly the most promising and dangerous of the new measurement options on interactive behaviour 'in the real world' is to equip a whole community, like everyone in a firm, with mobile sensors so as to analyse how individuals react to each other. This is the direction taken by the MIT Media Lab (Hedman et al. 2012).
The coding of mood from textual information ("sentiment analysis") has led to an important literature in computer science.11 So far, its empirical applications have mainly resulted in predictive modelling of industry-relevant outcomes like stock market prices, rather than the design of wellbeing-enhancing policies (Bollen et al. 2011). Wellbeing researchers should thus benefit greatly from collaborating with computer scientists in the future. Such collaborations should prove fruitful for the latter as well, who often lack knowledge on the distinction between cognitive and affective measures of wellbeing, on which measures should be used to train a predictive model of wellbeing (besides emotions), and why. Another promising technique is to use speech analysis to analyse emotional content or hierarchical relations, building on the finding that individuals lower in the social pecking order adapt their speech and language to those higher in the social pecking order (Danescu et al. 2012). Overall, these methods should lead to major improvements in our capacity to understand and affect the subjective wellbeing of a population.
By equipping and following everyone in a community, researchers and manipulators might obtain a full social hierarchy mapping that is both relative (who is higher) and absolute (average hierarchical differences), yielding social power maps of a type not yet seen before. Analyses of bodily stances and bilateral stress-responses hold similar promise for future measurement. This can be used both positively (eg to detect bullying) and negatively (to enforce bullying).

9.3.5  Is Big Data Driving the Renewed Interest in Wellbeing?

The explosion of choice that the Internet has enabled is probably a key driver of the use of wellbeing information: to help them choose something they like from the millions of possibilities on offer, consumers use information on how much people like themselves enjoyed a purchase.
Large internet companies actively support this development and have in many ways led research on wellbeing in this world. eBay and Amazon, for instance, regularly experiment with new forms of subjective feedback that optimise the information about the trustworthiness of sellers and consumers. Nearly all newspapers use a system of likes for their comments to help individuals sift through them and

11. See (Liu 2012) for a review.


inform themselves of what others found most interesting. Brands themselves are getting increasingly interested in collecting the emotional attitudes linked to their mentions on social networks. Social media monitoring companies like Brandwatch analyse several billion emoticons shared on Twitter or Instagram each year to learn which brands generate the most anger or happiness. Hence some part of the surge in interest in wellbeing is because of Big Data: individuals are so bewildered by the huge variety of choice that they turn to the information inherent in the subjective feedback of others to guide their own choices.
This subjective feedback is of course subject to distortion and manipulation, and one might well see far more of that in the future. Restaurants may already manipulate their Facebook likes and ratings on online restaurant guides (as well as off-line guides that give stars to restaurants), leading to an arms race in terms of sophisticated rating algorithms that screen out suspect forms of feedback.12 Yet the key point is that Big Data gives more value to wellbeing measurements. New generations of consumers and producers are entirely used to subjective feedback, including its limitations and potential abuse: they have learnt by long exposure what information there is in the subjective feedback of others.
An interesting aspect of the Big Data revolution is that it is largely driven by private organisations, not government. It is Google that collected information on all the streets and dwellings in the world. Facebook owns billions of posts that have information on trillions of photos, videos, and personal statements. Apple has information on the billions of mobile phones and app-movements of its customers, data it can use for advertising. Private companies also collect information on millions of genetic profiles, so as to sell people gene charts that show them where their ancestors came from on the basis of a sample of their own genes.
They also have the best data on genealogy, built from family trees going back centuries, which allows them for instance to trace the beneficiaries of wills and unclaimed inheritances. Lastly, they collect embarrassing information on bankruptcies, creditworthiness, criminal activities, pornography, defamatory statements, and infidelity, allowing them to blackmail individuals and to sell information about individuals of interest to buyers such as employers or potential partners. The fact that this data is in private hands and often for sale puts academics (and sometimes governments) at a serious disadvantage, as they often lack both the best data and the resources: no academic institution had the resources to set up Google Maps or Wikipedia, nor the databases of the NSA that track people and communications around the world. In many areas of social science, then, the academic community is likely far behind the commercial research units inside multinational organisations. Amazon, eBay, and Google probably know more about consumer sentiment and purchasing behaviour than any social scientist in academia. A few leading academic institutions or researchers do sign data-sharing agreements with institutions like Nielsen or Facebook. Yet these agreements are scarce and can lead to problems, like the 2017 scandal of the (ab)use of Facebook profiles by Cambridge Analytica.

However, the fact that private companies gather the bulk of Big Data means we should not confuse the existence of Big Data with an omniscient Big Brother able to analyse and coherently use all the information. Individual data packages are held for particular reasons, and data in one list is often like a foreign language to other data, stored in different ways on different machines. As a result, marketing companies often buy inaccurate information on customer segments (age, gender, etc.) from data brokers (see Neumann et al. 2018). We should thus not presume that merely because the data exists, it is all linked and used to the benefit or harm of the population. It costs resources to link and analyse data, meaning that only the most lucrative forms of data get matched and used, with a market process discovering those uses gradually over time. An average health centre can for instance easily have 50 separate databases to keep up to date, ranging from patient invoices to medicine inventory and pathology scans. The same person can appear in those databases several times: as the subject of pathology reports, on the patient list of 2015, on the invoice list of 2010, as the supplier of computer software, as the father of another patient, and as the partner of yet another. All are on separate lists, not recorded in the same format, and thus not necessarily recognised as one and the same person.

12. Online platforms actively try to mitigate such manipulation concerns. Besides, whether subject to manipulation or not, these reviews play a large role in economic outcomes such as restaurant choices and customer visits. For a review of user-generated content and social media that addresses manipulation concerns, see Luca and Zervas (2016).
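The health-centre example points at the classic record-linkage problem: the same person is stored under different conventions in different systems. A minimal sketch (field names and records are invented for illustration) shows how normalising names and dates of birth lets some, but not all, records be recognised as the same person.

```python
import re

# Hypothetical rows from three separate systems in one health centre;
# the field names and formats are invented for this sketch.
pathology = [{"name": "Smith, John", "dob": "1970-04-02"}]
invoices_2010 = [{"patient": "JOHN SMITH", "born": "02/04/1970"}]
suppliers = [{"contact": "J. Smith", "born": "02/04/1970"}]

def normalise_name(raw):
    """Lower-case, drop punctuation, and sort the name parts so that
    'Smith, John' and 'JOHN SMITH' yield the same key."""
    parts = re.findall(r"[a-z]+", raw.lower())
    return " ".join(sorted(parts))

def normalise_dob(raw):
    """Map both 'YYYY-MM-DD' and 'DD/MM/YYYY' to one canonical form."""
    if "/" in raw:
        d, m, y = raw.split("/")
        return f"{y}-{m}-{d}"
    return raw

def link_key(name, dob):
    return (normalise_name(name), normalise_dob(dob))

k1 = link_key(pathology[0]["name"], pathology[0]["dob"])
k2 = link_key(invoices_2010[0]["patient"], invoices_2010[0]["born"])
k3 = link_key(suppliers[0]["contact"], suppliers[0]["born"])
# k1 == k2: the two full-name records are recognised as one person,
# but the initial-only supplier record ('J. Smith') still fails to match.
```

Real systems use probabilistic matching over many more fields, but the basic difficulty is the one shown here: without costly normalisation and linkage, the databases simply do not know they describe the same person.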

9.4  Implications: The Economic Perspective

9.4.1  Price Discrimination, Specialisation and AI

We want to discuss three economic aspects of Big Data: the issue of predictability, insurance, and price discrimination; the general-equilibrium aspects of the improved predictability of tastes and abilities; and the macro-consequences of the availability of so much information about humanity.

There are two classic reasons for insurance: one is to insure individuals against sheer bad luck, and the other is to share risks within a community of different risk profiles. The first is immune to Big Data by construction, but the second is undermined by it. If one were able to predict different risk profiles, then insurance companies would either charge higher premiums to higher risks, or not insure the high-risk types at all. The use of Big Data thus means a reduction in risk-sharing, which benefits the well-off (who are generally lower risks).13 This is indeed happening in health insurance (e.g. Tanner 2017), but also in other insurance markets. Data on age, weight, and self-rated health is predictive of future longevity, health outcomes, and consumption patterns, making it of interest to health insurance companies, travel insurance companies, financial institutions, potential partners, potential employers, and many others. The degree to which such data is known and can be used by insurance companies depends on countries' social norms and legislation. Denmark is very free with such data, offering 5% of its population records to any researcher in the world to analyse, giving access to the health, basic demographics, and family information of individuals, including the details of their birth and their grandparents. Norway is similarly privacy-insensitive, with everyone's tax records available to anyone in the world. Yet both Denmark and Norway have a free public health service, so it matters relatively little that one could predict the individual health risk profiles of their citizens. Where private health insurance is more important, the issue of Big Data is more acute. Some countries, like Australia, forbid health insurance companies from using personal information (including age) to help set their insurance rates.

The use of Big Data to differentiate between low risks and high risks is but one example of the general use of Big Data to price-discriminate, a theme discussed more generally by Alessandro Acquisti in his research (Acquisti et al. 2016). When it comes to products whose cost differs by buyer (i.e., insurance), price discrimination works against the bottom of the market; but when it concerns a homogeneous good, it works in favour of the bottom of the market: lower prices are charged to individuals with a lower ability to pay, which reduces inequality. Privacy regulation can thereby hinder favourable price discrimination. Privacy regulations restricting advertisers' ability to gather data on Internet users have for instance been argued to reduce the effectiveness of online advertising, as users receive mis-targeted ads (Goldfarb and Tucker 2011).

13. Looking at refusals to reveal private information on a large-scale market research platform, Goldfarb and Tucker (2012) provide evidence of increasing privacy concerns between 2001 and 2008, driven by contexts in which privacy is not directly relevant, i.e. outside of health or financial products.
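The risk-sharing point can be made with a toy calculation (all numbers invented): once an insurer can tell low risks from high risks, the single pooled premium splits into two risk-rated premiums, to the benefit of the low-risk group.

```python
# Two customer groups facing a possible 10,000-euro loss; the shares and
# claim probabilities below are invented for illustration.
loss = 10_000
share_low, p_low = 0.9, 0.01    # 90% of customers, 1% claim probability
share_high, p_high = 0.1, 0.10  # 10% of customers, 10% claim probability

# Without risk prediction: one pooled, actuarially fair premium for all.
pooled_premium = (share_low * p_low + share_high * p_high) * loss  # ~190

# With Big Data: each group is charged its own expected loss, so the
# low-risk (typically better-off) group gains and the high-risk group loses.
premium_low = p_low * loss      # ~100
premium_high = p_high * loss    # ~1,000
```

Both pricing schemes break even for the insurer; what Big Data changes is purely the cross-subsidy from low risks to high risks, which disappears once risk can be predicted.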
The main macro-economic effect of Big Data is to reduce market frictions: it is now easier to know when shops have run out of something, where the cheapest bargains are, what the latest technologies are, who has the right skills to work with, what the ideal partner looks like, where the nearest fuel station is, and so on. In the longer run, the main effect of reduced frictions is to increase the degree of specialisation in the economy. The increase in specialisation will come from reduced search frictions in knowing suppliers and buyers better: companies and individuals can target their services and products better and more locally, which in general is a force for greater specialisation, a change that Durkheim argued was the main economic and social change of the Industrial Revolution. Greater specialisation can be expected to have many effects on social life, some of which are very hard to predict, just as the effects of the Industrial Revolution were hard to foresee in the nineteenth century.

Specialisation reduces the importance of kinship groups in production and increases the reliance on anonymous platforms and formal exchange mechanisms, which increases efficiency but also makes economic relations less intimate. On the other hand, specialisation and increased knowledge of others increase communication over large distances, which is likely to be pacifying and perhaps culturally enriching. Specialisation will favour the production factor that is hardest to increase and most vital to production, which in the past was human capital, but in the future might be physical capital in the form of AI machines. We already see a reduction in the share of labour in national income, and Big Data might increase the importance of sheer computing power and data storage capacity, both likely to favour capital and thus increase inequality whilst reducing median wages. However, this is no more than speculation, as it is also possible that Big Data will allow the majority of human workers to focus on skills that are not AI-replicable, perhaps human interaction and creativity (though some fear there is no human skill that AI cannot over time acquire).

There will also be macro-effects of Big Data via a totally different avenue: the effect of so much data being available for training the intelligence of non-human entities. Artificial Intelligence techniques already use Wikipedia and the whole of the Internet to train, for instance, translation programs from one language into another. The internet was used by IBM's Watson machine to outperform humans at 'Jeopardy!', a general-knowledge quiz. And right now the internet's vast store of pictures and videos is being used to train AI machines in the recognition of objects, words, attitudes, and social situations.14 Essentially, the available knowledge on the lives of billions of humans is improving the intelligence of non-human entities. This might benefit humanity, for instance by allowing individuals from totally different language communities to quickly understand each other, or it might be training rivals for political dominance. It is beyond this article to speculate on the end result of these societal forces, as one is then pretty much talking about the future of the world, so we simply state here that the explosion in data available to many different actors is part and parcel of major economic shifts that seem difficult to contain and hard to predict.

9.4.2  Privacy and Conclusions

The point of gathering and analysing Big Data is to uncover information about individuals’ tastes, abilities, and choices. The main case in which that is a clear problem is where individuals want to keep secrets from others.15 That in turn raises the issue of ‘face’, i.e. the need for individuals to be seen to adhere to social norms whilst in reality deviating from them. Big Data potentially uncovers ‘faces’: the faces individuals present to some can be unmasked, leading to the possibility of blackmail on a huge scale.

One should expect this danger to lead to countermoves. Whilst some companies may buy information on the clicks made from an IP address that is then linked to a credit card and then to an individual name, the individual can react by setting up random internet-behaviour routines specifically designed to create click-noise, or by hiding their internet tracks entirely using dedicated software. Similarly, individuals can open multiple bank accounts, use various names, switch devices with others, or limit their web presence altogether. The rich will find this easier than the poor, increasing the divide.

14. For instance, recent papers have used scenic ratings on internet sites with pictures, or hedonic pricing models, to build predictive models of what humans find scenic (Seresinhe et al. 2017; Glaeser et al. 2018).
15. There are other cultural aspects of the Internet age in general that lie outside the scope of this paper, such as the general effect of social media, the increased (ab)use of the public space for attention, and the effects of increasingly being in a Global Village of uniform language, tastes, and status.

The crucial question for the state is when and how to respect the right of individuals to keep their ‘faces’ and thus, in some sense, to lie to others. The key aspect of that discussion lies in the reasons for keeping the faces. When the reason to keep a face is criminal, the law already mandates that anyone with data on the criminal activities of others bring it to the attention of the authorities. Big Data gatherers and analysers that uncover criminal activities will hence be pressed into becoming informers for the law, lest they become complicit in covering up crimes. When it comes to crime, Big Data will simply be part of the cat-and-mouse game between authorities and criminals, which is as old as society itself. Take taxation, which was the original reason for the emergence of Big Data. Sophisticated individuals will now use Big Data to cover up what they earn via anonymous companies, online purchases via foreign countries, and what-have-you. Tax authorities react by mandating more reporting, though with uncertain effect. Even China, arguably the country most advanced in keeping its population under constant electronic surveillance, has great difficulty curtailing its wealthier citizens, whose children often study abroad and who funnel their wealth abroad as well.16 There are also non-criminal reasons for people to keep different faces for different audiences, though.
People can be embarrassed about their looks, their sexuality, their family background, their age, their health, their friends, their previous opinions, and their likes. They might also want to keep their abilities, or lack thereof, secret from employers, friends, and families. Having their personal information known to all could well be devastating for their careers, their love lives, and their families. There is a whole continuum of cases here in which ‘face’ might differ from ‘reality’, ranging from self-serving hypocrisy to good manners to maintaining diverging narratives with diverging interest groups. From a societal perspective, a decision has to be made as to whether it is deemed beneficial to help individuals keep multiple faces hidden.

The norms on what is considered embarrassing and private differ from country to country. Uncovering faces might be considered a crime in one country and totally normal in another. Having an angry outburst on social media might be considered a healthy expression in one country and an unacceptable transgression in another. Medical information about sexually transmitted diseases (even if deduced from surveillance cameras or Facebook) might scarcely raise an eyebrow in one country and be devastating to reputation in another. Indeed, information that is gathered as a matter of course by officials in one country (e.g. gender and ethnicity) might be illegal to gather in another (e.g. France, where one is forbidden from storing data on ethnicity). World-wide rules on what information should or should not be subject to privacy legislation (or on what a researcher should consider unethical to gather) would hence seem futile. Embarrassment and privacy are culture-specific.

Is wellbeing itself subject to embarrassment? It would seem not: response rates to wellbeing questions are very high in every country sampled, signifying wellbeing's universal status as a general signal of the state of someone's life that is regularly communicated in many ways.

It does not immediately follow from the existence of embarrassment that privacy is good for society. For instance, an employer who screens out an unhappy person as a potential worker, because a happier alternative candidate is likely to be more productive, does not necessarily have a net negative effect on society, even though the person screened out is probably worse off in the short run. From a classic economic point of view, the employer who discriminates against the unhappy because they are less productive is improving the overall allocation of people to jobs, leaving it to societal redistributive systems to provide a welfare floor (or not) for those whose expected productivity is very low. The same argument could be run for the formation of romantic partnerships, friendships, and even communities: the lack of privacy might simply improve the overall operation of society. Yet it seems likely that the inability of those without great technical skill to maintain multiple faces will favour those already at the top.

16. A popular means of estimating the size of tax evasion is to look at the difference between the actual and the official usage of cash, yielding perhaps 25% tax evasion in China (Jianglin 2017). There have also been attempts to compare reported exports with reported imports (Fisman and Wei 2004).
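The cash-ratio approach to estimating tax evasion mentioned above (Jianglin 2017) can be illustrated with a stylised calculation: any cash in circulation beyond what reported transactions would require is attributed to the hidden economy and scaled up by the velocity of money. All numbers below are invented, and this is only the simplest textbook variant of the method, not Jianglin's revised model.

```python
# Stylised cash-ratio estimate of the hidden economy (invented numbers).
reported_income = 100.0   # official national income
deposits = 100.0          # demand deposits
cash = 30.0               # currency actually in circulation
baseline_ratio = 0.20     # cash/deposit ratio assumed if every
                          # transaction were reported

# Cash beyond the baseline is attributed to unreported transactions.
excess_cash = cash - baseline_ratio * deposits   # 10.0 units of excess cash

# Velocity of money in the reported economy scales excess cash into income.
velocity = reported_income / (baseline_ratio * deposits + deposits)
hidden_income = excess_cash * velocity
evasion_share = hidden_income / reported_income  # roughly 8% here
```

The whole estimate hinges on the assumed baseline ratio, which is why such figures (like the 25% estimate for China) should be read as rough orders of magnitude.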
Whilst the poor might not be able to hide from their management what they really think, and might not be able to hide embarrassing histories, those with a greater understanding of the new technologies and deeper pockets will likely be able to keep multiple faces. One can for instance already pay internet firms to erase one's searchable history on the web.

Whilst the wellbeing benefits and costs of maintaining multiple faces are not well researched scientifically, the UN has nevertheless declared a "Right to Privacy", which consists of the right to withhold information from public view, to be a basic human right. Article 12 says: "No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks." The UN definition seems partly motivated by the wish for individuals to be free to spend parts of their day without being bothered by others, which is not about multiple faces but about the limits on the ability of people to impose themselves on others. That is not in principle connected to Big Data, and so not of immediate interest here. The 'face' aspect of privacy is contained in the reference to "honour and reputation" and is seen as a fundamental human right.


If we thus adopt the running hypothesis that holding multiple faces is important to a well-functioning society, the use of Big Data to violate that privacy, and thus to attack reputation, is a problem. Privacy regulation at present is not set up for the age of Big Data, where such laws exist at all. For instance, the United States does not have a general privacy law, though reference is made in the constitution against the use by the government of information that violates privacy: companies can do what the government cannot in the United States. In the United Kingdom, there is no common-law protection of privacy (because various commissions found they could not adequately define privacy), but there is jurisprudence protecting people from having parts of their private life exposed (e.g., illegally obtained nude pictures cannot be published), and there is a general defence against breaches of confidence, which invokes the notion that things can be said or communicated 'in confidence'. Where confidentiality ends and the right of others to remark on public information begins is not clear.

Finally, is it reasonable to think that individuals will be able to control these developments and to enforce considered consent for every possible use of the Big Data collected? We think this is likely to be naive: in an incredibly complex and highly specialised society, it must be doubted that individuals have the cognitive capacity to understand all the possible uses of Big Data, or that they would have the time to truly engage with all the informed-consent requests they would then receive. One sees this dynamic happening right now in the EU, where stricter privacy rules that came into force in mid-2018 forced large companies to obtain more consent from their clients.
As a result, e-mail inboxes were flooded with additional information, requiring consumers to read hundreds of pages in the case of large companies, followed by take-it-or-leave-it consent requests that boil down to "consent to our terms or cease using our services". This is exactly the situation that has existed for over a decade now, and it is simply not realistic to expect individuals to wade through all this. The limits of considered consent in our society are being reached, with companies and institutions finding new applications and forms of service faster than individuals can keep up with them. Hence, the 'consumer sovereignty' approach to consent and the use of Big Data on the internet seems to us to have a limited lifetime left.

The historical solution to situations where individuals are overwhelmed by organised interests far ahead of them technologically and legally is to organise in groups and have professional intermediaries bargain on behalf of the whole group. Unions, professional mediators, and governments are examples of that group-bargaining role. It must thus be expected that in countries with benevolent and competent bureaucracies, it will be up to government regulators to devise and enforce defaults and limits on the use of Big Data. In countries without competent regulators, individuals will probably find themselves relying on the market to provide them with counter-measures, such as internet entities that take on a pro-bono role in this (for example the Inrupt initiative).

A key problem that even benevolent regulators will face is that individuals on the internet can be directed to conduct their information exchange and purchases anywhere in the world, making it hard for regulators to limit the use of 'foreign-produced' data. Legal rules might empower foreign providers by applying only to domestic producers of research, which would effectively stimulate the out-sourcing of research to other countries, much as Cambridge Analytica offered manipulation services to dictators in Africa from offices in London. Concerns for privacy, along with other concerns that national agencies or international charitable groups might have about Big Data and the difficulty of controlling the internet in general, might well lead to more drastic measures than mere privacy regulation. It is hard to predict how urgent the issue will prove to be and what policy levers regulators actually have. The ultimate policy tool for national agencies (or supranational authorities such as the EU) would be to nationalise parts of the internet and then enforce a privacy-sensitive architecture upon it. Nationalisation would of course bring with it many other issues, and might arise from very different concerns, such as the taxation of internet activities. It seems likely to us that events will overtake our ability to predict the future in this area quite quickly.

Our main conclusion is then that Big Data is increasing the ability of researchers, governments, companies, and other entities to measure and predict the wellbeing and the inner life of individuals. This should be expected to increase the ability to analyse the effects of policies and major changes on wellbeing, which should boost interest in, and knowledge of, wellbeing. The increase in choices that the information boom is generating will probably increase the use of subjective ratings to inform other customers about goods and activities, or about the participants in the "sharing economy" with whom they interact.
At the aggregate level, the increased use of Big Data is likely to increase the degree of specialisation in services and products across the whole economy, and to bring a general reduction in the ability of individuals to guard their privacy. This in turn is likely to lead to profound societal changes that are hard to foretell, but that on the current trajectory seem to favour large-scale information collectors over smaller-scale providers and users. It is likely to leave individuals less in control of how information about them is used, and of what they are told, or are even able to discover, about the communities in which they live.

References

Abrahao, B., P. Parigi, A. Gupta, and K.S. Cook. 2017. Reputation Offsets Trust Judgments Based on Social Biases Among Airbnb Users. Proceedings of the National Academy of Sciences 114 (37): 9848–9853.
Acquisti, A., C. Taylor, and L. Wagman. 2016. The Economics of Privacy. Journal of Economic Literature 54 (2): 442–492.
Algan, Y., T. Mayer, and M. Thoenig. 2013. The Economic Incentives of Cultural Transmission: Spatial Evidence from Naming Patterns Across France.
Algan, Y., F. Murtin, E. Beasley, K. Higa, and C. Senik. 2019. Well-Being Through the Lens of the Internet. PLoS ONE 14 (1): e0209562.


Argyle, M., D. Kahneman, E. Diener, and N. Schwarz. 1999. Well-Being: The Foundations of Hedonic Psychology. New York: Russell Sage Foundation.
Bellet, C. et al. 2017. The Paradox of the Joneses: Superstar Houses and Mortgage Frenzy in Suburban America. CEP Discussion Paper 1462.
Bijlstra, G., and R. Dotsch. 2011. FaceReader 4 Emotion Classification Performance on Images from the Radboud Faces Database. Unpublished manuscript, Department of Social and Cultural Psychology, Radboud University Nijmegen, Nijmegen, The Netherlands.
Bollen, J., H. Mao, and X. Zeng. 2011. Twitter Mood Predicts the Stock Market. Journal of Computational Science 2 (1): 1–8.
Borowiecki, K.J. 2017. How Are You, My Dearest Mozart? Well-Being and Creativity of Three Famous Composers Based on Their Letters. Review of Economics and Statistics 99 (4): 591–605.
Carroll, C.D., J.C. Fuhrer, and D.W. Wilcox. 1994. Does Consumer Sentiment Forecast Household Spending? If So, Why? The American Economic Review 84 (5): 1397–1408.
Clark, A.E., S. Flèche, R. Layard, N. Powdthavee, and G. Ward. 2018. The Origins of Happiness: The Science of Well-Being over the Life Course. Princeton: Princeton University Press.
Collins, S., Y. Sun, M. Kosinski, D. Stillwell, and N. Markuzon. 2015. Are You Satisfied with Life? Predicting Satisfaction with Life from Facebook. In International Conference on Social Computing, Behavioral-Cultural Modeling, and Prediction, 24–33. Springer.
Croy, I., and T. Hummel. 2017. Olfaction as a Marker for Depression. Journal of Neurology 264 (4): 631–638.
Danescu-Niculescu-Mizil, C., L. Lee, B. Pang, and J. Kleinberg. 2012. Echoes of Power: Language Effects and Power Differences in Social Interaction. In Proceedings of the 21st International Conference on World Wide Web, 699–708. ACM.
Deaton, A. 2008. Income, Health, and Well-Being Around the World: Evidence from the Gallup World Poll. Journal of Economic Perspectives 22 (2): 53–72.
Fisman, R., and S.-J. Wei. 2004. Tax Rates and Tax Evasion: Evidence from Missing Imports in China. Journal of Political Economy 112 (2): 471–496.
Foster, G. (ed.). 2019. Biophysical Measurement in Experimental Social Science Research. London: Elsevier.
Frijters, P., with G. Foster. 2013. An Economic Theory of Love, Groups, Power and Networks. Cambridge: Cambridge University Press, 431p.
Glaeser, E.L., M.S. Kincaid, and N. Naik. 2018. Computer Vision and Real Estate: Do Looks Matter and Do Incentives Determine Looks. Technical Report, National Bureau of Economic Research.
Goldfarb, A., and C.E. Tucker. 2011. Privacy Regulation and Online Advertising. Management Science 57 (1): 57–71.
Goldfarb, A., and C. Tucker. 2012. Shifts in Privacy Concerns. American Economic Review 102 (3): 349–353.
Hedman, E., L. Miller, S. Schoen, D. Nielsen, M. Goodwin, and R. Picard. 2012. Measuring Autonomic Arousal During Therapy. In Proceedings of Design and Emotion, 11–14. Citeseer.
Helliwell, J.F., H. Huang, and S. Wang. 2016. New Evidence on Trust and Well-Being. Technical Report, National Bureau of Economic Research.
Helliwell, J.F., H. Huang, S. Wang, and H. Shiplett. 2018. International Migration and World Happiness. Chapter 2 of the World Happiness Report 2018.
Hildebrandt, M. 2006. Profiling: From Data to Knowledge. Datenschutz und Datensicherheit - DuD 30 (9): 548–552.
Hills, T., E. Proto, and D. Sgroi. 2017. Historical Analysis of National Subjective Wellbeing Using Millions of Digitized Books. Working Paper.
Jianglin, L. 2017. Estimation of the Scale of China's Tax Evasion Caused by Hidden Economy Based on Revised Cash Ratio Model. Journal of Hefei University of Technology (Social Sciences) 3: 4.


Kleinberg, J., J. Ludwig, S. Mullainathan, and Z. Obermeyer. 2015. Prediction Policy Problems. American Economic Review 105 (5): 491–495.
Kosinski, M., D. Stillwell, and T. Graepel. 2013. Private Traits and Attributes Are Predictable from Digital Records of Human Behavior. Proceedings of the National Academy of Sciences 110 (15): 5802–5805.
Liu, B. 2012. Sentiment Analysis and Opinion Mining. Synthesis Lectures on Human Language Technologies 5 (1): 1–167.
Liu, P., W. Tov, M. Kosinski, D.J. Stillwell, and L. Qiu. 2015. Do Facebook Status Updates Reflect Subjective Well-Being? Cyberpsychology, Behavior, and Social Networking 18 (7): 373–379.
Luca, M., and G. Zervas. 2016. Fake It Till You Make It: Reputation, Competition, and Yelp Review Fraud. Management Science 62 (12): 3412–3427.
Neumann, N., C.E. Tucker, and T. Whitfield. 2018. How Effective Is Black-Box Digital Consumer Profiling and Audience Delivery? Evidence from Field Studies. June 25, 2018. Available at SSRN: https://ssrn.com/abstract=3203131 or https://doi.org/10.2139/ssrn.3203131.
Proserpio, D., W. Xu, and G. Zervas. 2018. You Get What You Give: Theory and Evidence of Reciprocity in the Sharing Economy. Quantitative Marketing and Economics 16 (4): 371–407.
Ratti, C., D. Frenchman, R.M. Pulselli, and S. Williams. 2006. Mobile Landscapes: Using Location Data from Cell Phones for Urban Analysis. Environment and Planning B: Planning and Design 33 (5): 727–748.
Schwartz, H.A., J.C. Eichstaedt, M.L. Kern, L. Dziurzynski, R.E. Lucas, M. Agrawal, G.J. Park, S.K. Lakshmikanth, S. Jha, M.E. Seligman, et al. 2013. Characterizing Geographic Variation in Well-Being Using Tweets. In ICWSM, 583–591.
Schwartz, H.A., M. Sap, M.L. Kern, J.C. Eichstaedt, A. Kapelner, M. Agrawal, E. Blanco, L. Dziurzynski, G. Park, D. Stillwell, et al. 2016. Predicting Individual Well-Being Through the Language of Social Media. In Biocomputing 2016: Proceedings of the Pacific Symposium, 516–527. World Scientific.
Senik, C. 2014. The French Unhappiness Puzzle: The Cultural Dimension of Happiness. Journal of Economic Behavior & Organization 106: 379–401.
Seresinhe, C.I., T. Preis, and H.S. Moat. 2017. Using Deep Learning to Quantify the Beauty of Outdoor Places. Royal Society Open Science 4 (7): 170170.
Smith, L., S. Giorgi, R. Solanki, J. Eichstaedt, H.A. Schwartz, M. Abdul-Mageed, A. Buffone, and L. Ungar. 2016. Does Well-Being Translate on Twitter? In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2042–2047.
Tanner, A. 2017. Our Bodies, Our Data: How Companies Make Billions Selling Our Medical Records. Boston: Beacon Press.
Varian, H.R. 2014. Big Data: New Tricks for Econometrics. Journal of Economic Perspectives 28 (2): 3–28.
Xu, X., R.W. McGorry, L.-S. Chou, J.-h. Lin, and C.-c. Chang. 2015. Accuracy of the Microsoft Kinect for Measuring Gait Parameters During Treadmill Walking. Gait & Posture 42 (2): 145–151.

Clement Bellet is an Assistant Professor in Behavioural Economics and Quantitative Marketing at Erasmus University, Rotterdam. His work studies how inequality and social norms affect consumer choices and individual wellbeing. More recently, he has also looked at the impact of non-monetary incentives on performance outcomes. To answer these questions, he relies on large-scale consumer surveys, field and online experiments, scanner data, proprietary private-sector data, and online datasets collected using web-scraping techniques (i.e. 'Big Data') or made readily available (e.g. search and social media data). He previously held a postdoctoral post at the London School of Economics and completed his PhD in economics at Sciences Po Paris in 2017.


Research Interests: Behavioural Economics, Quantitative Marketing, Subjective Well-Being, Inequality and Poverty. [email protected]

Paul Frijters is a Professor of Wellbeing Economics at the London School of Economics, from 2016 to Nov. 2019 at the Centre for Economic Performance and thereafter at the Department of Social Policy. He completed his Master's degree in Econometrics at the University of Groningen, including a 7-month stay in Durban, South Africa, before completing a PhD at the University of Amsterdam. He has also engaged in teaching and research at the University of Melbourne, the Australian National University, QUT, UQ and now the LSE. Professor Frijters specialises in applied micro-econometrics, including labour, happiness and health economics, though he has also worked on purely theoretical topics in macro and micro fields. His main area of interest is analysing how socio-economic variables affect the human life experience, and the 'unanswerable' economic mysteries in life. Professor Frijters is a prominent research economist and has published over 150 papers in fields including unemployment policy, discrimination and economic development. He was the Research Director of the Rumici Project, a project sponsored by the Australian Ministry of Foreign Aid (AusAid), and is also a Co-editor of the journal Economic Record. In 2009 he was voted Australia's best young economist under 40 by the Australian Economic Society. [email protected]

Chapter 10

The Implications of Embodied Artificial Intelligence in Mental Healthcare for Digital Wellbeing

Amelia Fiske, Peter Henningsen, and Alena Buyx

Abstract  Embodied artificial intelligence (AI) has increasing clinical relevance for therapeutic applications in mental health services. Artificially intelligent virtual and robotic agents are progressively conducting more high-level therapeutic interventions that used to be offered solely by highly trained, skilled health professionals. Such interventions, ranging from 'virtual psychotherapists' to social robots in dementia and autism care, to robots for sexual disorders, carry with them the hope of improving quality of care and controlling expenditure, as well as of reaching underserved populations in need of mental health services and improving life opportunities for vulnerable groups. However, there is a persistent gap between current, rapid developments in AI mental health and the successful adoption of these tools into clinical environments by health professionals and patients. In addition, interventions are often designed without any explicit ethical considerations. At present, the quality of research on embodied AI in psychiatry, psychology, and psychotherapy is varied, and there is a marked need for more robust studies, including randomized controlled trials on the benefits and potential harms of current and future applications. While embodied AI is a promising approach across the field of mental health, continued research will be necessary to address the broader ethical and societal concerns of these technologies, and to identify best research and medical practices in this innovative field of mental health care.

Keywords  Embodied artificial intelligence · Robotics · Mental health · Medicine · Healthcare

A. Fiske (*) · A. Buyx Institute for History and Ethics of Medicine, Technical University of Munich School of Medicine, Technical University of Munich, Munich, Germany e-mail: [email protected] P. Henningsen Department of Psychosomatic Medicine and Psychotherapy, Klinikum rechts der Isar at Technical University of Munich, Munich, Germany © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020 C. Burr, L. Floridi (eds.), Ethics of Digital Well-Being, Philosophical Studies Series 140, https://doi.org/10.1007/978-3-030-50585-1_10


10.1 Introduction

Embodied artificial intelligence (AI) is gaining traction for applied clinical uses in psychiatry, psychology and psychotherapy. Embodied AI employs machine-learning algorithms that respond to the client or patient, independently of any expert human guidance, through a physically embodied presence such as a robotic interface. The application of embodied AI offers new opportunities to increase and improve the provision of mental health care, ranging from advances such as 'virtual psychotherapists' (Martinez-Martin and Kreitmair 2018) to social robots in dementia and autism care (Góngora Alonso et al. 2018). More than simply providing clinical support, artificially intelligent virtual and robotic agents are capable of performing higher-level therapeutic interventions that previously could only be conducted by trained health professionals (Inkster et al. 2018). 'Virtual' or 'robotic therapists' differ from online therapy, or from patient-led work with manuals, questionnaires, or other self-help materials, because they operate independently of human input (Mehrotra et al. 2017). From improving quality of care and controlling expenditure (Cresswell et al. 2018), to reaching underserved populations in need of mental health services, to improving life opportunities for vulnerable groups, embodied AI has much to offer mental health care. However, as of yet, many applications are designed without explicit ethical considerations (Ienca et al. 2018), and they have social implications that require further research. With the aim of advancing critical consideration of digital wellbeing in clinical applications of embodied AI, we explore the societal and ethical implications of virtually and physically embodied artificially intelligent agents and applications, in relation to pertinent concerns surrounding trust, privacy, and autonomy in the provision of mental health care.¹

10.2 Existing Embodied Intelligent Applications and the Need for Ethical Guidelines

What if, instead of meeting weekly with a therapist, a patient could have a therapeutic encounter with only a smartphone or computer whenever they needed to, whether at home or on the go? A variety of virtually embodied artificially intelligent agents, from therapeutic apps and chatbots to avatars, are being integrated into mental health care. Such applications are able to engage a patient like a virtual psychotherapist, whether working with individuals to recognize their emotions and thought patterns, to develop skills such as resilience, or to learn new techniques for reducing anxiety. Examples include programs such as Tess (n.d.) and chatbots like Sara (n.d.), Wysa (n.d.), or Woebot (n.d.), some of which are programmed to raise alerts when they detect emotional distress. Other applications, such as the Avatar Project, work with patients with psychosis who experience persistent auditory hallucinations (Craig et al. 2018), while still other programs provide virtual-reality-assisted therapy for conditions such as PTSD and schizophrenia, or for suicide risk prevention (Rein et al. 2018; Lucas et al. 2017).

Not limited to virtual interfaces, AI-enabled therapy also takes physical forms, such as animal-like robots (e.g. Paro, the fuzzy harp seal) and 'companion bots' (e.g. eBear; Mahoor n.d.) that help patients with dementia or depression, or assist in homecare. Currently, researchers are investigating the possibility of companion robots for reducing stress, loneliness, and agitation, and for improving mood and social connections (Wada and Shibata 2007; Yu et al. 2015). Work at the interface of AI and robotics is also being translated into the clinic to assist children with autism spectrum disorders (e.g. Kaspar, RoboTherapy) (Kaspar n.d.; Roboticmagazine 2017; Grossard et al. 2018), as well as to address other mental health concerns, such as mood and anxiety disorders or disruptive behavior in children (Rabbitt et al. 2015). Perhaps most controversially in this realm, artificially intelligent robots also include adult sex robots (e.g. Roxxxy), which have been discussed as a treatment for sexual disorders (Torjesen 2017).

While AI-enabled virtual and robot therapy has been used in a range of medical fields (Calderita et al. 2014; Broadbent 2017; Liu et al. 2018), in the area of mental health it is relatively novel. Therapeutic chatbots, avatars, socially assistive devices, and sex robots have not yet been broadly integrated into clinical use; however, it is likely that some of these applications will soon translate into wider use. Initial ethical assessments have been conducted in some cases (Vandemeulebroucke et al. 2018; Coeckelbergh et al. 2016); however, most of these studies focus on a single application.

¹ The following analysis builds on previous work by Fiske et al. (2019a).
In general, the field lacks large-scale, rigorous research studies on embodied AI applications in the clinic (Riek 2015; Piette et al. 2016; Suganuma et al. 2018; Provoost et al. 2017), pointing to the need for greater investigation of patient acceptance and of the contingent treatment outcomes in mental health fields. While there is an emergent field of work developing ethical and governance frameworks for the integration of AI into society (Floridi et al. 2018), at this point there is a lack of specific guidance for the area of mental health services (Henson et al. 2019; Leigh and Ashall-Payne 2019; Ferretti et al. 2019). The field is developing quickly, and embodied AI and robotic applications offer real benefits, such as the possibility of increasing the availability of mental health services in places with limited resources, or for populations with unmet needs. Thus, it is clear that rigorous and comprehensive research is necessary in order to address the ethical and social concerns surrounding the use of embodied artificial intelligence in the field of mental health. Specifically, such research is needed to flag areas of concern early on, so that ethical considerations can be incorporated into the design and construction of the next generation of AI agents and robots for mental health. In the remainder of this chapter, we outline potential benefits, ethical concerns, and challenges in clinical application for the use of embodied AI across mental health fields (Fiske et al. 2019a).


10.3 Anticipated Benefits

Embodied AI applications and robotics have the potential to improve digital wellbeing and bring significant benefits to the practice of mental health care. In particular, embodied AI programs offer the clear advantage of providing opportunities for intervention in areas with large unmet mental health needs. This could include providing services to high-risk groups such as veterans, or to individuals who fear the stigmatization associated with seeking out psychotherapeutic services (Stix 2018). Given that one of the major advantages of embodied AI applications is their accessibility, it is also possible that intelligent applications could identify mental health concerns earlier than traditional methods of care, and lower barriers to treatment by providing a first step for some individuals before they reach out for more comprehensive services.

Embodied AI in mental health could also work to improve trust and openness between patients and the medical system, for instance by empowering patient groups that have less familiarity with the medical system or that encounter barriers to accessing treatment. In this sense, one of the foreseeably greatest benefits of AI applications is structural: they offer a new means of providing care for populations that are difficult to treat via traditional means. Intelligent applications like chatbots and avatars are low-threshold and convenient, and may be particularly beneficial for resource-poor or rural populations. This also includes individuals whose insurance does not cover mental health therapy, or who encounter other institutional or societal barriers to treatment. Further, some patients may benefit from the increased availability of virtual or robotic therapists. Unlike human therapists, intelligent applications have unlimited time and patience, always remember what a patient has reported, and are non-judgmental (Gionet 2018; Silva et al. 2018).
Thus, AI applications could complement current services or, in some cases, offer an entry point to care for those individuals who might be interested in more traditional clinical interventions in the future. Conceivably, intelligent applications could be integrated into a scaled provision of services, with AI-enabled applications providing support for mild cases of depression and other non-acute conditions (Schröder et al. 2017). Such 'blended' models (Wentzel et al. 2016) could enable health professionals to devote more time to severe or more complicated cases.

10.4 Overarching Ethical Concerns

While embodied AI applications and robotics have many potential benefits, they also raise significant ethical concerns that require further attention. One significant area of concern relates to harm prevention. Robust studies are necessary for ensuring non-maleficence in therapeutic encounters, and for anticipating how to address cases in which robots malfunction or operate in unexpected ways (Cresswell et al. 2018). While other medical devices are subject to rigorous risk assessment and regulatory oversight prior to their approval for clinical use, it remains open to debate


whether embodied AI devices, potentially including virtual agents and freely available mental health applications, should be subject to the same kind of scrutiny.

AI applications in mental health care also require special consideration surrounding data ethics. As with any other medical device, the security of personal health information, how the data are used, and the potential for hacking and non-authorized monitoring (Deutscher Ethikrat 2017; Nuffield Council on Bioethics 2014) all need to be addressed. The data generated through the use of intelligent virtual agents and assistive robots need to be subject to clear standards, with matters of confidentiality, information privacy, and secure data management spelled out. In addition, guidelines for handling information gained through monitoring habits, movement, and other interactions need to be developed for developers, health professionals and users (Feil-Seifer and Matarić 2011; Körtner 2016; Fiske et al. 2019b).

In part because embodied AI is one of the most quickly developing areas within psychological and psychiatric research and treatment, there is a lack of guidance on development, clinical integration and training (Fiske et al. 2019b). Existing legal and ethical frameworks are often not closely attuned to these changes, thus running the risk that they do not provide sufficient regulatory guidance. For instance, where 'gaps' exist between the use of particular applications and existing ethical frameworks, harm might only be addressed retroactively (Cresswell et al. 2018). It is of course difficult to foresee the ethical and legal questions raised by current and future AI developments; intentional discussion of, and reflection on, the relationship between current regulation and changes in embodied mental health AI is needed so that emerging insights can be incorporated into AI design and development.
This includes the development of higher-level guidance from professional councils in the mental health fields on how medical professionals might best develop skills for the use of embodied AI in the clinic (Luxton 2014, 2015; Oliveira et al. 2014; Fulmer 2018).

As AI is integrated into mental health care, it is important to attend to the ways that intelligent applications and robotics may change the existing landscape of services. With an eye to the just provision of care, there is the possibility that the availability of embodied AI options could be used to justify the elimination of existing services, resulting in either fewer mental health care service options or predominantly AI-driven options. If this were to occur, it could aggravate existing health inequalities. As such, it remains critical to consider the integration of AI contextually, in relation to the availability of other mental health care resources. In their current stage of development, embodied AI and robotics cannot be a substitute or replacement for robust, multi-tiered mental health care. Thus, it remains important to ensure that the integration of AI services does not become a pretext for eliminating high-quality, multi-layered care by trained mental health professionals.

10.5 Specific Challenges in Application

As more therapeutic AI and robotic options emerge, specific challenges will arise in relation to their clinical application (Fiske et al. 2019a). Many of these challenges relate to matters of professional supervision and risk assessment.


For instance, mental health professionals are ethically mandated to inform other service providers and authorities (when appropriate) when a patient indicates that she poses a risk to herself or to another person. It is not yet clear how algorithmically informed programs would fulfil this duty, given that patients can use embodied AI applications entirely outside of professional supervision. Likewise, given that many assistive robotic programs operate in the private spaces of people's lives, such as their homes, it has yet to be sufficiently addressed how robotic devices or virtual agents would help patients access higher-level services such as hospitalization or self-harm protection. Current proposals and calls for ethical oversight do not provide sufficient guidance on such practice-related questions (Henson et al. 2019; Leigh and Ashall-Payne 2019; Ferretti et al. 2019). Given the sensitive realms in which embodied AI applications and robotics operate, we anticipate that ethical guidelines, like those that inform the work of practitioners, will be necessary. Yet what an AI duty of care or code of practice might include remains uncertain, as does how such guidelines could be put into practice when AI applications are designed to be used outside of healthcare settings.

A central ethical concern revolves around how embodied AI applications in mental health care will fulfill obligations surrounding patient autonomy (Beauchamp and Childress 2012).² For example, given that many AI-driven robots are designed to work with the elderly or with individuals with disabilities, how can one ensure that patients fully understand how a given application or avatar works? Individuals respond differently to robots than they do to other humans, and can even be more compliant in responding to robots (Broadbent 2017). While this may be an advantage in some cases, such as helping patients to make necessary behavioral changes or to report traumatic experiences (Lucas et al.
2017), it also raises important concerns around manipulation and coercion. How the autonomy of individuals using embodied AI and robotics will be protected is thus a fundamental concern for applications that are designed to be used outside of medical supervision, in particular for groups, such as the elderly or those with intellectual disabilities, who could be particularly vulnerable to privacy infringement, manipulation, and even coercion (Vandemeulebroucke et al. 2018).

While it appears that embodied AI may offer important advances in some areas, such as the improved identification of indicators of psychosis through speech analysis (Bedi et al. 2015), there may be important aspects of the therapeutic encounter with another individual that are not replicable through technological mediators. For instance, for some individuals the relationship with a specific therapist may be critical to their progress, or there may be other therapeutic benefits of person-to-person interaction that are hard to anticipate. Just as therapists learn techniques to manage emotional transference, there is a similar concern that embodied AI programs could have complex effects on patients. Since many of the populations that

² See Calvo et al. (Chap. 2, this collection) for a more general discussion about the relationship between AI agents and user autonomy.


could benefit from AI applications are vulnerable due to their illness, age, or living situation in a health care facility, specific protections are needed in order to promote healthy AI relationships (Johnston 2015).

As has been demonstrated with the use of facial recognition software and data analytics, the use of algorithms raises important ethical concerns, including biases that can have the effect of exacerbating existing social inequalities (Tett 2018). AI mental health interventions are driven by algorithms that could unintentionally exclude or harm particular patient populations. This includes data-driven sexist or racist bias, as well as bias produced by competing goals or endpoints of devices (Corea 2017; Hammond 2016). In line with other calls for algorithmic transparency (Powles 2017), artificially intelligent applications for mental health purposes should also be open to public scrutiny.

Embodied AI and robotics operate within specific cultural understandings surrounding the role of technology in health care and society. For example, discussion of embodied AI often turns to worries surrounding the limits of human control over technology, invoking, for instance, science fiction depictions of the nonhuman. Embodied AI thus does not operate on a blank slate, but rather carries with it particular cultural associations that bear on matters of trust in medical practice (Cresswell et al. 2018). At present, the implementation of embodied AI in mental health care is still in its initial stages. Looking forward, the continued and more widespread use of artificial intelligence in clinical care raises questions about the long-term impacts this may have on patients, the broader mental health community, and society. The use of robotic aids and applications could foreseeably change understandings of care and the social value placed on caring professions, and raise debates surrounding the 'outsourcing' of care to robotics.
Matters of trust between patient and provider, or between patient and the health care system, will need to be carefully negotiated so that they are not eroded through the use of embodied AI and robotics.

It is clear that intelligent robotics have the potential to significantly affect human relationships. While the focus of this discussion has been on AI and robotics, these matters raise fundamental questions about what it is to be human (Cresswell et al. 2018). As science and technology studies scholars have shown, the relationships we form with objects can alter, transform, and limit human relations as well (Latour and Woolgar 1986). Engaging with an algorithm-driven therapist can alter an individual's behaviors, ways of viewing the world, or relationships with others. Yet, unlike human-human relationships, human-robot relationships are not symmetrical or mutual, running the risk that, for instance, some patients could become too attached to AI interventions (Cresswell et al. 2018), or experience unforeseen interpersonal effects. Further repercussions for identity, agency, and self-consciousness in individual patients remain to be seen and will require further investigation. Thus, it is likely that as more intelligent and autonomous devices are developed, human relationships with them will become even more complicated (Dodig Crnkovic and Çürüklü 2012, 63).


10.6 Conclusions

There are clear benefits and potential advances in the use of embodied AI and robotics in mental health care, whether extending services to underserved populations or enhancing existing services under professional supervision. Yet, as this discussion has shown, the quality of existing research on embodied AI in psychiatry, psychology, and psychotherapy is varied. More robust studies, including randomized controlled trials on the benefits and potential risks of current and future applications, are needed in order to carefully direct the future integration of this technology into high-level care. In particular, this research should drive the creation of specific guidance for the appropriate use of AI in the mental health field. This should include guidelines on: whether embodied AI applications should be subject to standard health technology assessment and require regulatory approval; a broad set of provisions for how AI applications should be provided outside the supervision of health care professionals; professional guidelines specific to mental health practitioners on the best use of AI in clinical practice; and recommendations on how the next generation of mental health practitioners can be better prepared for the widespread use of embodied AI in mental health, including blended care models.

Specific professional guidance is necessary in order to address the significant ethical concerns raised by the use of AI. This includes the possibility that the increased use of embodied AI could reduce the provision of high-quality care by trained mental health professionals. As such, we have argued that AI tools in mental health should be treated as an additional resource in mental health services. Specific provisions are needed in order to satisfy duties of care and the reporting of harm; ideally, embodied AI should remain under the supervision of trained mental health professionals.
As more applications are integrated, both in and outside of the clinic, the resulting changes to the availability and use of existing mental health care services will need to be assessed. Applications that are intended for use outside of professional supervision, such as apps and bots, should be designed with mechanisms for reliable risk assessment and for referral to higher-level services when appropriate. The ethical concerns raised by the use of AI in healthcare are important and need to be addressed in a transparent way. This includes the possibility of scrutinizing algorithms for concerns such as bias, with opportunities for open public debate and input. Ideally, this would also include training for health professionals on how to communicate with their patients about the role of the algorithms used in different applications. Clarity around when and how informed consent needs to be obtained, as well as best practices for addressing matters of vulnerability, manipulation, coercion, and privacy with patients, are important steps towards a transparent engagement with AI in mental health care. Finally, while much research attention is being paid to the development of better AI tools and robotic aids, parallel investigations need to engage with the direct and indirect effects of AI on the therapeutic relationship, on other human-human relationships, and on individual self-consciousness, agency and identity.


Longer-term effects, ranging from health reductionism to increased objectification and impacts on our understandings of what it means to be human, should also be studied and addressed. This work will help to ensure that embodied AI and robotics in the area of mental health care can help promote the creation of a more digitally just society.

References

Beauchamp, Tom L., and James F. Childress. 2012. Principles of Biomedical Ethics. 7th ed. New York: Oxford University Press.

Bedi, Gillinder, Facundo Carrillo, Guillermo A. Cecchi, Diego Fernández Slezak, Mariano Sigman, Natália B. Mota, Sidarta Ribeiro, Daniel C. Javitt, Mauro Copelli, and Cheryl M. Corcoran. 2015. Automated Analysis of Free Speech Predicts Psychosis Onset in High-Risk Youths. NPJ Schizophrenia 1 (August): 15030. https://doi.org/10.1038/npjschz.2015.30.

Broadbent, Elizabeth. 2017. Interactions With Robots: The Truths We Reveal About Ourselves. Annual Review of Psychology 68 (1): 627–652. https://doi.org/10.1146/annurev-psych-010416-043958.

Calderita, Luis Vicente, Luis J. Manso, Pablo Bustos, Cristina Suárez-Mejías, Fernando Fernández, and Antonio Bandera. 2014. THERAPIST: Towards an Autonomous Socially Interactive Robot for Motor and Neurorehabilitation Therapies for Children. JMIR Rehabilitation and Assistive Technologies 1 (1): e1. https://doi.org/10.2196/rehab.3151.

Coeckelbergh, Mark, Cristina Pop, Ramona Simut, Andreea Peca, Sebastian Pintea, Daniel David, and Bram Vanderborght. 2016. A Survey of Expectations About the Role of Robots in Robot-Assisted Therapy for Children with ASD: Ethical Acceptability, Trust, Sociability, Appearance, and Attachment. Science and Engineering Ethics 22 (1): 47–65. https://doi.org/10.1007/s11948-015-9649-x.

Corea, Francesco. 2017. Machine Ethics and Artificial Moral Agents. Francesco Corea (blog). July 6, 2017. https://medium.com/@Francesco_AI/machine-ethics-and-artificial-moral-agents-85ad6b71d40b.

Craig, Tom K.J., Mar Rus-Calafell, Thomas Ward, Julian P. Leff, Mark Huckvale, Elizabeth Howarth, Richard Emsley, and Philippa A. Garety. 2018. AVATAR Therapy for Auditory Verbal Hallucinations in People with Psychosis: A Single-Blind, Randomised Controlled Trial. The Lancet Psychiatry 5 (1): 31–40. https://doi.org/10.1016/S2215-0366(17)30427-3.
Cresswell, Kathrin, Sarah Cunningham-Burley, and Aziz Sheikh. 2018. Health Care Robotics: Qualitative Exploration of Key Challenges and Future Directions. Journal of Medical Internet Research 20 (7). https://doi.org/10.2196/10410.

Deutscher Ethikrat. 2017. Big Data und Gesundheit – Datensouveränität als informationelle Freiheitsgestaltung. Berlin: Deutscher Ethikrat. http://www.ethikrat.org/dateien/pdf/stellungnahme-big-data-und-gesundheit.pdf.

Dodig Crnkovic, Gordana, and Baran Çürüklü. 2012. Robots: Ethical by Design. Ethics and Information Technology 14 (1): 61–71. https://doi.org/10.1007/s10676-011-9278-2.

Feil-Seifer, D., and M.J. Matarić. 2011. Socially Assistive Robotics. IEEE Robotics Automation Magazine 18 (1): 24–31. https://doi.org/10.1109/MRA.2010.940150.

Ferretti, Agata, Elettra Ronchi, and Effy Vayena. 2019. From Principles to Practice: Benchmarking Government Guidance on Health Apps. The Lancet Digital Health 1 (2): e55–e57.

Fiske, Amelia, Peter Henningsen, and Alena Buyx. 2019a. Your Robot Therapist Will See You Now: Ethical Implications of Embodied Artificial Intelligence in Psychiatry, Psychology, and Psychotherapy. Journal of Medical Internet Research 21 (5): e13216.

Fiske, Amelia, Barbara Prainsack, and Alena Buyx. 2019b. Data Work: Meaning-Making in the Era of Data-Rich Medicine. Journal of Medical Internet Research 21 (7): e11672.


Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, et al. 2018. An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines 28 (4): 689–707.

Fulmer, Russell. 2018. 5 Reasons Why Artificial Intelligence Won't Replace Physicians. The Medical Futurist. May 24, 2018. https://medicalfuturist.com/5-reasons-artificial-intelligence-wont-replace-physicians.

Gionet, Kylie. 2018. Meet Tess: The Mental Health Chatbot That Thinks like a Therapist. The Guardian, April 25, 2018, sec. Society. https://www.theguardian.com/society/2018/apr/25/meet-tess-the-mental-health-chatbot-that-thinks-like-a-therapist.

Góngora Alonso, Susel, Sofiane Hamrioui, Isabel de la Torre Díez, Eduardo Motta Cruz, Miguel López-Coronado, and Manuel Franco. 2018. Social Robots for People with Aging and Dementia: A Systematic Review of Literature. Telemedicine and E-Health 25 (7): 533–540. https://doi.org/10.1089/tmj.2018.0051.

Grossard, Charline, Giuseppe Palestra, Jean Xavier, Mohamed Chetouani, Ouriel Grynszpan, and David Cohen. 2018. ICT and Autism Care: State of the Art. Current Opinion in Psychiatry 31 (6): 474–483. https://doi.org/10.1097/YCO.0000000000000455.

Hammond, Kristian. 2016. 5 Unexpected Sources of Bias in Artificial Intelligence. TechCrunch (blog). 2016. http://social.techcrunch.com/2016/12/10/5-unexpected-sources-of-bias-in-artificial-intelligence/.

Henson, Philip, Gary David, Karen Albright, and John Torous. 2019. Deriving a Practical Framework for the Evaluation of Health Apps. The Lancet Digital Health 1 (2): e52–e54.

Ienca, Marcello, Tenzin Wangmo, Fabrice Jotterand, Reto W. Kressig, and Bernice Elger. 2018. Ethical Design of Intelligent Assistive Technologies for Dementia: A Descriptive Review. Science and Engineering Ethics 24 (4): 1035–1055. https://doi.org/10.1007/s11948-017-9976-1.

Inkster, Becky, Shubhankar Sarda, and Vinod Subramanian. 2018.
An Empathy-Driven, Conversational Artificial Intelligence Agent (Wysa) for Digital Mental Well-Being: Real-­ World Data Evaluation Mixed-Methods Study. JMIR mHealth and uHealth 6 (11): e12106. https://doi.org/10.2196/12106. Johnston, Angela. 2015. Robotic Seals Comfort Dementia Patients but Raise Ethical Concerns. KALW Local Public Radio. August 17, 2015. http://www.kalw.org/post/ robotic-seals-comfort-dementia-patients-raise-ethical-concerns. Kaspar. n.d. Kaspar the Social Robot. http://www.herts.ac.uk/kaspar/the-social-robot. Accessed 11 Nov 2019. Körtner, T. 2016. Ethical Challenges in the Use of Social Service Robots for Elderly People. Zeitschrift für Gerontologie und Geriatrie 49 (4): 303–307. https://doi.org/10.1007/ s00391-016-1066-5. Latour, Bruno, and Steve Woolgar. 1986. Laboratory Life. Princeton: Princeton University Press. Leigh, Simon, and Liz Ashall-Payne. 2019. The Role of Health-Care Providers in MHealth Adoption. The Lancet Digital Health 1 (2): e58–e59. Liu, Chaoyuan, Xianling Liu, Fang Wu, Mingxuan Xie, Yeqian Feng, and Hu Chunhong. 2018. Using Artificial Intelligence (Watson for Oncology) for Treatment Recommendations Amongst Chinese Patients with Lung Cancer: Feasibility Study. Journal of Medical Internet Research 20 (9): e11087. https://doi.org/10.2196/11087. Lucas, Gale M., Albert Rizzo, Jonathan Gratch, Stefan Scherer, Giota Stratou, Jill Boberg, and Louis-Philippe Morency. 2017. Reporting Mental Health Symptoms: Breaking Down Barriers to Care with Virtual Human Interviewers. Frontiers in Robotics and AI 4. https://doi. org/10.3389/frobt.2017.00051. Luxton, David D. 2014. Recommendations for the Ethical Use and Design of Artificial Intelligent Care Providers. Artificial Intelligence in Medicine 62 (1): 1–10. https://doi.org/10.1016/j. artmed.2014.06.004. ———. 2015. Artificial Intelligence in Behavioral and Mental Health Care. Amsterdam/Boston: Academic. Mahoor, Mohammad. n.d. Companionbots For Proactive Dialog On Depression. 
http://mohammadmahoor.com/companionbots-for-proactive-dialog-on-depression/. Accessed 11 Nov 2019.

10  The Implications of Embodied Artificial Intelligence in Mental Healthcare…





Amelia Fiske holds a PhD in Cultural Anthropology, with a specialisation in Medical Anthropology, from the University of North Carolina at Chapel Hill.
She situates her research at the intersection of anthropology, feminist science and technology studies, and medical ethics, crosscut by an interest in non-traditional and decolonial approaches to knowledge production. She is a Senior Research Fellow at the Institute for History and Ethics in Medicine at the Technical University of Munich, where her current work examines the integration of citizen science in biomedicine and biotechnology, ethical concerns surrounding artificial intelligence and robotics in clinical contexts, and the broader context of technology-driven changes in sharing practices, forms of scientific labour, and research organisation in medicine and bioscience. In addition, her research addresses the production of harm resulting from oil operations in the Ecuadorian Amazon, with a particular focus on matters of toxicity, exposure science, extractive politics and environmental justice. Research Interests: Medical Anthropology, Bioethics, Digital and Sociotechnical Changes in Knowledge Production, Citizen Science, Bodies and Health, Environmental Justice, Toxicity, Extractive Industries, Feminist Science Studies, Decoloniality, and Latin American Studies. [email protected]

Peter Henningsen is a medical doctor and specialist in psychosomatic medicine, psychotherapy, neurology and psychiatry. He is Head of the Department of Psychosomatic Medicine and Psychotherapy at the University Hospital Rechts der Isar, Technical University Munich, and was Dean of the Faculty of Medicine there from 2010 to 2019. Previously, he held posts at the Universities of Berlin and Heidelberg. His major research interest is in the field of functional somatic disorders, i.e. patients who suffer from persistent somatic symptoms without clearly defined organic disease. Among other topics, he has coordinated trials of psychotherapy for this group of patients. He is a member of the European research group Euronet-Soma. [email protected]



Alena Buyx is Professor of Ethics in Medicine and Health Technologies and Director of the Institute of History and Ethics in Medicine at Technical University Munich. She has previously held appointments at the University of Kiel, University of Münster, Harvard University and University College London, and she was Assistant Director of the Nuffield Council on Bioethics, London. Professor Buyx is a medical doctor with postgraduate degrees in philosophy and sociology. Her research spans the whole field of biomedical and public health ethics, with a particular focus on ethics of medical innovation and health technologies, research ethics, questions of solidarity and justice in contexts such as public health and health care provision, and novel participatory approaches in biomedicine and beyond. She has expertise in theoretical ethical analysis as well as in empirical, mixed-methods approaches and policy development. She is keen on interdisciplinary approaches and collaborates regularly with clinical colleagues as well as with public health professionals, political and social scientists, philosophers, lawyers, and health economists. As a PI, Professor Buyx has been awarded over 3 million Euros over the last 5 years for dedicated ethics research. Her work is published in high-ranking journals, such as Science, BMJ, GiM and Bioethics. She is an award-winning teacher of medical and life science students, early career researchers, clinicians and health professionals. In addition to research and teaching, Professor Buyx is active in the political and regulatory aspects of biomedical ethics, sitting on a number of high-level national and international ethics bodies concerned with policy development and implementation, and consulting for various international research consortia and policy initiatives. 
She has been a member of the German Ethics Council since 2016 and a member of the WHO Expert Advisory Committee on Developing Global Standards for Governance and Oversight of Human Genome Editing since 2019. [email protected]

Chapter 11

Causal Network Accounts of Ill-Being: Depression & Digital Well-Being

Nick Byrd

Abstract  Depression is a common and devastating instance of ill-being which deserves an account. Moreover, the ill-being of depression is impacted by digital technology: some uses of digital technology increase such ill-being while other uses of digital technology increase well-being. So a good account of ill-being would explicate the antecedents of depressive symptoms and their relief, digitally and otherwise. This paper borrows a causal network account of well-being and applies it to ill-being, particularly depression. Causal networks are found to provide a principled, coherent, intuitively plausible, and empirically adequate account of cases of depression in everyday and digital contexts. Causal network accounts of ill-being also offer philosophical, scientific, and practical utility. Insofar as other accounts of ill-being cannot offer these advantages, we should prefer causal network accounts of ill-being.

Keywords  Ethics · Digital well-being · Philosophy of science · Causation · Ill-being · Depression

11.1  Introduction

Depression is not uncommon. Estimates suggest that major depressive disorder (MDD) affects more than 272 million people worldwide (Baxter et al. 2014, pp. 509–510; US Census Bureau 2011). And the ill-being of depression is not an isolated phenomenon; many other forms of ill-being are comorbid with depression (Avenevoli et al. 2015). So there is plenty of reason to understand instances of ill-being like depression. However, ill-being is complicated, making it difficult to capture the phenomenon with a single account (Busseri and Mise 2019).

N. Byrd (*)
Stevens Institute of Technology, Hoboken, NJ, USA

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020
C. Burr, L. Floridi (eds.), Ethics of Digital Well-Being, Philosophical Studies Series 140, https://doi.org/10.1007/978-3-030-50585-1_11




This paper proposes that causal networks can account for ill-being. It will focus mainly on the instance of depression and digital technology’s role therein. The resources for this account of ill-being are borrowed from existing causal network accounts of well-being (Bishop 2012, 2015). The present attempt will not amount to a complete account of ill-being, but it provides both a framework for a more complete account and motivation to pursue it. Further, insofar as causal network accounts of ill-being complement existing causal network accounts of well-being, the present causal network account can be instrumental in a comprehensive account of welfare, digital and otherwise.

11.2  Causal Networks

Causal networks have a few parts: nodes, relationships, and fragments. Further, one and the same effect can be produced by different causal networks. So a causal network account of ill-being will explain ill-being as a multiply realizable phenomenon that is realized by—among other things—nodes, relationships, and fragments.

11.2.1  Parts of Networks

11.2.1.1  Nodes

A node represents a single variable which has some causal relationship(s) with other variables in a network. One variable that seems to be causally related to ill-being is socio-economic status (SES) (Headey et al. 1984, 1985). So SES might be a node in ill-being networks.

11.2.1.2  Relationships

Nodes are connected to one or more other nodes in a network—these connections are often called “edges” in the literature on causal modeling (e.g., Scheines et al. 1998). Nodes can have two kinds of connection with each other: promotional connections and inhibitory connections. Firstly, a node is in a promotional relationship with another node just in case an increase in the coefficient of the one node causes an increase in the coefficient of the other node. The promotional relationship is represented by the line ending with an arrow, as in Fig. 11.1 (left), which represents an increase in SES causing an increase in self-esteem (ibid.). Secondly, a node is in an inhibitory relationship with another node just in case an increase in the coefficient in one node results in a decrease in the coefficient in the other node. The inhibitory relationship is represented by the line ending with a diamond, as in Fig. 11.1 (right), which represents an increase in face-to-face social contact causing a decrease in loneliness (Kross et al. 2013).

Fig. 11.1  Promotional relationship (left) and inhibitory relationship (right)

11.2.1.3  Fragments

A fragment is a non-complete portion of a network. More precisely, a network fragment refers to two or more nodes (of a network containing more than two nodes) as well as the relationships between these nodes.
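To make the node-and-edge machinery concrete, here is a minimal Python sketch (an illustration of the formalism, not code from the chapter; the class, names, and weights are hypothetical). Nodes hold coefficients, and each edge carries a sign: +1 for a promotional relationship, -1 for an inhibitory one.

```python
# A minimal causal-network sketch: nodes hold coefficients, edges carry a
# sign (+1 promotional, -1 inhibitory) and a weight. Hypothetical example.

class CausalNetwork:
    def __init__(self):
        self.coefficients = {}   # node name -> current value
        self.edges = []          # (source, target, sign, weight)

    def add_node(self, name, value=0.0):
        self.coefficients[name] = value

    def add_edge(self, source, target, sign, weight=1.0):
        assert sign in (+1, -1), "sign is promotional (+1) or inhibitory (-1)"
        self.edges.append((source, target, sign, weight))

    def intervene(self, node, delta):
        """Increase `node` by `delta` and propagate one step to its effects."""
        self.coefficients[node] += delta
        for source, target, sign, weight in self.edges:
            if source == node:
                self.coefficients[target] += sign * weight * delta

# The two fragments of Fig. 11.1:
net = CausalNetwork()
for node in ("SES", "self-esteem", "face-to-face contact", "loneliness"):
    net.add_node(node)
net.add_edge("SES", "self-esteem", +1)                   # promotional
net.add_edge("face-to-face contact", "loneliness", -1)   # inhibitory

net.intervene("SES", 1.0)                   # raises self-esteem
net.intervene("face-to-face contact", 1.0)  # lowers loneliness
print(net.coefficients["self-esteem"], net.coefficients["loneliness"])
```

Running the sketch shows the two fragments of Fig. 11.1 in action: raising SES raises self-esteem, while raising face-to-face contact lowers loneliness.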

11.2.2  Properties of Causal Networks

11.2.2.1  Multiple Realizability

We can distinguish between higher-level and lower-level states and changes. Lower-level states or changes in a network refer to states of, or changes to, the structure and dynamics of a network. Higher-level states or changes in a network refer to states or changes that emerge from lower-level states or changes. For example, lower-level changes in SES, face-to-face social contact, and loneliness will have an impact on higher-level states of ill-being such as self-esteem. In this paper, “ill-being” refers to a higher-level phenomenon that emerges from the states and changes in the structure and dynamics of lower-level networks, which will be called ill-being networks. Notably, higher-level network states can be multiply realizable. That is, two or more different lower-level network states might correspond to one state of ill-being.

11.2.2.2  Individuation

If ill-being is multiply realizable, then individuating different instances of ill-being will sometimes require individuating the differences in the causal networks from which they emerged. Causal networks can be individuated by their structure and their dynamics.

Two networks have different structure when they have different nodes and/or different connections between nodes. For example, a network with three nodes can be individuated from networks containing fewer or more than three nodes. Similarly, two networks with the same nodes can be individuated if they do not share all the same connections between their nodes. So the network fragments above have different structure because they have different nodes.

Two networks have different dynamics when, all else being equal, the relationships between one network’s nodes are non-identical to the relationships between another network’s nodes. For instance, a network that contains only promotional relationships is distinct from a network of the same nodes that contains only inhibitory relationships. So the network fragments above have different dynamics because one has only a promotional relationship and the other has only an inhibitory relationship.
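The two individuation criteria can be expressed as a short sketch (the helper functions are hypothetical, written for this illustration): structure compares which nodes are connected while ignoring the kind of relationship, whereas dynamics also compares whether each relationship is promotional or inhibitory.

```python
# Illustrative helpers for individuating networks. Edges are
# (source, target, sign) triples with sign +1 (promotional) or -1 (inhibitory).

def same_structure(edges_a, edges_b):
    """Same structure: same connections between nodes, ignoring sign."""
    strip = lambda edges: {(s, t) for s, t, _ in edges}
    return strip(edges_a) == strip(edges_b)

def same_dynamics(edges_a, edges_b):
    """Same dynamics: same connections AND same relationship types."""
    return set(edges_a) == set(edges_b)

promotional = [("SES", "self-esteem", +1)]
inhibitory = [("SES", "self-esteem", -1)]
print(same_structure(promotional, inhibitory))  # same nodes and connection
print(same_dynamics(promotional, inhibitory))   # promotional vs inhibitory
```

On this sketch, the two one-edge networks share their structure but differ in their dynamics, mirroring the distinction drawn in the text.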

11.3  A Causal Network Account of Ill-Being

With the basic resources of causal networks, we can begin to explain ill-being. To focus the discussion, we will limit our scope to cases of depression and digital well-being.

11.3.1  Causal Networks and Ill-Being

The first order of business is to point out that the overall dynamic of a causal network can be positive, negative, or neutral (Bishop 2015, p. 41). These dynamics determine whether the network is contributing to well-being or ill-being.

11.3.1.1  Causes of Well-Being and Ill-Being

A positive causal network is a network that inhibits ill-being or contributes to well-being (see also Bishop 2015, pp. 10–11). Conversely, a negative causal network either inhibits well-being or contributes to ill-being.

Now consider ill-being. Ill-being involves many variables: feelings, beliefs, motivations, behaviors, habits, traits, abilities, goals, goal-attainment, resources, and perhaps other variables. A causal network account of ill-being can represent these aspects of ill-being with nodes. Causal networks can also represent the relationships between variables. Ill-being involves many such relationships. For example, financial and social resources can cause changes in one’s feelings and attitudes (Headey et al. 1985, p. 221).



11.3.1.2  Robustness of Well-Being and Ill-Being

Causal networks can be more or less robust—where ‘robust’ refers to the resilience of a higher-level network state in the face of lower-level network changes. One’s ill-being network is more robust insofar as more interventions on (lower-level) states of the ill-being network produce fewer changes to one’s ill-being. Another way to say this is that the more one can change the structure and dynamics of the ill-being network without changing one’s ill-being, the more robust one’s ill-being network.

This notion of robustness implies that there will be (qualitative and quantitative) thresholds such that some threshold-breaking amount of change to the structure and/or dynamics of an ill-being network will result in a change in ill-being. In other words, once a threshold is broken, an otherwise robust negative causal network that was reinforcing ill-being might no longer reinforce ill-being.
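A toy illustration of the threshold idea (the numbers are invented, purely for exposition): the higher-level state only changes once cumulative lower-level change crosses the network's threshold, so the same modest intervention relieves a fragile ill-being network but leaves a robust one untouched.

```python
# Robustness sketch (hypothetical numbers): the higher-level ill-being state
# flips only when cumulative lower-level change breaks the network's threshold.

def higher_level_state(lower_level_change, threshold):
    """Return the emergent state given total lower-level change so far."""
    return "relieved" if lower_level_change >= threshold else "ill-being"

robust_threshold = 5.0    # a robust network tolerates more lower-level change...
fragile_threshold = 1.0   # ...than a fragile one

modest_intervention = 2.0
print(higher_level_state(modest_intervention, robust_threshold))
print(higher_level_state(modest_intervention, fragile_threshold))
```

The same 2.0-unit intervention leaves the robust network's ill-being intact while relieving the fragile one, which is just the multiply realizable, threshold-governed picture described above.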

11.3.2  Depression and Rumination

Some people are more likely than others to believe that their happiness is a function of goal-attainment. Interestingly, this belief is related to numerous other variables like the proclivity to ruminate and the risk of depression (McIntosh 1996; McIntosh et al. 1995; McIntosh and Martin 1992).

For illustration, consider two people: Lincoln and Jordan. Lincoln believes that happiness is linked to goal-attainment. Jordan doesn’t believe this. Like anyone might, when Lincoln or Jordan realize that they have attained a goal, they briefly experience a positive feeling. And when they realize that they have failed to reach a goal, they briefly experience a negative feeling. However, because Lincoln believes that happiness is linked to goal-attainment, whenever Lincoln realizes that they failed to reach a goal, Lincoln ruminates about this failure. And the more Lincoln ruminates, the more Lincoln feels negatively. Only when Lincoln stops ruminating can they continue trying to attain the same goal. So, unless Lincoln disengages or attains the goal, Lincoln might become stuck in a cycle of negative feelings. Jordan, however, is not so prone to ruminate about failing to attain a goal. After all, Jordan doesn’t believe that their happiness is related to attaining or failing to attain goals. So when Jordan fails to attain a goal, the effects are briefer and less negative.

In this example, one’s beliefs and habits turn out to be crucial to ill-being. In this case, Lincoln has a belief about happiness and a habit of ruminating that Jordan doesn’t have. Both are importantly related to other nodes in the ill-being network such that when Lincoln encounters certain scenarios, they are more prone to fall into a negative cycle of rumination and negative feelings (Fig. 11.2). In network terminology, the dynamics of Jordan and Lincoln’s causal networks varied as a function of at least two variables: a particular belief and a particular habit.
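The Lincoln/Jordan contrast can be sketched as a toy simulation (the decay and rumination parameters are invented for illustration, not estimates from McIntosh's studies): both agents receive the same brief negative feeling from a failed goal, but the belief that happiness depends on goal-attainment triggers rumination that keeps re-amplifying that feeling.

```python
# Toy simulation of the Lincoln/Jordan contrast. All parameters are
# illustrative, not empirical estimates.

def negative_affect_after_failure(believes_happiness_is_goal_attainment,
                                  steps=5, initial_feeling=1.0, gain=0.5):
    """Return total negative affect accumulated after a failed goal.

    Without the belief, the initial negative feeling simply decays.
    With it, each step of rumination re-triggers part of the feeling.
    """
    feeling = initial_feeling
    total = feeling
    for _ in range(steps):
        if believes_happiness_is_goal_attainment:
            feeling = feeling * gain + initial_feeling * gain  # rumination re-triggers
        else:
            feeling = feeling * gain                           # feeling just decays
        total += feeling
    return total

lincoln = negative_affect_after_failure(True)
jordan = negative_affect_after_failure(False)
print(lincoln > jordan)  # Lincoln accumulates more negative affect
```

With these toy parameters, Lincoln's negative feeling never decays (rumination keeps topping it back up), whereas Jordan's halves each step, so the same event yields very different total ill-being, just as in the two fragments of Fig. 11.2.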
Fig. 11.2  Two causal network fragments adapted from McIntosh (1996, p. 65, Figure 3.1) illustrating how the same events can produce different levels of well-being or ill-being depending on beliefs

Lincoln and Jordan are not alone. The effect of their beliefs about happiness and goal-attainment on their ill-being seems to be generalizable (McIntosh 1996; McIntosh et al. 1995; McIntosh and Martin 1992). And the effect of their rumination is also generalizable, since it is associated with longer and more severe bouts of depression following stressful circumstances (Beck 1970a, b, 1979a, b, 1991, 1995; Beck and Greenburg 1984; Millar et al. 1988; Nolen-Hoeksema 1991). In fact, rumination has also been posited as the mechanism that explains why some

depressive risk factors result in depression while others do not (Spasojević and Alloy 2001).

From these details about beliefs, goals, rumination, and negative affect, we can begin to see how a causal network might account for the ill-being of certain instances of depression. The causal network can also account for the self-reinforcing nature of certain instances of depression. Further, the causal network accounts for the difference in ill-being between two people who have similar experiences—e.g., similar goal-attainment.

11.3.3  Depression and Learned Helplessness

These self-reinforcing dynamics of negative causal networks are helpful in understanding further features of ill-being like robustness. Consider learned helplessness. Learned helplessness is induced when a person finds themselves in an undesirable or painful circumstance so often that the person loses the motivation to avoid said circumstances (Abramson et al. 1978; Maier and Seligman 1976). Various investigations suggest that learned helplessness is a causal condition for depression (Peterson and Seligman 1984).

To illustrate how a certain negative causal network might lead to learned helplessness and thereby to depression, consider Jonsi. Jonsi believes—implicitly or explicitly—that their happiness is closely tied to excelling at work. So whenever Jonsi believes that they are doing poorly at work, Jonsi feels negatively. Further, Jonsi has a habit of responding to negative feelings by ruminating on them, which leads to additional negative feelings. So doing poorly at work can quickly lead Jonsi into a downward emotional spiral, which might culminate in learned helplessness. And if Jonsi is experiencing learned helplessness, then Jonsi’s motivation becomes dangerously low.

Some people can recover from this. They might, for example, have the resources to break the self-perpetuating cycle of rumination and negative affect by challenging the thoughts that are producing the negative feelings (e.g., cognitive-behavioral therapy) or by immersing themselves in experiences that supplant negative thoughts with neutral or positively valenced thoughts (e.g., a social gathering, a romantic evening, or an impromptu weekend getaway). But Jonsi doesn’t have the financial or social resources for these strategies: Jonsi is a single parent whose near-constant work and child-care responsibilities leave no opportunity for a break.
So Jonsi’s negative thoughts continue to reinforce their negative feelings and vice versa. In network terminology, the dynamic of Jonsi’s ill-being network is becoming increasingly negative and increasingly robust. Soon Jonsi seems to lack motivation entirely. Most days Jonsi cannot even bear to leave the house. This results in poor attendance at work, which increases Jonsi’s sense of not excelling at work. Eventually Jonsi’s absenteeism becomes too much for their employer and Jonsi loses their job. Naturally, this makes Jonsi feel worse than ever about work, and so Jonsi loses all hope that things will get better and begins wondering whether life is still worth living.

What was just described is a robust negative network fragment (Fig. 11.3). It will be familiar to those who have experienced or witnessed severe cases of depression. At first, the negative dynamics of this causal network fragment might have been weakened by modest disruptions to the network—e.g., by changing Jonsi’s beliefs about their vocation, by interrupting rumination, and/or by allowing Jonsi to enjoy some time away from vocational or child-care responsibilities, etc. But eventually the causal network’s dynamic became increasingly negative and increasingly robust until modest interventions would no longer have an impact on Jonsi’s ill-being. The network’s change thresholds were just too high for such modest changes to cause a higher-level change in ill-being.

Fig. 11.3  Ill-being network fragment



11.3.4  Implications for Ill-Being

The causal roles of rumination and learned helplessness in depression are, of course, empirical hypotheses subject to empirical testing. Nonetheless, insofar as these causal network fragments adequately capture ill-being dynamics, they have implications for well-being—including digital well-being (Burr et al. 2020).

11.3.4.1  External Nodes

Some nodes in Lincoln’s, Jordan’s, and Jonsi’s well-being network fragments are external. That is, they are not part of one’s body, one’s immediate environment, or even one’s domain of controllable factors (e.g., social resources, financial resources). This means that ill-being can depend on factors outside one’s control (Headey et al. 1984, 1985). This is unsurprising. Loved ones die, accidents happen, economies crash, natural disasters occur, and so on. We usually cannot control these factors and yet they can have significant impacts on our well-being.

Of course, institutions can control factors that many individuals cannot. So even if I cannot exert control over monetary, tax, and other policies, some institutions can. As such, some part of my well-being is decided by these institutions. Depending on the nature of responsibility, this might entail that such institutions are also responsible for some part of my well-being (Floridi 2018). So causal network accounts of ill-being recommend investigation of potential institutional responsibilities for ill-being.

11.3.4.2  Digital Ill-Being

Some external factors are partially in our control. Consider digital technology. We can often choose how to use digital technology. Moreover, digital technology designers can often choose how to impact users’ well-being. Although some argue that digital technology is improving well-being (Schwab 2017), others are less sanguine. Indeed, the World Health Organization recently recommended that children limit the use of digital technology in order to limit its detrimental impact on well-being (2019).
Indeed, one observational study of a nationally representative sample of over a million US adolescent students found that using digital technology predicted lower well-being (Twenge et al. 2018). More recent analyses suggest that using digital technology accounted for, at most, 0.4% of the variance in well-being (Orben and Przybylski 2019). The actual impact of digital technology on well-being is an empirical question. Still, insofar as we think that ill-being networks contain nodes pertaining to digital technology and we want to better predict and control ill-being, there is reason to investigate and intervene on the relationship(s) between digital technology and ill-being (Peters et al. 2018).



11.3.4.3  Social Networks

Some of the external nodes in a well-being or ill-being network are people (or features of other people). In other words, our welfare is at least partially dependent on other people. There is some evidence that such social factors do influence our welfare. For instance, bullying predicts fewer friendships in early adolescence, which predicts depressive symptoms in later adolescence (Harmelen et al. 2016). Relatedly, family adversity in childhood predicts less family support in adolescence, which predicts greater depressive symptoms in later adolescence (ibid.).

There is also evidence that digital social networks can influence our welfare. For instance, passively consuming information about people in our digital social network is linked to lower subjective well-being; conversely, actively broadcasting and exchanging information with people in our digital social network is linked to higher subjective well-being (Verduyn et al. 2017). Insofar as these social factors impact well-being and we want to better predict and control ill-being, we should study how these social factors feature in ill-being networks. Fortunately, some psychologists are already employing such network analyses (Aalbers et al. 2019). For instance, Faelens et al.’s (2019) correlational network analyses found that social comparison and self-esteem featured centrally in networks of Facebook use, rumination, depressive symptoms, and other factors (Fig. 11.4).

Fig. 11.4  Correlational networks (Faelens et al. 2019, Study 2). COM-F Comparison Orientation Measure-Facebook, CSE contingent self-esteem, CSS Contingent Self-Esteem Scale, FBI Facebook Intensity Scale, MSFU Multidimensional Scale of Facebook Use, RRS Ruminative Responses Scale, RSES Rosenberg Self-Esteem Scale, SNS Social networking sites, DASS Depression, Anxiety and Stress Scales
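Centrality claims of this kind can be made concrete with a few lines of code. The Python sketch below builds a small undirected correlational network and computes each node's strength centrality (the sum of the absolute weights of its edges), the sort of statistic behind talk of nodes "featuring centrally." The nodes and edge weights are invented for illustration; they are not estimates from Faelens et al. (2019).

```python
# Illustrative correlational network in the style of Faelens et al. (2019).
# The nodes and edge weights below are invented for illustration; they are
# not estimates from the study.
from collections import defaultdict

edges = [
    ("Facebook use", "social comparison", 0.45),
    ("social comparison", "self-esteem", 0.50),
    ("self-esteem", "rumination", 0.35),
    ("rumination", "depressive symptoms", 0.55),
    ("social comparison", "rumination", 0.30),
    ("self-esteem", "depressive symptoms", 0.40),
]

# Strength centrality: the sum of the absolute weights of a node's edges.
# In an undirected correlational network, each edge counts for both ends.
strength = defaultdict(float)
for a, b, w in edges:
    strength[a] += abs(w)
    strength[b] += abs(w)

for node, s in sorted(strength.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {s:.2f}")
```

On these made-up weights, social comparison and self-esteem come out most central, mirroring (by construction) the kind of result reported in the study.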

11  Causal Network Accounts of Ill-Being: Depression & Digital Well-Being


11.3.4.4  Self-Reinforcement

Causal networks also capture the self-reinforcing nature of some kinds of ill-being. When something improves our mood, our better mood might thereby lift the moods of those around us. And this positive dynamic might be self-reinforcing. Alas, self-reinforcing dynamics can also be negative. When something puts us in a sour mood, we might thereby put others in a sour mood. All of this depends, of course, on how the nodes of various ill-being networks are related.
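This self-reinforcing dynamic can be illustrated with a toy simulation. In the Python sketch below, two people's moods are coupled: each day, each mood partly decays toward neutral and is partly pushed toward the other person's mood. All coefficients are invented for illustration; this is not an empirical model of mood contagion.

```python
# Toy simulation of a self-reinforcing mood dynamic between two people.
# The decay and coupling coefficients are invented for illustration;
# this is not an empirical model of mood contagion.

def step(mood_a, mood_b, decay=0.5, coupling=0.4):
    """Each day, each mood partly decays toward neutral (0) and is
    partly pushed toward the other person's current mood."""
    return (decay * mood_a + coupling * mood_b,
            decay * mood_b + coupling * mood_a)

# A one-time negative shock to person A's mood.
a, b = -1.0, 0.0
history = [(a, b)]
for _ in range(6):
    a, b = step(a, b)
    history.append((a, b))

for day, (ma, mb) in enumerate(history):
    print(f"day {day}: A={ma:+.3f}  B={mb:+.3f}")
```

With coupling set to 0, the shock would simply halve each day; with the coupling, the shared bad mood decays much more slowly (at roughly decay + coupling = 0.9 per day), one simple sense in which a negative dynamic can be self-reinforcing.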

11.3.5  Overview of Causal Network Accounts of Ill-Being

Causal networks offer a coherent, fruitful, and informative account of ill-being. The risk and induction of, say, depression can be explained in terms of causal network dynamics. Moreover, the treatment-resistance of depression can be explained in terms of the robustness of causal networks. Causal network analysis can also capture how online and offline social factors can contribute to ill-being. Of course, this is only a sketch of a causal network account of ill-being. A more complete causal network account of ill-being would address more features and instances of ill-being. To understand why one should invest in a causal network account of ill-being, we should consider the potential benefits to philosophy, science, and everyday life.

11.4  Benefits of Network Accounts of Ill-Being

One metric of the success of an account or concept is its utility (e.g., Dutilh Novaes 2018; Haslanger 2012; Shepherd and Justus 2015). Causal network accounts of ill-being offer philosophical, scientific, and practical utility.

11.4.1  Philosophical Utility

A causal network account of depression provides a framework for other forms of ill-being. Once other forms of ill-being are fit into a causal network account, we would have a complement to causal network accounts of well-being (e.g., Bishop 2015). Once causal networks account for both ill-being and well-being, it would seem that causal networks account for welfare more generally. In other words, a causal network account of ill-being might be instrumental in a complete account of welfare. This provides some motivation to explore the potential of causal network accounts of ill-being. Importantly, much of the grist for causal network accounts of ill-being is scientific. Philosophers' intuitions only take us so far. After all, when two intuitions conflict, we need a method for arbitrating between them (Bishop 2015). One way to arbitrate between intuitions involves studying intuitions about ill-being as well as the


psychological processes that produce these intuitions (Knobe and Nichols 2007). So experimental philosophy could be crucial to the advancement of ill-being research.

11.4.2  Scientific Utility

Imagine that you're deciding between two accounts of ill-being. Both accounts do well to capture our first-person experience and our intuitions about ill-being. But only one account unifies, makes sense of, and is useful to science. It seems that this latter account should be preferred to the account that offers only armchair purchase (Bishop 2015; Kitcher 1981; Woodward 2014). There is some reason to think that a causal network account of ill-being is the second sort of account. Consider a range of evidence from science.

11.4.2.1  Science Generally

By identifying two variables and intervening on one—while controlling other variables—scientists reveal causal relationships between the two (Bright et al. 2016; Byrd 2019; Woodward 2003; Saatsi and Pexton 2013). And since causal networks represent variables (as nodes) and the causal connections between variables, a causal network account of ill-being stands to unify otherwise disparate studies of the variables involved in ill-being. For example, suppose that some experiments find that using backlit touchscreens in the evening disrupts circadian rhythms and other experiments find that disrupting circadian rhythms disrupts self-esteem. These otherwise disparate findings could be unified with a causal network with nodes referring to evening backlit touchscreen use, circadian rhythm, and self-esteem.

11.4.2.2  Experimental Psychology

Publications in experimental psychology about well-being are rife with causal models like the ones offered herein (Burnette et al. 2009, p. 277, Figure 1; Carnelley et al. 1994, p. 129, Figure 1; Radecki-Bush et al. 1993, pp. 573, Figure 1, 579, Figure 2, 580, Figure 3, 582, Figure 4; Tasca et al. 2009, p. 665, Figures 1 and 2; Tse and Yip 2009, p. 367, Figure 1). In fact, many of the causal models in the previous sections were adapted from the causal models of experimental psychologists. So it is no accident that the causal network account resembles this literature.

11.4.2.3  Neuroscience

Exercise has been widely shown to relieve depressive symptoms (Blumenthal et al. 1999; Cooney et al. 2013; Kramer and Ericsson 2007; Motl et al. 2005; Pedersen and Saltin 2015; Schuch et al. 2016). And neuroscience is providing, in broad


strokes at least, some clues about the causal network(s) that account for these positive outcomes. Exercise and regular physical activity seem to directly affect the brain in various ways. For instance, exercise and physical activity strengthen synaptic structure by potentiating synaptic strength (Cotman et al. 2007), improve neural plasticity via neurogenesis (ibid.), increase glial density (Spielman et al. 2016), increase 5-HT and dopamine (ibid.), add astrocytes at the blood-brain barrier (ibid.), increase glutamate and GABA signals in the visual cortex (Maddock et al. 2016), and increase glutamate signals in the anterior cingulate cortex (ibid.). Effects like these are said to jointly cause "growth factor cascades" which improve overall "brain health and function" (Cotman et al. 2007; Kramer and Ericsson 2007). Exercise and physical activity also indirectly affect the brain. Generally speaking, "exercise reduces peripheral risk factors for cognitive decline" by preventing—among other things—neurodegeneration, neurotrophic resistance, hypertension, and insulin resistance (Cotman et al. 2007; Prakash et al. 2015). By preventing these threats to neural and cognitive health, exercise indirectly promotes conditions for brain health and function. And all of these direct and indirect effects of exercise and physical activity on the brain are associated with or causally related to significant reductions in depressive symptoms (Spielman et al. 2016, pp. 22–23, 25–26). We can represent these direct and indirect effects of exercise and physical activity on depression with a causal network fragment (Fig. 11.5).

Fig. 11.5  Exercise promotes outcomes in the brain that promote other positive outcomes outside the brain. Similarly, exercise reduces negative outcomes that would otherwise reduce certain positive outcomes. This is adapted from causal network models in Cotman and colleagues' (2007) work and includes details from a review by Spielman and colleagues (2016)
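A fragment like the one in Fig. 11.5 can also be queried programmatically. The Python sketch below encodes a simplified version of the exercise network as an adjacency dict and uses depth-first search to separate exercise's direct effects from everything causally downstream of it. The node labels paraphrase the sources and the edge set is deliberately coarse; this is a sketch, not a validated causal model.

```python
# Simplified directed fragment of the exercise network discussed above,
# adapted loosely from Cotman et al. (2007) and Spielman et al. (2016).
# The node labels paraphrase the sources and the edge set is deliberately
# coarse; this is a sketch, not a validated causal model.
edges = {
    "exercise": ["synaptic strength", "neurogenesis",
                 "reduced hypertension", "reduced insulin resistance"],
    "synaptic strength": ["brain health"],
    "neurogenesis": ["brain health"],
    "reduced hypertension": ["brain health"],
    "reduced insulin resistance": ["brain health"],
    "brain health": ["reduced depressive symptoms"],
}

def reachable(graph, start):
    """Depth-first search: every node causally downstream of `start`."""
    seen, stack = set(), [start]
    while stack:
        for child in graph.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

downstream = reachable(edges, "exercise")
direct = set(edges["exercise"])
indirect = downstream - direct

print("direct effects:", sorted(direct))
print("indirect effects:", sorted(indirect))
```

The distinction between an intervention's direct children and its full downstream set is exactly the direct/indirect contrast the prose draws.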


Exercise is not the only intervention on the brain that matters. Randomized experiments find that stimulating and/or disrupting neural function in various subcortical regions of the brain (e.g., via deep brain stimulation, or DBS) leads to significant and long-term reductions in depressive symptoms (Bewernick et al. 2010; Lozano et al. 2008; Mayberg et al. 2005). All of these hypotheses about the brain's relationship to cases of ill-being like depression seem amenable to a causal network account. Admittedly, more research is needed to identify all of the causally relevant variables and to precisify the causal dynamic(s) in the brain that underlie various forms of ill-being like depression.

11.4.2.4  Economics

Economists and other social scientists also use causal networks to visualize their research on well-being. For example, it was economists who helped reveal that the effects of self-esteem and personal competence on health and overall welfare are partially accounted for by socioeconomic status (Headey et al. 1984, 1985). They illustrated their findings with causal networks. They started with a relatively small network fragment (Fig. 11.6), but soon found evidence of a larger network (Fig. 11.7).

Fig. 11.6  Headey, Holmström, and Wearing 1984, p. 129, Figure 1


Fig. 11.7  Headey, Holmström, and Wearing 1985, p. 221, Figure 1

11.4.2.5  Psychiatry

Causal network accounts of ill-being also appear in psychiatry (e.g., Cramer and Borsboom 2015). Indeed, these accounts provide the resources to make sense of why depressive symptoms are relieved more reliably and for longer periods of time by certain manipulations like cognitive therapy (Gloaguen et al. 1998; Ma and Teasdale 2004; Wampold et al. 2002), cognitive-behavioral therapy (CBT) (Tolin 2010), and electroconvulsive therapy (ECT) (Tor et al. 2015). In fact, it was the literature in psychiatry that inspired the story of Jordan and Lincoln. This literature suggests that—among other things—CBT's method of identifying and challenging certain negative, unhealthy, or even counterproductive beliefs can weaken negative causal network dynamics and thereby relieve depressive symptoms. These data suggest that—among other things—beliefs are part of many people's ill-being networks. The data also suggest that reflecting on our beliefs can have psychiatric benefits. More recent meta-analyses have examined the efficacy of virtual versions of some of these therapies. They find that the virtual therapies are as effective as their non-virtual counterparts (Chesham et al. 2018). This is one way that digital technology can be part of the solution to various cases of ill-being. In causal network terminology, the nodes referring to therapy could involve in vivo or virtual therapy.


11.4.2.6  Sports Medicine

Some have pointed out that eHealth technology might lead to greater adoption of physical activity regimens (Burr et al. 2020). Some evidence supports this hypothesis. For instance, a pre-registered randomized controlled trial found that activity tracker use stemmed reductions in moderate-to-vigorous physical activity relative to a control group 6 months after the experiment ended (Finkelstein et al. 2016). Although activity tracking often improves physical activity and mobility—which might reduce ill-being in various ways—its effects on quality of life are not well understood (Oliveira et al. 2019). Further research could clarify this part of causal networks involving activity, digital activity tracking, and well-being.

11.4.2.7  Other Sciences

Other domains of science reveal additional details about the network of causes that account for forms of ill-being like depression. For instance, depression seems to be causally related to genetic variation (Okbay et al. 2016), genetic expression (Gujral et al. 2014), opioid activity (Hsu et al. 2015), and endogenous cytokine production (Müller et al. 2006).

11.4.2.8  Correlational Studies

Some scientific findings about ill-being are correlational rather than causal. For instance, depressive symptoms correlate with race (Weaver et al. 2015), urban (vs. suburban) residence (ibid.; see also Anderson 2010), endogenous protein production (Khandaker et al. 2014), and vitamin D deficiency (Kerr et al. 2015). While correlational studies do not reveal causes, they reveal details which guide follow-up studies that might identify causes. So correlational studies are still amenable and instrumental to causal network accounts of ill-being and the science thereof.

11.4.3  Practical Utility

As philosophers continue to explicate the nodes in ill-being networks and scientists investigate their causal relationships to other nodes, we can better predict and control ill-being. Importantly, this philosophical and scientific work can be useful beyond philosophy and science. It can also serve ordinary people and institutions.


11.4.3.1  Institutions

Consider how governments and businesses might make use of the insights of scientific interventions on ill-being. First, they might be able to inform and reform policies that reliably inhibit their constituents' ill-being. Moreover, institutions might have the ability to intervene on nodes in people's ill-being networks in ways that individuals cannot. Thus, individuals might be reliant on institutions for certain interventions on their ill-being. Given this relationship, one might wonder if institutions owe it to their constituents to understand the causal networks involved in ill-being and implement policy accordingly. Intervening on constituents' ill-being networks need not be entirely altruistic. Indeed, institutions might find that they can nudge constituents toward greater well-being in mutually beneficial ways (Conly 2012). For instance, businesses might find that returns on investments increase as a result of employer-subsidized ill-being interventions. Indeed, businesses might already be crunching these numbers (Hargrave and Hiatt 2005; Hargrave et al. 2008; McLeod 2010; McLeod and McLeod 2001; Stewart et al. 2003).

11.4.3.2  Individuals

Causal network accounts of ill-being can also deliver practical advice. Recall that various forms of physical activity, exercise, and therapy reliably inhibit certain forms of ill-being like depression and cognitive decline. It does not strain imagination to conceive of how people might use this knowledge to intervene on instances of ill-being. This raises questions about how to think about failures to learn from and apply causal network accounts of well-being. Are we personally responsible for intervening on our own ill-being? Such universal responsibility seems difficult to defend.
There are many reasons why we could know about interventions that will reliably inhibit our ill-being, fail to implement the intervention, and yet not be responsible (or blameworthy) for failing to implement these interventions—e.g., Jonsi's case of learned helplessness. Nonetheless, our ill-being may have impacts on others. So insofar as we have duties to not harm others' well-being, we might have duties to limit our own ill-being or its impact on others. Causal network accounts need not commit to positions about personal responsibility. The point is just that causal networks provide resources for identifying or tracing responsibility between individuals (Dennett 2015).


11.5  Objections & Replies

Causal network accounts of ill-being offer philosophical, scientific, and practical benefits. So causal network accounts should be preferred to accounts of ill-being that offer less than this. Nonetheless, the causal network theorist may encounter objections. Consider some objections related to normativity, triviality, intuitive appeal, and completeness.

11.5.1  Normativity

Some might complain that the causal network account of ill-being doesn't actually capture what is bad about ill-being. It doesn't account for why we should avoid ill-being and be concerned about others' avoiding ill-being. In other words, the causal network account of ill-being does not meet the "normativity requirement" (Bishop 2015, p. 198). The normativity requirement has many forms. Michael Bishop offers five possible ways to interpret the normativity requirement and then explains why it is not clear that any of the interpretations of the requirement are devastating to the causal network account of well-being (2015, pp. 198–207). Since the causal network account of ill-being is based, in large part, on Bishop's causal network account of well-being, there is prima facie reason to think that Bishop's responses to the normativity requirement apply to the present account of ill-being as well. So unless it becomes clear that the normativity requirement is uniquely devastating to the present account of ill-being, further responses to the normativity requirement need not be invented. So consider Bishop's conclusion about normativity. First, we can agree on two outcomes: a causal network account either meets the normativity requirement or it doesn't. If it does meet the requirement, then the complaint about normativity is moot. If it doesn't meet the requirement, then there are at least two reasons why this failure might not be devastating: (a) the normativity requirement is illegitimate or (b) the normativity requirement is too controversial to constitute a universal requirement (Bishop 2015, p. 207). So invoking the normativity requirement evokes two further requirements: (1) a legitimate and uncontroversial standard and (2) an explanation of precisely how the causal network account fails to meet this standard. Until these goods are delivered, causal network accounts of ill-being will be unscathed by normativity requirements.


11.5.2  We Knew It All Along

Someone might also complain that the causal network account of ill-being is trivial in the following way. "Of course ill-being is the result of causal networks! Why would we have thought otherwise?" One might say something even more detailed: the very fact that the causal network account is (i) intuitively plausible, (ii) implicit in scientific practice, (iii) inferred from various empirical findings, and (iv) not already formalized is reason to think that we have known about the causal network account all along, at least implicitly. If this is really what it means for something to be known all along, then so be it. It is not clear that that is a real problem for the causal network account of ill-being. To illustrate, reconsider the complaint. If causal network accounts of ill-being are so obvious, then it might be strange that no one has formalized such an account. After all, many scholars are in the business of—among other things—formalizing and tidying up our implicit, intuitive views about the world. So if a scholar provides a more formal and explicit version of our intuitive, implicit understanding, then they have not made an error. On the contrary, they have delivered the goods.

11.5.3  Intuition Fitting

Someone might also complain that causal network accounts of ill-being do not capture all of their intuitions about ill-being. The response to this complaint is simple: satisfying every intuition is not an achievable standard. Further, it is not clear that satisfying every intuition is a good standard. Indeed, many instances of good scholarship seem to challenge the most common or potent intuitions. At this point, causal network accounts of ill-being seem to provide a coherent and compelling account of many intuitions about cases of ill-being—e.g., Lincoln's rumination and Jonsi's learned helplessness. A complainant might reply as follows: "But lots of philosophical accounts capture some of our intuitions. How is the causal network account any better than the other philosophical accounts?" The answer to this question, of course, depends on the details of the alternative accounts. Causal network accounts of ill-being do more than just capture our intuitions. They also provide philosophical, scientific, and practical utility. If there are competing accounts of ill-being that can deliver all of these goods, then the complainant's point is well-taken. If, however, alternative accounts of ill-being do not deliver all of these goods, then causal network accounts seem to be preferable.


11.5.4  Completeness

Another complaint is that the present attempt to bolster causal network accounts of ill-being is incomplete. After all, this paper focuses on just a couple instances of ill-being: depression and digital ill-being. It does not catalog and account for every instance of ill-being. So it certainly does not follow from anything in this paper that ill-being, generally speaking, is well-captured by causal networks. This complaint seems reasonable. Still, the friend of the causal network account might hope that the present account, while incomplete, provides a framework for more complete causal network accounts of ill-being. One framework that is implicit in this paper can be made explicit as follows: (A) identify instances of ill-being and then (B) appeal to first-personal and third-personal observations to propose which nodes might be involved in such ill-being; (C) provide empirically tractable explications of these nodes and their relationships; (D) test correlational and causal relationships between these nodes; (E) once some nodes and their relationships have been discovered, hypothesize the causal network fragment and its implications; (F) test the implications of the hypothetical causal network fragment; (G) guide philosophical, scientific, and practical endeavors according to successful hypotheses about ill-being causal network fragments; (H) compare the causal network accounts of ill-being to competing accounts to determine which account delivers more utility. So while the present account does not constitute a complete account of ill-being, it may be instrumental in a more complete account.

11.6  Conclusion

Causal network accounts of depression and digital ill-being offer a framework for a more complete account of ill-being. Causal network accounts of ill-being also provide motivation to pursue this more complete account: philosophical, scientific, and practical utility. Further, insofar as the more complete causal network account of ill-being would complement an existing causal network account of well-being, causal network accounts are instrumental to a unified and complete account of welfare, digital and otherwise. Insofar as the alternative accounts of ill-being cannot do all of this, we should prefer causal network accounts of ill-being.

Acknowledgments  This paper was improved by Rachel Amoroso, Aaron Brooks, Chris Burr, Mike Bishop, Frances Fairbairn, Mary Marcous, Al Mele, Sam Sims, and anonymous reviewers.

11  Causal Network Accounts of Ill-Being: Depression & Digital Well-Being

241

References

Aalbers, G., R.J. McNally, A. Heeren, S. de Wit, and E.I. Fried. 2019. Social Media and Depression Symptoms: A Network Perspective. Journal of Experimental Psychology: General 148 (8): 1454–1462. https://doi.org/10.1037/xge0000528.
Abramson, L.Y., M.E. Seligman, and J.D. Teasdale. 1978. Learned Helplessness in Humans: Critique and Reformulation. Journal of Abnormal Psychology 87 (1): 49–74. https://doi.org/10.1037/0021-843X.87.1.49.
Anderson, E. 2010. The Imperative of Integration. Princeton: Princeton University Press.
Avenevoli, S., J. Swendsen, J.-P. He, M. Burstein, and K.R. Merikangas. 2015. Major Depression in the National Comorbidity Survey–Adolescent Supplement: Prevalence, Correlates, and Treatment. Journal of the American Academy of Child & Adolescent Psychiatry 54 (1): 37–44.e2. https://doi.org/10.1016/j.jaac.2014.10.010.
Baxter, A.J., K.M. Scott, A.J. Ferrari, R.E. Norman, T. Vos, and H.A. Whiteford. 2014. Challenging the Myth of an "Epidemic" of Common Mental Disorders: Trends in the Global Prevalence of Anxiety and Depression Between 1990 and 2010. Depression and Anxiety 31 (6): 506–516. https://doi.org/10.1002/da.22230.
Beck, A.T. 1970a. Cognitive Therapy: Nature and Relation to Behavior Therapy. Behavior Therapy 1 (2): 184–200. https://doi.org/10.1016/S0005-7894(70)80030-2.
———. 1970b. Depression: Causes and Treatment. Philadelphia: University of Pennsylvania Press.
———. 1979a. Cognitive Therapy and the Emotional Disorders. New York: Penguin.
———. 1979b. Cognitive Therapy of Depression. New York: Guilford Press.
———. 1991. Cognitive Therapy: A 30-Year Retrospective. American Psychologist 46 (4): 368–375. https://doi.org/10.1037/0003-066X.46.4.368.
Beck, J.S. 1995. Cognitive Therapy: Basics and Beyond. New York: Guilford Press.
Beck, A.T., and R.L. Greenberg. 1984. Cognitive Therapy in the Treatment of Depression. In Foundations of Cognitive Therapy, ed. N. Hoffman, 155–178. Boston: Springer. https://doi.org/10.1007/978-1-4613-2641-0_7.
Bewernick, B.H., R. Hurlemann, A. Matusch, S. Kayser, C. Grubert, B. Hadrysiewicz, et al. 2010. Nucleus Accumbens Deep Brain Stimulation Decreases Ratings of Depression and Anxiety in Treatment-Resistant Depression. Biological Psychiatry 67 (2): 110–116. https://doi.org/10.1016/j.biopsych.2009.09.013.
Bishop, M. 2012. The Network Theory of Well-being: An Introduction. The Baltic International Yearbook of Cognition, Logic and Communication 7 (1): 1–29.
———. 2015. The Good Life: Unifying the Philosophy and Psychology of Well-being. Oxford: Oxford University Press.
Blumenthal, J.A., M.A. Babyak, K.A. Moore, W.E. Craighead, S. Herman, P. Khatri, et al. 1999. Effects of Exercise Training on Older Patients with Major Depression. Archives of Internal Medicine 159 (19): 2349–2356.
Bright, L.K., D. Malinsky, and M. Thompson. 2016. Causally Interpreting Intersectionality Theory. Philosophy of Science 83 (1): 60–81. https://doi.org/10.1086/684173.
Byrd, N. 2019. What We Can (and Can't) Infer About Implicit Bias from Debiasing Experiments. Synthese 1–29. https://doi.org/10.1007/s11229-019-02128-6.
Burnette, J.L., D.E. Davis, J.D. Green, E.L. Worthington Jr., and E. Bradfield. 2009. Insecure Attachment and Depressive Symptoms: The Mediating Role of Rumination, Empathy, and Forgiveness. Personality and Individual Differences 46 (3): 276–280. https://doi.org/10.1016/j.paid.2008.10.016.
Burr, C., M. Taddeo, and L. Floridi. 2020. The Ethics of Digital Well-Being: A Thematic Review. Science and Engineering Ethics. https://doi.org/10.1007/s11948-020-00175-8.
Busseri, M.A., and T.-R. Mise. 2019. Bottom-Up or Top-Down? Examining Global and Domain-Specific Evaluations of How One's Life Is Unfolding Over Time. Journal of Personality. https://doi.org/10.1111/jopy.12499.


Carnelley, K.B., P.R. Pietromonaco, and K. Jaffe. 1994. Depression, Working Models of Others, and Relationship Functioning. Journal of Personality and Social Psychology 66 (1): 127–140. https://doi.org/10.1037/0022-3514.66.1.127.
Chesham, R.K., J.M. Malouff, and N.S. Schutte. 2018. Meta-Analysis of the Efficacy of Virtual Reality Exposure Therapy for Social Anxiety. Behaviour Change 35 (3): 152–166. https://doi.org/10.1017/bec.2018.15.
Conly, S. 2012. Against Autonomy: Justifying Coercive Paternalism. Cambridge: Cambridge University Press.
Cooney, G.M., K. Dwan, C.A. Greig, D.A. Lawlor, J. Rimer, F.R. Waugh, et al. 2013. Exercise for Depression. In Cochrane Database of Systematic Reviews, ed. The Cochrane Collaboration. Chichester: Wiley. https://doi.org/10.1002/14651858.CD004366.pub6.
Cotman, C.W., N.C. Berchtold, and L.-A. Christie. 2007. Exercise Builds Brain Health: Key Roles of Growth Factor Cascades and Inflammation. Trends in Neurosciences 30 (9): 464–472. https://doi.org/10.1016/j.tins.2007.06.011.
Cramer, A.O.J., and D. Borsboom. 2015. Problems Attract Problems: A Network Perspective on Mental Disorders. In Emerging Trends in the Social and Behavioral Sciences. Retrieved from https://doi.org/10.1002/9781118900772.etrds0264/abstract.
Dennett, D.C. 2015. Elbow Room: The Varieties of Free Will Worth Wanting. Cambridge, MA/London: MIT Press.
Dutilh Novaes, C. 2018. Carnapian Explication and Ameliorative Analysis: A Systematic Comparison. Synthese 197 (3): 1–24. https://doi.org/10.1007/s11229-018-1732-9.
Faelens, L., K. Hoorelbeke, E. Fried, R. De Raedt, and E.H.W. Koster. 2019. Negative Influences of Facebook Use Through the Lens of Network Analysis. Computers in Human Behavior 96: 13–22. https://doi.org/10.1016/j.chb.2019.02.002.
Finkelstein, E.A., B.A. Haaland, M. Bilger, A. Sahasranaman, R.A. Sloan, E.E.K. Nang, and K.R. Evenson. 2016. Effectiveness of Activity Trackers with and Without Incentives to Increase Physical Activity (TRIPPA): A Randomised Controlled Trial. The Lancet Diabetes & Endocrinology 4 (12): 983–995. https://doi.org/10.1016/S2213-8587(16)30284-4.
Floridi, L. 2018. Soft Ethics and the Governance of the Digital. Philosophy & Technology 31 (1): 1–8. https://doi.org/10.1007/s13347-018-0303-9.
Gloaguen, V., J. Cottraux, M. Cucherat, and Ivy-Marie Jordinburn. 1998. A Meta-Analysis of the Effects of Cognitive Therapy in Depressed Patients. Journal of Affective Disorders 49 (1): 59–72. https://doi.org/10.1016/S0165-0327(97)00199-7.
Gujral, S., S.B. Manuck, R.E. Ferrell, J.D. Flory, and K.I. Erickson. 2014. The BDNF Val66Met Polymorphism Does Not Moderate the Effect of Self-Reported Physical Activity on Depressive Symptoms in Midlife. Psychiatry Research 218 (1–2): 93–97. https://doi.org/10.1016/j.psychres.2014.03.028.
Hargrave, G.E., and D. Hiatt. 2005. The EAP Treatment of Depressed Employees. Employee Assistance Quarterly 19 (4): 39–49. https://doi.org/10.1300/J022v19n04_03.
Hargrave, G.E., D. Hiatt, R. Alexander, and I.A. Shaffer. 2008. EAP Treatment Impact on Presenteeism and Absenteeism: Implications for Return on Investment. Journal of Workplace Behavioral Health 23 (3): 283–293. https://doi.org/10.1080/15555240802242999.
Haslanger, S. 2012. Resisting Reality: Social Construction and Social Critique. New York: Oxford University Press.
Headey, B., E. Holmström, and A. Wearing. 1984. Well-Being and Ill-Being: Different Dimensions? Social Indicators Research 14 (2): 115–139. https://doi.org/10.1007/BF00293406.
———. 1985. Models of Well-Being and Ill-Being. Social Indicators Research 17 (3): 211–234. https://doi.org/10.1007/BF00319311.
Hsu, D.T., B.J. Sanford, K.K. Meyers, T.M. Love, K.E. Hazlett, S.J. Walker, et al. 2015. It Still Hurts: Altered Endogenous Opioid Activity in the Brain During Social Rejection and Acceptance in Major Depressive Disorder. Molecular Psychiatry 20 (2): 193–200. https://doi.org/10.1038/mp.2014.185.








Nick Byrd is a philosopher-scientist studying reasoning, well-being and agency. He is a Fellow at Florida State University, where he will receive his PhD before starting as an Assistant Professor of Philosophy at Stevens Institute of Technology in Fall 2020. His research combines philosophy of cognitive science and cognitive science of philosophy. Existing projects examine what we can (and cannot) infer from social scientific research, how psychological factors predict philosophical dispositions, and how network effects predict human welfare. These projects aim to better understand human psychology in order to improve human flourishing. Funding for this work has come from the Society of Christian Philosophers, the John Templeton Foundation, Duke University and Florida State University. From 2017 to 2020, over 200,000 people from 195 countries learned about this work on byrdnick.com. These projects have also been featured in press releases, radio segments, podcast interviews, invited blog posts, conference presentations, social media posts and other media in the USA, the UK, Canada, Australia, the Netherlands and Switzerland. For more information, including free preprints of the latest accepted papers, see byrdnick.com/cv. Research Interests: Ethics, Experimental Philosophy, Judgement and Decision-Making, Public Policy and Philosophy of Science.
[email protected]

Chapter 12

Malware as the Causal Basis of Disease

Michael Thornton

Abstract  In this paper, I will argue that certain digital medical devices which are closely coupled to one’s biological systems (e.g. a digital pacemaker) should be considered to be a part of one’s body for the purpose of determining one’s health status. If such a device is working properly, such that the functional efficiency of the biological system is normal, we should consider one healthy; and if that device is hacked, infected with malware, or malfunctions for any other reason, then we should consider this reduction in functioning to be a different pathology from the underlying condition which necessitated the device. I will argue this is true not only for internal devices, but also for those that use cloud resources, Wi-Fi, and interact with the Internet of Things. While this argument may seem odd to some, I will argue that this view is consistent with two of the most popular definitions of disease—Christopher Boorse’s biostatistical theory and Jerome Wakefield’s hybrid approach. This argument suggests that there is substantial value in viewing digital threats through the lenses of the philosophies of public health and medicine, and it suggests the need to rethink how certain technology products are designed, built, maintained, regulated, and even owned.

Keywords  Malware · Disease · Health · Cybersecurity · Pathology · Public health

M. Thornton (*)
Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridge, UK
e-mail: [email protected]

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020
C. Burr, L. Floridi (eds.), Ethics of Digital Well-Being, Philosophical Studies Series 140, https://doi.org/10.1007/978-3-030-50585-1_12

12.1  Introduction

The robustness, security, and resiliency of digital networks (henceforth ‘cyberhealth’) impact human health in a myriad of ways. A few of the most significant and widely discussed examples include the failure of critical infrastructure (e.g. Puerto Rico following hurricanes Irma and Maria) (Thieme 2018), the poor cybersecurity of hospital infrastructure (Newman 2017), and a lack of network access leading to inferior medical care (Samuels et al. 2015). In this paper, however, I will explore more unusual cases where the cyberhealth of a device or network partly constitutes
one’s health status. I will argue that when a digital technology is closely coupled to a biological system (e.g. a digital pacemaker), malware, unreliability, or malicious hacking can be causal bases of disease.

First, I will argue that, for the purpose of determining one’s health status, we should consider certain artificial parts like digital pacemakers to be a part of one’s body. For example, if a person has a properly functioning digital pacemaker to correct a slow heart rhythm and no other diseases, then we should consider them healthy. While we may not typically think of digital pacemakers in this way, less complex artificial parts such as artificial hips are typically thought of as cures (Lewens 2015, 180).

I will then argue that if that pacemaker malfunctions for any reason (e.g. malware, hacking, poor design) such that the functional efficiency of one’s circulatory system is meaningfully reduced, we should consider this dysfunction to be a different causal basis of disease from the dysfunction which originally led to the patient receiving the device.

While most digital devices are not closely coupled to biological systems, sophisticated medical devices increasingly make use of cloud resources, transmit data over Wi-Fi, and interact with the broader Internet of Things (often accidentally so). As a result, a great many digital technologies have the ability to be a constitutive part of one’s health status.

Identifying a dysfunctional device (or network) as a causal basis of disease is potentially significant for at least three reasons. First, there may be a moral claim to treatment for some or all diseases (depending on how one defines the term). This is a vast departure from the traditional way of conceiving of cyberhealth issues as merely hindrances to business or State strategic interests.
Second, there may be significant value in viewing the problem through the lens of the philosophy of medicine or the philosophy of public health, as these fields are better suited to conceptualizing ethical questions related to bodily integrity, health inequality, and the moral right to treatment than a lens of cybersecurity. And third, it may affect how we think about the distinction between biotech treatments and enhancements. This, in turn, has ramifications for deciding which biotechnologies states and insurance companies make available to patients.

In Sect. 12.2, I will introduce two of the most influential definitions of disease or pathology—Christopher Boorse’s naturalist definition and Jerome Wakefield’s hybrid definition. Broadly speaking, naturalist definitions define disease as some form of biological dysfunction, while hybrid definitions define disease as a harmful biological dysfunction, where ‘harmful’ is a sociocultural designation (Wakefield 2007). Note that throughout this paper I will use the terms disease and pathology as essentially interchangeable, as is Boorse’s practice (Boorse 1997, 7).

In Sect. 12.3, I will then work through the example of the malfunctioning pacemaker mentioned previously and provide criteria for determining when a device should be considered to be a part of one’s body for disease diagnosis. While this argument may seem rather odd, I will argue that both Boorse and Wakefield’s accounts of disease can accommodate such a claim with only minor, independently plausible, changes.

Finally, in Sect. 12.4, I will argue that if we consider certain cyberhealth issues to be the causal basis of a pathology (or pathologies in and of themselves), then technology companies and regulators should reconsider how digital products are designed,
built, upgraded, maintained, and even owned. While I will explore some specific examples, these should only be treated as initial thoughts on the matter.

12.2  Two Definitions of Disease

12.2.1  Boorse’s Biostatistical Theory

While there is no single agreed-upon definition of disease or pathology, in this section I will present two of the most commonly cited definitions. The first is Christopher Boorse’s influential biostatistical theory (BST). The essential formulation of Boorse’s theory, directly quoted, is as follows:

1. The reference class is a natural class of organisms of uniform functional design; specifically, an age group of a sex of a species.
2. A normal function of a part or process within members of the reference class is a statistically typical contribution by it to their individual survival and reproduction.
3. A disease is a type of internal state which is either an impairment of normal functional ability, i.e. a reduction of one or more functional abilities below typical efficiency, or a limitation on functional ability caused by environmental agents.
4. Health is the absence of disease (Boorse 1997, 7–8).

According to BST, if a person breaks their ankle, we would say they have a pathology because their ability to walk is far below typical efficiency for their reference group and being able to walk is important to one’s ability to survive in many contexts.

Boorse argues that the terms disease and pathology are value-free. As such, not all diseases according to BST need be considered harmful by society or the person with the pathology. For example, BST typically classifies homosexuality as a disease, as it generally reduces one’s functional ability to reproduce (Boorse 1975, 63). The fact that in many cultures homosexuality is not considered a harmful or undesirable state does not factor into this determination.

12.2.2  Wakefield’s Harmful Dysfunction Approach

The second definition of disease I will consider is Jerome Wakefield’s ‘harmful dysfunction’ approach—one of the most influential hybrid accounts of disease. While Boorse argues that disease is a value-free term, Wakefield’s approach explicitly combines value judgments with scientific assessments of functionality. Wakefield describes his ‘harmful dysfunction’ approach saying, “a disorder is a harmful dysfunction, where ‘harmful’ is a value term, referring to conditions judged
negative by sociocultural standards, and ‘dysfunction’ is a scientific factual term, referring to failure of biologically designed functioning” (Wakefield 2007).1

There are two primary differences between Boorse and Wakefield’s accounts. The first is in how they define dysfunction. In BST, dysfunction is a departure from the species-typical contribution of a part or process to an individual’s capacity to survive and reproduce. In contrast, Wakefield defines dysfunction in relation to the evolutionary purpose of a part. The second difference is that Wakefield argues that a dysfunction is only a disease if it is considered harmful. ‘Considered’ is the important word here. Within BST, dysfunctions are harmful in the sense that they reduce an individual’s survivability or reproducibility but need not be considered harmful by the individual or the culture to count as diseases (e.g. homosexuality, being on birth control).

In contrast to BST, Wakefield would argue that in cultures where homosexuality is not generally considered harmful, it should not be considered a disease—even if the trait is, in an evolutionary sense, a dysfunction. While this may seem like a preferable conclusion to some, it is important to reiterate that within BST disease is a value-free term; i.e. to say an individual is diseased is not to imply any further moral claim. Additionally, Wakefield’s approach may also lead one to some surprising conclusions. Tim Lewens (Lewens 2015, 188) and Rachel Cooper (Cooper 2002) have argued in different works that depression may not be a disease within Wakefield’s approach, as it may not qualify as a failure of biologically designed functioning. I mention this example merely to dissuade one of the simplistic notion that Wakefield’s approach is on its face clearly preferable to BST.
By arguing that diseases are only those biological dysfunctions which are considered harmful, Wakefield imbues the term disease with an ethical salience that Boorse’s value-free conception lacks. As such, a ‘disease’ by Wakefield’s definition is more likely to carry a claim to care than a ‘Boorsean disease,’ although the strength of this claim varies considerably depending on the degree of harm and the cause of the pathology. Lastly, it is worth noting that in most cases these two approaches lead one to the same designation. Wakefield and Boorse would both agree that someone with malaria, a broken hip, or a torn ACL has a pathology.

With these two definitions in mind, in the next section I will argue that under certain conditions we should consider cyberhealth issues (e.g. a hacked pacemaker, unreliable network access) to be causal bases of diseases or pathologies. Note that while digital medical devices and their associated resources (e.g. cloud services, Wi-Fi, etc.) will be my ultimate focus, I will first illustrate this point using less sophisticated technologies, such as artificial hips, as this will make the argument more intuitive.

1  While Wakefield tends to use the term disorder rather than disease, I will treat the two terms as synonymous. As Wakefield and Boorse each compare their theory to the other’s, I believe this is reasonable.


12.3  Poor Cyberhealth as Pathology

The underlying assumption of both Wakefield and Boorse’s accounts of disease is that the constitutive causal basis of dysfunction must be some organic part or biological system. As such, when determining if a person has a dysfunction, one assesses the functioning of organic parts or biological systems without counting the contribution of artificial components or tools. For example, when determining if one is myopic, one assesses the functional efficiency of one’s vision without glasses and contact lenses, even though these devices play a role in one’s ability to see on a day-to-day basis.

While that distinction may be appropriate in the case of eyeglasses (I will revisit this later in this section), the assumption that one should not count the contribution of artificial parts when determining if someone has a disease is complicated by the intimate integration of artificial and biological parts in many modern medical interventions. To motivate this claim, I will now work through the following example:

Jim has bradycardia (a slow heart rhythm) due to sinus node dysfunction. When Jim’s bradycardia is symptomatic, it causes fatigue and weakness and can lead to fainting (Mayo Clinic Staff 2017). Within both Boorse and Wakefield’s accounts of disease, Jim has a pathology. Jim receives a digital pacemaker which corrects his heart rhythm when it is too slow, relieving his symptoms. The digital pacemaker can run without maintenance for 15 years, and Jim can resume all normal physical activities, including hobbies like mountain biking and hiking. As with other patients with bradycardia who have digital pacemakers, Jim’s life expectancy is normal. After a number of years, Jim’s digital pacemaker is infected with malware, leading to a drop in the functional efficiency of his circulatory system. He has a second intervention to replace the device, and he returns to his active lifestyle.
This example highlights the oddness of only considering biological parts when determining dysfunction and, by extension, disease. When Jim’s pacemaker is working, he seems healthy—he can pursue an active lifestyle and his lifespan is expected to be normal. Additionally, the pacemaker can only be separated from his circulatory system via surgical intervention. Based on this example, I will argue for the following three claims:

1. For the purpose of determining if Jim is healthy, we should count the contribution of Jim’s pacemaker towards the functional efficiency of his circulatory system.
2. We should consider Jim to be disease-free when his pacemaker is working properly, assuming that (a) the functional efficiency of Jim’s circulatory system is typical for his reference class and (b) his circulatory system adequately performs its ‘biologically designed’ purpose of circulating blood.
3. When Jim’s pacemaker is infected with malware such that the functional efficiency of his circulatory system drops, we should consider this a distinct disease or pathology from the underlying sinus node dysfunction.

I will defend these claims by addressing a series of objections, and then I will discuss the potential ramifications for health and digital policy in Sect. 12.4.


Objection  A malfunctioning digital pacemaker cannot itself be considered a constitutive part of a disease state because it is not a biological dysfunction.

Response  When we speak of a biological function, we should separate two senses of the term. The first sense refers to a function that biological creatures normally must perform in order to go about their lives, such as pumping blood, moving around, eating and digesting, thinking, seeing, etc. The second sense refers to a function that is being performed solely by organic parts—the pumping of blood is being performed by the heart as opposed to a heart-lung machine. I suggest that only the first of these senses should be relevant to diagnosing someone with a disease—in determining if Jim is healthy, we should care that his heart is beating at the appropriate rate, not that it is being regulated by a pacemaker. This can be intuitively understood in the case of less technologically sophisticated devices, such as an artificial hip. Consider the following example:

Barbara fractures her hip. She is in pain and cannot walk. By any common definition Barbara has a pathology. Barbara has surgery to fix her hip. The surgery entails replacing part of the hip socket and the upper portion of the femur with artificial components. Once Barbara recovers from surgery, her new hip performs at least as well as her old one, if not better.

While Barbara has a pathology when her hip is broken, once she has recovered from her hip surgery and regained her ability to walk, she should be considered disease-free. Lewens, for one, argues that artificial hips are indeed often thought of as cures (Lewens 2015, 180). However, this is only the case if we take into account her artificial hip when measuring her functional efficiency.
If we only consider her organic parts when evaluating her functional efficiency, she is in fact worse off than when she had a broken hip, as now she is also missing the top half of her femur and a sizeable portion of her hip socket. Measuring functionality in this way would be a ridiculous thing to do. For clinicians—and most everyone else—the determining factor as to whether Barbara is diseased is whether or not she can perform the biological function of walking, not the artificial or organic nature of her hip. In all clinically important senses, the artificial parts are now simply a part of her ambulatory system. If Barbara broke her artificial hip, I suspect that most people would simply say she broke her hip.

As with an artificial hip, we should count the contribution of Jim’s digital pacemaker when measuring the functional efficiency of his circulatory system, given that it is the functioning of this system that matters for Jim’s survivability and ability to reproduce, not the performance of each individual part. I admit that the two cases are not identical. For instance, one might argue that (1) the pacemaker is an addition to the circulatory system rather than a direct one-in-one-out replacement of a dysfunctional part, and (2) the original underlying part-dysfunction remains in the case of the pacemaker. However, if what one ultimately cares about is an organism’s ability to survive and reproduce—as is the case in Boorse’s
account—then these differences are immaterial.2 This argument is in some ways similar to Lewens’ argument for a pluralistic naturalism (Lewens 2015, 179).

In the case of Wakefield’s hybrid account, we should say Jim is disease-free for two reasons. The first reason is that even if the heart’s sinus node remains dysfunctional, Jim’s condition (having a pacemaker) should no longer be considered harmful, given that he can live an active life of normal length. And second, it is not exactly clear that “part” dysfunctions, per se, qualify as dysfunctions within Wakefield’s account. Wakefield argues that dysfunction “refers to failure of an internal mechanism to perform one of its naturally selected functions,” and he defines internal mechanism as “a general term to refer both to physical structures and organs as well as to mental structures and dispositions” (Wakefield 2007). If the mechanism in question is treated as the circulatory system as a whole, rather than simply the problematic sinus node, then there is no dysfunction when Jim’s pacemaker is working as designed.

The corollary to this is that when the pacemaker malfunctions, such that it meaningfully reduces the functionality of the circulatory system, we should think of this as a different dysfunction than the underlying sinus node dysfunction. This should be true regardless of whether the malfunction is caused by a broken component, poor network connectivity, malware, or a malicious hacker.

Objection  While one may count the contribution of an artificial hip when measuring one’s ability to walk, this is not the case for many other types of medical devices, such as glasses. If a person, David, uses glasses to correct his near-sightedness, he still has a disease. The glasses merely mitigate the symptoms of that disease. If David’s glasses break or are smudged, we do not think David has a new disease.
Response  Glasses are substantially different from the case of the artificial hip or the digital pacemaker because glasses are not as integrated into the visual system of the near-sighted individual. If one pictures a spectrum of integration, on one side you have devices like glasses, which I will call “tools,” and on the other side there are technologies like digital pacemakers, which once installed become a “part” of a given biological system. While there may not be a clear threshold between tools and parts, one can identify paradigmatic cases on either side. Paradigmatic ‘tools’ include glasses and crutches, while paradigmatic ‘parts’ include pacemakers, cochlear implants, and intraocular lenses used in cataract surgery. In between these poles would be devices such as wheelchairs and oxygen delivery systems.

One helpful set of criteria for determining which devices should be considered a part of a biological system and which should be thought of as tools has been developed by Andy Clark and David Chalmers as part of their work on the concept of the ‘extended mind.’ Below, I will outline their framework and then modify it for non-cognitive biological systems.

2  As Boorse’s goal is to describe how pathologists use the term disease or pathology, he could maintain that Jim has a disease despite the fact that in practice Jim is performing at a statistically normal level. In most cases, including the clinical context, we should accept that Jim is healthy.

254

M. Thornton

In brief, Clark and Chalmers' theory of the extended mind says that a person's mind need not be defined only by the mental activity which occurs inside their skull (Clark and Chalmers 2016). Instead, what makes something part of a mind is that it is performing a cognitive task, such as remembering, reasoning, or observing the world (Clark and Chalmers 1998). In arguing this, Clark and Chalmers implicitly distinguish between the two senses of biological function that I described previously.

Clark and Chalmers' classic example of an 'extended mind' involves a man named Otto and his notebook. Otto, who has Alzheimer's disease, and Inga, who does not, are both going to the Museum of Modern Art in New York. Upon deciding to go to the museum, Inga searches through her memory to recall where the museum is located. Otto, meanwhile, checks his notebook (which he always has with him) for the address. They both find the information and successfully make it to the museum. Clark and Chalmers argue that the two instances of address retrieval "are entirely analogous" (Clark and Chalmers 1998). Inga's memory is stored solely 'inside' her brain, while Otto's is distributed between his brain and his notebook. As Clark and Chalmers say, "The information in the notebook functions just like the information constituting an ordinary non-occurrent belief; it just happens that this information lies beyond the skin" (Clark and Chalmers 1998, 13).

Clark and Chalmers argue that the notebook and Otto's brain form a coupled system. They describe a coupled system as follows: "All the components of the system play an active causal role, and they jointly govern behavior in the same sort of way that cognition usually does. If we remove the external component, the system's behavioral competence will drop, just as it would if we removed part of its brain.
Our thesis is that this sort of coupled process counts equally well as a cognitive process, whether or not it is wholly in the head." (Clark and Chalmers 1998, 8–9)

Returning to my case, a device should be considered a part of a biological system if the system and device form a coupled system. For Clark and Chalmers, the key criteria for this coupling are that (1) the constituent parts are constantly available, (2) the information is easily and directly accessible, and (3) once received, the information is readily endorsed (Clark and Chalmers 1998, 18). Within Clark and Chalmers' framework, not all notebooks are part of minds (some are merely what I called tools), but Otto's is, because he constantly keeps it with him and accepts its contribution as if it came from his brain. While one can lose a notebook, or it can contain some inaccurate information, Clark and Chalmers argue that these limitations are not fundamentally different from those of the brain, which can be injured, contain faulty memories, and become temporarily inaccessible through inebriation or sleep.

For now, put to one side whether or not a notebook can be part of a mind. While I find the claim convincing, it is controversial. I would argue that the basic idea behind Clark and Chalmers' coupling criteria is less controversial and more intuitive when the artificial part is performing a less purely cognitive function. First, here is a modified version of the coupling criteria that is not specific to cognitive functions:

12  Malware as the Causal Basis of Disease

255

An artificial component is coupled to a biological system if:
1. It is contributing to the function of a biological system.
2. It is constantly available.
3. Its contribution to the functioning of the system is readily and directly provided.
4. The contribution is automatically endorsed/accepted by the system in question.

Applying these criteria to David and Barbara, one can say that David's glasses and vision system are not coupled, while Barbara's artificial hip and her ambulatory system are coupled. While David's glasses contribute to the functioning of his vision system, they are not always available (e.g. they can easily be lost or stolen) and their contribution is not always readily provided (e.g. smudged lenses, glare). Barbara's artificial hip, meanwhile, is always available,3 it directly and readily offers its contribution to her ambulatory system, and Barbara's ambulatory system automatically accepts that contribution. As a result, we should (and I would say generally do) think of Barbara's artificial hip as just another part of her body, but we do not and should not consider David's glasses a part of his body—despite their value, they remain a tool.

Other forms of vision intervention, meanwhile, would pass the criteria for being closely coupled to the biological vision system. For example, during cataract surgery the biological lens is removed and replaced with an artificial lens. As with Barbara's hip, the lens is always available, performs reliably, and its contribution is automatically endorsed. Intuitively we accept that the new artificial lens is a part of one's vision system, as almost no one who has had cataract surgery goes around talking about their bionic eye.4

Returning to the case of Jim and his digital pacemaker, using Clark and Chalmers' criteria for coupling, it seems we should consider Jim's pacemaker a part of his circulatory system—an "extended heart," in a manner of speaking.
The device is contributing to the circulatory system, it is always available, and its contribution is readily given and accepted. If we accept the pacemaker as essentially a part of Jim, then (1) as long as it is working properly we should consider Jim disease-free, and (2) if malware, bugs, hardware failure, or a malicious hacker affects the performance of the device, we should consider this condition as much a disease as his original sinus node dysfunction.

The fact that Jim's pacemaker is capable of transmitting data via digital networks does not change the fact that it is coupled to his circulatory system. Imagine Barbara's hip had sensors that sent her doctor data on her activity levels, or imagine that the cataract patient's artificial lens could measure glucose levels—a feature that was developed by Google and Novartis before ultimately being abandoned due to inconsistent results (Park et al. 2018). These additional features—assuming the base components are still readily available, reliable, and perform the tasks associated with walking and seeing respectively—should not alter our fundamental belief that the hip and lens are now simply part of a person's functional systems. However, this

3  While Barbara's artificial hip could break, it is (at least) as reliable as her non-artificial hip.
4  It is worth noting that these intuitions may not extend to how we think about the mind, but this may be because the workings of the mind are more mysterious than those of joints or the heart. Perhaps with greater clarity into the workings of the brain, our intuitions may change.


should also be true if the core functionalities of the device depend on digital networks (assuming the coupling criteria are still met). In these cases, we should think of the network and external computing resources as also part of the coupled system. While the network-enabled features of a digital pacemaker probably do not rise to this level, it is easy to imagine devices that would: for example, contact lenses with facial recognition capabilities that compensate for an individual's face blindness, or devices that interpret neurological signals for advanced prosthetics. In these cases, the cyberhealth of the network and of any external or shared computing resources would also be partly constitutive of one's health status.

There is certainly something a bit odd about the idea that a shared resource like a cloud server could be a part of multiple people's coupled systems. While we do not usually think of body parts as being shared, it is not without precedent. Conjoined twins can share a single liver, heart, pelvis, spine, part of the intestine, and occasionally even brain tissue (Mayo Clinic Staff 2018). Yet we recognize conjoined twins as separate people despite their shared resources. The case of cloud resources may also seem different because the parts are physically distant, whereas the pacemaker or hip are "internal." Again, this is not normally how we think of bodies working, but I do not think it should affect whether we treat the resources as part of a coupled system, as long as the contribution is reliable, always available, and readily accepted. One could surgically implant a device into a person to perform sophisticated feats of computing like facial recognition, but that just seems like a worse medical and technical solution than letting the server sit in a warehouse.

As an aside, while the digital pacemaker case might still feel a bit different from the hip, I chalk this up mostly to the terminology involved.
An artificial hip is called a hip, something each of us naturally has two of. In contrast, a 'pacemaker' sounds more like it belongs on a racetrack than inside a human body. If the pacemaker were instead called an artificial heart or an artificial sinus node, I think we would feel more comfortable accepting it as part of Jim.

Objection  While the clinician or philosopher may consider Jim to be disease-free when his pacemaker is working properly, Jim may still think of himself as having a pathology. He may even want the pacemaker removed or the network-connected features turned off despite the physical benefit. It should be the patient who decides whether or not the artificial device, or the shared computing resources it uses, are considered a part of their body.

Response  The objection above conflates two different questions. The first is whether one should consider artificial parts and their associated functions as part of biological systems (e.g. the circulatory system) for the purpose of disease diagnosis. The second is whether one should consider those parts and functions to be part of a person's body apart from the diagnosis of disease. In regard to the first question, an individual's feelings about whether or not the artificial parts should be considered a part of their body are irrelevant for determining whether the individual has a disease within both Boorse's and Wakefield's accounts. In both accounts of disease, if there is no dysfunction, then there is


no disease. While Jim may be experiencing harm in the form of mental distress at the idea of having a pacemaker, the mental distress is not related to a physical dysfunction—his circulatory system is both performing its evolutionarily designed function (relevant to Wakefield)5 and performing at typical efficiency (relevant to Boorse).

This leads us to the second question of whether or not we should consider artificial parts to be part of one's body apart from the context of disease diagnosis. While this question is largely beyond the scope of this particular paper, empirical research suggests that whether or not people do in fact consider these devices to be a part of their body is highly context dependent. While some children who depend on medical devices incorporate these devices into their self-presentation, others try to conceal the device and pass as 'normal' (Kirk 2010). In the case of prosthetics, the degree to which individuals think of the device as embedded, or as part of the bodily assemblage, depends on both the purpose of the prosthetic (e.g. functional replacement, aesthetic addition, rehabilitation) and external factors (e.g. appearance, capabilities, who controls the device) (Browne et al. 2018). One famous example is how Stephen Hawking came to accept his 'robotic' voice as part of his identity and refused to adopt more natural-sounding voice synthesizers (Martin 2014). Given these empirical findings, one could imagine that some networked features may be more easily incorporated into one's bodily identity if they use cloud computing resources rather than a cumbersome physical device one must constantly lug around.

One set of concepts which might be useful for thinking about artificial parts and bodily identity is Havi Carel's pair of bodily certainty and bodily doubt.
Carel defines bodily certainty as "the natural confidence in [one's] bodily abilities," while bodily doubt is a doubt in those abilities that can lead to "helplessness, alarm, and distrust in [one's] body" (Carel 2013, 184). Carel speaks of illness as one state which leads to bodily doubt, but one could imagine other (new) sources of doubt unique to networked devices, such as not living in an environment with adequate cyberhealth or relying on shared computing resources that one does not control.

In my conversations with doctors, I have found that the mere presence of artificial parts in one's body may lead to bodily doubt. This is the case even when the artificial part is beneficial to the overall physical health of the individual. For example, while an individual's capacity to survive and reproduce may not be harmed by leaving in place the rods that have been used to fix a leg fracture, the bodily doubt that accompanied the fracture may persist as long as the foreign objects remain. This mental distress should be taken seriously when determining whether or not the artificial parts should remain in place; however, it is a separate matter from whether or not we should consider the parts to be a part of a body for disease diagnosis.

While the philosophies of public health, medicine, and biology do not provide a single answer as to whether or not one should consider artificial parts and their

5  In the case of Wakefield, even if one argued there was still a dysfunction, this likely would not matter given that the condition of having a pacemaker is generally not thought of as being harmful.


network-enabled functions to be a part of one's body in non-diagnostic contexts, these philosophies do provide theoretical tools for thinking about the question which are generally absent from existing discussions of the digital landscape and cybersecurity. By incorporating concepts like bodily certainty and doubt, bodily integrity, and definitions of disease into how we think about ICTs, one can (1) craft technology policies and products that are sensitive to individuals' rights and (2) encourage those in the public health and medical fields to think more deeply about how traditional network threats like malware and network fragility can impact health.

12.4  Implications

In Sect. 12.3, I argued that in certain cases artificial parts should be considered a part of one's body and, by extension, certain cyberhealth issues should be considered diseases (or at least the causal basis of a disease). In this section, I will explore how this argument might suggest new approaches to designing, maintaining, and regulating technology products.

The first implication is that medical device manufacturers, who have largely been focused on the safety of their devices' core functionality, should be more thoughtful about how their networked digital devices interact with the broader Internet of Things. As Richard Clayton, Ross Anderson, and Éireann Leverett have argued, as networked devices become increasingly ubiquitous, "many regulators who previously thought only in terms of safety will have to start thinking of security as well" (Anderson et al. 2018). While this would be the case regardless of whether or not a device is considered a part of one's body and health status, the cost of failure is higher for the individual who has incorporated the device into their bodily identity and sense of bodily certainty. As such, devices which can be coupled to one's biological systems should be held to higher standards of robustness and resiliency than those which cannot. While one might argue that medical devices are already regulated to a greater degree than non-medical devices, this scrutiny does not generally extend to the cybersecurity of the devices. For example, States typically do not require third-party code review or penetration testing, leaving it up to the device manufacturer to determine the appropriate level of security (U.S. Food & Drug Administration n.d.).

Second, this argument suggests changes to how technology products are maintained.
Today, companies generally have complete autonomy to decide when they stop supporting a product; however, this may not be appropriate if the product in question is part of someone's body and health status. As an example, let us return to the idea of contact lenses that use cloud computing resources to treat people with face blindness. If the product is not as financially successful as expected, a company may want to shut down the product line and quickly phase out support for it. However, for individuals with face blindness, this particular product may have become an integral part of their health status and self-identity. For someone whose face blindness was effectively cured through the use of the product, shuttering the


product may be akin to a form of brain damage. As such, for certain types of products, policymakers may want to require companies to support products for a certain number of years, or to transfer maintenance of the product to another entity, rather than allowing companies to summarily drop support.

Categorizing certain cyberhealth issues as pathologies may also give one a stronger claim to have those issues addressed than if they were not considered pathologies, which may affect how companies prioritize new features and fix potential bugs or vulnerabilities. I say "may" because a right to treatment often depends on how one defines disease. Lewens has argued persuasively that naturalist theories of disease, like BST, are unable "to serve as the basis for views that hold the health/disease distinction to be an ethically salient one in itself" (Lewens 2015, 177). However, Lewens tempers this point by arguing that certain classes of disease, such as chronic pain and degenerative diseases, may be ethically salient categories, given the degree to which these diseases limit one's ability to function in the world and the fact that they clearly require medical treatment (Lewens 2015). Within Wakefield's account of disease, it is easier to argue for a right to treatment, given that diseases are by definition harmful dysfunctions. Given this, one may have a claim that a company should address a cyberhealth vulnerability that could lead to a pathology before developing new discretionary features. In the case of cyberhealth problems, it seems reasonable that one's right to treatment is stronger, as the disease or pathology is often not the result of bad luck, as in the case of, say, a genetic abnormality. In the case of a malfunctioning device, there will often be someone or some entity to blame for cutting corners in manufacturing or design, failing to account for security or environmental risks, or not updating a device's software.
Third, the argument in Sect. 12.3 suggests we rethink who should own and control the shared computing resources that may be necessary to run advanced biotechnologies. While today cloud computing resources are typically owned and controlled by a company (e.g. Amazon, Google, Microsoft), it is also possible for such resources to be owned and controlled by a group of individuals, such as a community of people with the same disease, a patient organization, members of the same family, or a group of friends. Perhaps if individuals incorporate these devices into their bodily identity, then there is a prima facie argument that they should have greater control over how the devices are managed, maintained, and improved. However, much more work is needed to draw any definitive conclusions.6

Fourth, if we are going to treat digital devices as part of someone's body, then threats like malware start to look a lot more like traditional health threats (e.g. malaria, the flu) than like matters of property destruction (e.g. someone breaking my laptop). If we think of cyber threats in this way, then improving the cyberhealth of the internet is akin to draining malarial swamps—the removal of a hazardous environment. As States are justified in draining (or have an obligation to drain) malarial

6  For example, Martha Nussbaum includes the ability to have bodily integrity on her list of Central Functional Human Capabilities. If cloud-computing hardware and software is incorporated into one's bodily identity, then the ability to achieve bodily integrity may require that one has some measure of control over these external devices.


swamps, it seems reasonable that they might also be justified in mitigating (or have an obligation to mitigate) malware and in ensuring a sufficient level of cyberhealth more generally (Klosko 1987). At least this might be the case once networked biotechnologies become more common. Additionally, from the perspective of public health policymakers, just as draining malarial swamps is a core part of public health policy, so too might ensuring a robust internet be.

12.5  Conclusion

In this paper, I argued that medical devices which are coupled to one's biological systems should be considered a part of one's body for the purpose of determining one's health status. While these devices must be nearly always available and their contribution readily accepted by the biological system in question, I argued that these coupled devices may include, and even rely upon, cloud services and digital networks. I then argued that when these coupled devices malfunction for any reason—malware, hacking, defects, poor network connectivity—the corresponding drop in functional efficiency should be considered as much a disease as the original condition which necessitated the device. Lastly, I offered some introductory thoughts on how this argument may change how technology products are developed, regulated, and owned. While this paper suggests more questions than it answers, it highlights the potential value of applying the philosophies of medicine, public health, and biology to digital threats which have thus far been viewed only through a cybersecurity lens.

Acknowledgements  I would like to thank Dr. Stephen John and Professor Tim Lewens for their insightful comments on earlier drafts of this paper, and the Leverhulme Centre for the Future of Intelligence.

Conflict of Interest Statement  At the time of publication, the author had begun working as a Product Lead at BIOS, a biotech company creating neural interfaces. This paper was written entirely before the author assumed that position, and the opinions expressed in it belong solely to the author.

References

Anderson, Ross, Richard Clayton, and Éireann Leverett. 2018. Standardisation and Certification of Safety, Security and Privacy in the 'Internet of Things'. Luxembourg: EU Joint Research Centre. https://publications.europa.eu/en/publication-detail/-/publication/80bb1618-16bb-11e8-9253-01aa75ed71a1/language-en.
Boorse, Christopher. 1975. On the Distinction Between Disease and Illness. Philosophy and Public Affairs 5 (1): 49–68.
———. 1997. A Rebuttal on Health. In What Is Disease? ed. James M. Humber and Robert F. Almeder, 3–134. Totowa: Humana Press.


Browne, Abbe, Shawn H.E. Harmon, Rory O'Connor, Sitat Popat, and Sarah Whatley. 2018. Body Extension and The Law: Medical Devices, Intellectual Property, Prosthetics and Marginalisation (Again). Law, Innovation and Technology 10 (2). https://doi.org/10.1080/17579961.2018.1526853.
Carel, Havi. 2013. Bodily Doubt. Journal of Consciousness Studies 20 (7–8): 178–197.
Clark, Andy, and David Chalmers. 1998. The Extended Mind. Analysis 58 (1): 7–19.
———. 2016. The Extended Mind. InterAction 8 (1): 48–64.
Cooper, Rachel. 2002. Disease. Studies in History and Philosophy of Biological and Biomedical Sciences 33 (2): 263–282.
Kirk, Susan. 2010. How Children and Young People Construct and Negotiate Living with Medical Technology. Social Science & Medicine 71 (10): 1796–1803.
Klosko, George. 1987. Presumptive Benefit, Fairness, and Political Obligation. Philosophy & Public Affairs 16 (3): 241–259.
Lewens, Tim. 2015. The Biological Foundations of Bioethics. Oxford: Oxford University Press.
Martin, Rachel. 2014. Stephen Hawking Gets a Voice Upgrade. NPR Weekend Edition Sunday, December 7, 2014. https://www.npr.org/2014/12/07/369108538/stephen-hawking-gets-a-voice-tech-upgrade?t=1549637874457.
Mayo Clinic Staff. 2017. Bradycardia. Mayoclinic.Org (blog). August 23, 2017. https://www.mayoclinic.org/diseases-conditions/bradycardia/symptoms-causes/syc-20355474.
———. 2018. Conjoined Twins. Mayoclinic.Org (blog). March 7, 2018. https://www.mayoclinic.org/diseases-conditions/conjoined-twins/symptoms-causes/syc-20353910.
Newman, Lily Hay. 2017. Medical Devices Are the Next Security Nightmare. Wired, March 2, 2017. https://www.wired.com/2017/03/medical-devices-next-security-nightmare.
Park, Jihun, Johee Kim, So-Yun Kim, Woon Hyung Cheong, Jiuk Jang, Young-Geun Park, Kyungmin Na, et al. 2018. Smart Contact Lenses with Integrations of Wireless Circuits, Glucose Sensors, and Displays. Science Advances 4 (1). https://doi.org/10.1126/sciadv.aap9841.
Samuels, Kate, Mark B.
McClellan, Mohit Kaushal, Kavita Patel, and Margaret Darling. 2015. Closing the Rural Health Connectivity Gap: How Broadband Funding Can Improve Care. USC-Brookings Schaeffer On Health Policy. The Brookings Institute. https://www.brookings.edu/blog/usc-brookings-schaeffer-on-health-policy/2015/04/01/closing-the-rural-health-connectivity-gap-how-broadband-funding-can-improve-care.
Thieme, Nick. 2018. After Hurricane Maria, Puerto Rico's Internet Problems Go from Bad to Worse. PBS, November 23, 2018. https://www.pbs.org/wgbh/nova/article/puerto-rico-hurricane-maria-internet/.
U.S. Food & Drug Administration. n.d. Cybersecurity. U.S. Food & Drug Administration (blog). Accessed 4 March 2019. https://www.fda.gov/medicaldevices/digitalhealth/ucm373213.htm.
Wakefield, Jerome C. 2007. The Concept of Mental Disorder: Diagnostic Implications of the Harmful Dysfunction Analysis. World Psychiatry 6 (3): 149–156.

Michael Thornton is a philosopher of digital information and an experienced product manager in the technology industry. He is currently the Product Lead at BIOS, a biotech company developing a full-stack neural interface platform to treat chronic health conditions. His academic research focuses on the intersection of the philosophy of technology and the philosophy of public health. Previously, he was a Director of Product Management at MasterCard and a Student Fellow at the Leverhulme Centre for the Future of Intelligence, University of Cambridge. Research Interests: Philosophy of Information, Public Health, Well-Being and the Self, Data Ethics and Policy, and History of Science. [email protected]

Index

A
Agencies, 2, 36, 114, 155, 163, 164, 170, 177, 191, 203, 213, 214
Algorithms, 15, 17, 36, 40, 41, 57, 59, 61–65, 70–75, 81, 123, 131, 147, 193, 196, 208, 213, 214
Analytics, 11, 18, 32, 103–109, 127, 152, 166–168, 171, 213
Anonymity, 145, 147, 148, 155, 159, 161, 163, 164, 170
Apple, 126, 148, 181, 196
Artificial intelligence (AI), 2, 3, 8, 12, 14, 17–19, 31–50, 115, 123, 124, 129, 134, 147, 181, 197–199, 208–215
Automated interventions, 3, 18, 20
Automation, 36, 46, 165
Autonomy, 8, 20, 31–50, 57, 68, 69, 72–75, 82, 83, 85–87, 91, 142, 208, 212, 258

D
Deliberations, 5, 16, 17, 103, 139–148, 157
Depression, 7, 18, 19, 122, 129, 157, 209, 210, 221–240, 250
Designs, 2, 4, 9, 11, 14–17, 19–22, 32, 33, 36–39, 42, 44–50, 65, 84, 115, 120, 139–148, 154, 156–159, 161–165, 168–171, 189, 195, 209, 211, 248, 249, 259
Digital artefacts, 56, 59–61, 64, 66, 67
Digital ethics, v, 82
Digital platforms, 120
Digital well-being, 1–23, 33, 56, 57, 62, 66, 68, 70, 75, 76, 82, 83, 86, 87, 97, 101–117, 119–135, 139–148, 171, 221–240
Diseases, 7, 12, 13, 18, 200, 247–260

B
Benevolent, 114, 167, 171, 202
Big data, 6, 10, 11, 19, 176–179, 181–183, 185, 187–191, 193, 196–203

E
Economics, 3, 4, 9–13, 36, 39, 65, 74, 111, 113, 175–203, 234
Education, 2–4, 10, 11, 18, 19, 32, 33, 36, 42, 46, 49, 123, 155, 160, 179, 183, 189, 191
Embodied artificial intelligence, 18, 208, 209
Emotions, 7, 8, 15, 16, 72, 84, 104, 106, 124, 140–148, 164, 166, 185, 189, 190, 193–195, 208
Ethics, 1–23, 32, 33, 36, 37, 45, 48, 50, 63, 82, 120, 161–163, 211

C
Clickbait, 40, 49
Commercialisation, 16, 55–76
Communications, 19, 40, 46, 49, 59, 141–146, 148, 160, 180, 196, 198
Competence, 7, 8, 33, 38, 39, 42, 43, 234, 254
Cybersecurity, 116, 247, 248, 258, 260

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020 C. Burr, L. Floridi (eds.), Ethics of Digital Well-Being, Philosophical Studies Series 140, https://doi.org/10.1007/978-3-030-50585-1


Experiences, 2, 7, 8, 10, 16, 18, 32, 33, 35, 37, 39, 40, 42–44, 46–49, 56, 66, 67, 96, 105, 114, 130, 131, 151–153, 155, 157, 159, 160, 162, 163, 212, 213, 225, 227, 232
Expressions, 5, 6, 16, 17, 56–68, 74, 75, 104, 105, 112, 116, 142–148, 193, 194, 200, 236

F
Facebook, 6, 36, 56, 59, 70, 73, 81, 83, 95, 115, 116, 119, 126, 127, 176, 180, 181, 185, 193, 196, 197, 200, 230
Filtering, 16, 55–76, 148
Frameworks, 4, 8, 10, 19, 20, 22, 31–50, 82, 93, 94, 103, 104, 122, 134, 209, 211, 222, 231, 240, 253, 254

G
Gamification, 124, 125, 128, 133, 134
Goffman, Erving, 56–59, 62, 63
Google, 50, 70, 71, 95, 119, 120, 126, 160, 162, 165, 171, 176, 181, 186, 187, 189, 193, 196, 255, 259
Governance, 13, 66, 140, 166, 209
Gratitude, 3, 5, 16, 17, 123, 133

H
Habits, 2, 122, 133, 181, 189, 211, 224, 225, 227
Happiness, 7, 8, 86, 97, 104, 105, 114, 121, 124, 127, 140, 176, 183, 185, 187, 193, 196, 225, 227
Happiness economics, 176, 183, 185, 187, 193, 196
Health, 3, 7, 10–13, 15, 18–20, 32, 33, 36, 42, 48, 105, 109, 111, 115, 116, 133, 152, 177, 180, 182, 183, 185, 189, 191, 193, 197, 198, 200, 208, 210, 211, 213–215, 229, 233, 234, 247–249, 251, 256–260
Healthcare, 2, 12–14, 17–19, 122, 212, 214
Hedonism, 8, 85, 103, 105, 114
Human-computer interaction (HCI), 33, 35, 37, 169

I
Identities, 16, 55–76, 124, 152, 155, 157, 159, 164, 165, 170, 179, 191, 193, 213, 214, 257–259

Ill-being, 18, 38, 221–240
Influences, 10, 33, 39, 40, 44, 46–49, 58, 64, 68, 70, 82, 84, 86–91, 94, 116, 177, 194, 230
Intelligent software agent (ISA), 46, 82, 84, 86, 93–96
Introspection, 7, 157, 191

J
Jurisprudence, 202

M
Machine learning, 2, 10, 11, 15, 36, 61, 176, 183, 187, 208
Malware, 18, 247–260
Manipulations, 16, 34, 37, 41, 66, 71, 73, 81–97, 178, 187, 196, 203, 212, 214, 235
Medicine, 4, 197, 236, 248, 257, 260
Meditations, 123, 124, 126, 130, 131, 133
Mental health, 9, 18, 19, 158, 182, 187, 208–215
Mill, J.S., 34, 65
Mindfulness, 121, 123, 126, 130, 131, 133
Moral philosophy, 141
Motivation, Engagement and Thriving in User Experience (METUX), 33, 42–48
Multidisciplinary, 1–23

N
Networks, 1, 4, 18, 63, 67, 185, 187, 193, 196, 221–240, 247, 248, 250, 253, 255, 256, 258, 260
NHS, 13, 14

O
Objective list theories, 5, 85, 86, 103, 105
Online targeting, 63

P
Parfit, D., 5, 20, 21, 85, 96, 103, 104
Participation, 140, 143, 155, 158, 159, 171, 191
Pathologies, 197, 248–253, 256, 259
Performances, 33, 56–64, 66, 67, 70–73, 75, 107, 159, 164, 252, 255

Personal reflections, 156, 167, 170
Persuasive technology, 85, 89, 93, 130
Philosophy of science, 231–232, 236
Political philosophy, 34, 112, 113
Predictions, 62, 63, 176, 181, 183, 185, 187, 190
Preference satisfaction, 10, 116
Privacy, 113, 115, 116, 129, 133, 134, 155, 160, 161, 163, 164, 176, 180, 197–203, 208, 211, 212, 214
Psychiatry, 208, 214, 235
Psychology, 3–5, 7–9, 12, 32, 33, 38, 39, 69, 70, 83, 114, 121, 123, 141, 208, 214, 232
Public health, 18, 198, 248, 257, 258, 260
Public policy, 11, 12

R
Rationality, 86, 97, 141
Rawls, J., 109, 114, 116
Recommender systems, 2, 17, 21, 23, 33, 40–42, 45, 46, 49, 70
Reflections, 15, 17, 20, 67, 109, 123, 141, 151–171, 211
Reflective writing, 151–153, 156–158, 161–163, 167, 170, 171
Relatedness, 8, 33, 38, 42
Relational autonomy, 68–75
Resilience, 8, 16, 155, 157, 208, 225
Responsible, 14, 37, 48, 86, 114, 115, 168, 179, 229, 237
Rights, 5, 7, 10, 13, 18, 20, 34, 40, 46, 49, 73, 74, 89, 93, 101, 108–110, 113–116, 130, 132, 134, 178, 193, 198–202, 223, 248, 258, 259
Rumination, 157, 158, 225–230, 239

S
Scanlon, T., 91, 96, 103, 104, 106, 108, 110–112
Screentime, 120, 122, 125–128, 132
Security, 113, 116, 189, 190, 211, 247, 258, 259
Self, 16, 17, 32
Self-care, 17, 119–135
Self-determination theory (SDT), 8, 32, 33, 38, 39, 42, 45, 68, 87
Self-governance, 34, 57, 69–73, 75
Sensors, 14, 19, 191, 192, 195, 255
Social media, 3, 6, 8, 9, 11, 16–18, 35, 55–76, 81, 82, 97, 121, 127, 139–148, 164, 181, 182, 185, 196, 199, 200
Socio-technical systems, 152, 155, 158, 161, 168, 171
Sustainable design, 22, 47

T
Text communication, 141–143, 145, 148
Twitter, 56, 126, 176, 177, 181, 183, 187, 189, 193, 195, 196

W
Well-being, 2–13, 15–17, 19–23, 34, 57, 65, 68, 69, 71, 74, 81–97, 102–117, 119, 121, 123, 127, 128, 130, 131, 140, 146, 148, 152, 154–158, 163, 168–171, 183, 189, 222, 224, 226, 229–232, 234, 236–238, 240

Y
YouTube, 22, 23, 33, 36, 40–42, 44–49, 56, 126, 148, 160, 193