The 2022 Yearbook of the Digital Governance Research Group

Table of Contents
About This Book
Contents
Chapter 1: Introduction
Chapter 2: How to Counter Moral Evil: Paideia and Nomos
References
Chapter 3: Smart Cities: Reviewing the Debate About Their Ethical Implications
1 Introduction
2 What Is Meant by a “Smart City”?
3 Re-dimensioning the Smart City Definition
4 Network Infrastructure
4.1 Control
4.2 Surveillance
4.3 Privacy & Security
5 Post-political Governance
5.1 Public and Private Decision-Making
5.2 Cities as Post-political Entities
6 Social Inclusion
6.1 Citizen Participation and Inclusion
6.2 Inequality and Discrimination
7 Sustainability
8 Conclusion
Appendix: Methodology
References
Chapter 4: The Intersections Between Artificial Intelligence, Intellectual Property, and the Sustainable Development Goals
1 Introduction
2 IP and SDGs
3 AI and SDGs
4 AI and IP
5 Discussing Potential Areas for Future Research
6 Conclusions
References
Chapter 5: Cyber Weapons and the Fifth Domain: Implications of Cyber Conflict on International Relations
1 The Cold War: A Recent Lesson
2 Applying These Lessons to the Fifth Domain
3 States Need a New Approach to Defence That Accounts for Both Cyber Weapons and Cyber Space?
References
Chapter 6: A Comparative Analysis of the Definitions of Autonomous Weapons
1 Introduction
2 Definitions of Autonomous Weapon Systems
2.1 Autonomy, Intervention, and Control
2.2 Adapting Capabilities
2.3 Purpose of Deployment
3 A Definition of AWS
3.1 Autonomous, Self-Learning, Weapons Systems
3.2 Human Control
4 Conclusion
References
Chapter 7: English School on Cyberspace: Examining the European Digital Sovereignty as an International Society and Standard of Civilization
1 Introduction
2 Research Methods
3 Conceptual Frameworks: International Society and Standard of Civilization
3.1 International Society
3.2 Standard of Civilization
4 Discussion
4.1 The Emergence of Digital Sovereignty
4.2 The European Digital Sovereignty: Cyberspace with European Values
4.3 The European Digital Sovereignty as a Cyber International Society
4.4 The European Digital Sovereignty as a Standard of Digital Civilization
5 Conclusion
References
Chapter 8: Strategic Autonomy for Europe: Strength at Home and Strong in the World, Illusion or Realism
1 Introduction
2 Strategic Autonomy and Sovereignty
3 Achieving Strategic Autonomy
4 Strengths and Weaknesses
5 Conclusion: Putting Europe on the Map, Internally and Externally
References
Chapter 9: Saving Human Lives and Rights: Recommendations for Protecting Human Rights When Adopting COVID-19 Vaccine Passports
1 Introduction: Saving Human Lives and Rights
2 The Importance and Influence of Human Rights and International Health Regulations
3 The Risk of Discrimination
4 Recommendations for Designing, Developing, and Deploying COVID-19 Vaccine Passports
5 Conclusion
References
Chapter 10: In Defense of Sociotechnical Pragmatism
1 Introduction
2 The Politics of Algorithms
2.1 The Sociolegal Landscape
2.2 Framing the Debate
2.3 Fairness and Its Discontents
2.4 The Pragmatic Turn
2.5 Sociotechnical Pragmatism
3 The Philosophy of Explanation
3.1 The Deductive-Nomological Model
3.2 Counterfactuals and Interventionism
3.3 Epistemological Pragmatism
3.4 Trust and Testing
4 Conclusion
References
Index


Digital Ethics Lab Yearbook

Francesca Mazzi, Editor

The 2022 Yearbook of the Digital Governance Research Group

Digital Ethics Lab Yearbook

Series Editors
Luciano Floridi, Oxford Internet Institute, Digital Ethics Lab, University of Oxford, Oxford, UK; The Alan Turing Institute, London, UK
Mariarosaria Taddeo, Oxford Internet Institute, Digital Ethics Lab, University of Oxford, Oxford, UK; The Alan Turing Institute, London, UK

The Digital Ethics Lab Yearbook is an annual publication covering the ethical challenges posed by digital innovation. It provides an overview of the research from the Digital Ethics Lab at the Oxford Internet Institute. Volumes in the series aim to identify the benefits and enhance the positive opportunities of digital innovation as a force for good, and avoid or mitigate its risks and shortcomings. The volumes build on Oxford’s world-leading expertise in conceptual design, horizon scanning, foresight analysis, and translational research on ethics, governance, and policy making.

Editor: Francesca Mazzi, Saïd Business School, University of Oxford, Oxford, UK

ISSN 2524-7719    ISSN 2524-7727 (electronic)
Digital Ethics Lab Yearbook
ISBN 978-3-031-28677-3    ISBN 978-3-031-28678-0 (eBook)
https://doi.org/10.1007/978-3-031-28678-0

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

About This Book

This annual edited volume presents an overview of cutting-edge research areas within digital ethics as defined by the Digital Ethics Lab of the University of Oxford. It identifies new challenges and opportunities of influence in setting the research agenda in the field. The 2022 edition of the Yearbook presents research on the following topics: autonomous weapons, cyber weapons, digital sovereignty, smart cities, artificial intelligence for the Sustainable Development Goals, vaccine passports, and sociotechnical pragmatism as an approach to technology. This book appeals to students, researchers, and professionals in the field.

Contents

1 Introduction (Francesca Mazzi)
2 How to Counter Moral Evil: Paideia and Nomos (Luciano Floridi)
3 Smart Cities: Reviewing the Debate About Their Ethical Implications (Marta Ziosi, Benjamin Hewitt, Prathm Juneja, Mariarosaria Taddeo, and Luciano Floridi)
4 The Intersections Between Artificial Intelligence, Intellectual Property, and the Sustainable Development Goals (Francesca Mazzi)
5 Cyber Weapons and the Fifth Domain: Implications of Cyber Conflict on International Relations (Joshua Jaffe)
6 A Comparative Analysis of the Definitions of Autonomous Weapons (Mariarosaria Taddeo and Alexander Blanchard)
7 English School on Cyberspace: Examining the European Digital Sovereignty as an International Society and Standard of Civilization (Abid A. Adonis)
8 Strategic Autonomy for Europe: Strength at Home and Strong in the World, Illusion or Realism (Paul Timmers)
9 Saving Human Lives and Rights: Recommendations for Protecting Human Rights When Adopting COVID-19 Vaccine Passports (Emmie Hine, Jessica Morley, Mariarosaria Taddeo, and Luciano Floridi)
10 In Defense of Sociotechnical Pragmatism (David Watson and Jakob Mökander)
Index

Chapter 1
Introduction

Francesca Mazzi

The year 2022 was characterised by a war that threatened an already precarious political and economic stability. While the world was recovering from the Covid-19 pandemic, which had already shown the importance of digital technologies in critical times, the geopolitical turbulence of 2022 highlighted, even more, the role of the digital in strategic infrastructure and defence. The contributions in this volume are all from members of the University of Oxford’s Digital Governance Research Group, whose work in 2022 focused on advancing the dialogue on various topics related to the governance of the digital. The chapters provide a snapshot of different discourses concerning the intersection of digital technologies with contemporary social, legal, and ethical challenges.

The Yearbook begins with a philosophical digression on the role of science and technology in countering moral evil, relevant at a moment when beliefs about human nature guide political actions, policies, and digital governance. In Chap. 2, the director of the Group, Floridi, underlines the difficulties in distinguishing between what counts as natural and moral evil. However, he argues that science and technology can transform natural evil into moral evil. He uses two main philosophical anthropologies to explain moral evil: Socrates’ thought, which sees ignorance as the cause of moral evil, and Hobbes’ theory, which roots it in wickedness. He suggests that a society that seeks to counter evil should rely on science and technology to transform natural evil into moral evil, and on education (Paideia) and regulations (Nomos) to minimise or even eliminate moral evil.

One of the fundamental objects of the war has been energy. The critical interdependence between geopolitics and energy supply became more evident this year, with sanctions, attacks, and threats that affected the civil population of many countries directly and indirectly involved in the conflict. At the same time, climate change does not stop because of a war: for example, the UK experienced its highest registered temperatures during the summer of 2022. Hence, some of the group’s contributions for this year focus on sustainability.

In Chap. 3, Ziosi, Hewitt, Juneja, Taddeo and Floridi elucidate the ethical implications of smart cities. They start from a host of definitions and labels attached to the concept of smart cities to identify four dimensions that are transversal and allow for a review of ethical concerns. These are network infrastructure, post-political governance, social inclusion, and sustainability. Each dimension presents a series of elements of existing ethical concern, identified by reviewing the relevant literature. They address the concerns of control, surveillance, and data privacy and ownership; the tensions between public and private decision-making and cities as post-political entities; citizen participation and inclusion, as well as inequality and discrimination; and the environment as an element to protect and as a strategic ingredient for the future.

In Chap. 4, Mazzi focuses on digital technologies and sustainable innovation. Considering that the Sustainable Development Goals (SDGs) represent the main hope for peace and prosperity in the near future, the author investigates how this area of research interacts with Intellectual Property (IP), as the system that incentivises innovation worldwide, and Artificial Intelligence (AI), as a significant advancement of the Fourth Revolution. The chapter aims to illustrate the intersections between these fields by analysing the literature and reporting the significant negative and positive influences that emerge and can be relevant from a policy perspective. The author highlights research lines that can advance understanding of how IP contributes to the SDGs by using and incentivising AI methods to inform IP offices, businesses, and policymakers.

The following two chapters focus on another topic that became of fundamental importance during 2022: the types of weapons the fourth industrial revolution brought about and the related importance of cyber security and resilience. In Chap. 5, Jaffe offers reflections on cyber weapons and cyberspace. He describes how technological revolutions shape economies, state politics, and international relations, often resulting in inter-state competition for advantage. Sometimes, technological advances result in new weapons that upend the strategic paradigms of the day. He also stresses that a revolutionary discovery in the natural world can open an entirely new theatre for competition and conflict. He argues that 2022 represents a moment when such a revolutionary discovery and a game-changing revolution in military affairs (RMA) occurred simultaneously, with the creation of paradigm-shifting cyber weapons and the ongoing competition for the so-called fifth domain of conflict, cyberspace. He draws lessons from the Cold War’s parallels with modern cyber challenges, painting a picture of the problem posed by cyber weapons and explaining the differing state approaches to incorporating cyber capabilities into their arsenals.

Taddeo and Blanchard elucidate the definition of autonomous weapons systems (AWS) in Chap. 6. They provide a comparative analysis of existing official definitions of AWS as provided by states and international organisations, like the ICRC and NATO. Their analysis shows how focusing on different aspects of AWS in providing definitions mirrors different approaches to addressing these weapon systems’ ethical and legal problems. They argue that such an unharmonised understanding of AWS stifles agreement around deployment conditions and regulations of their use and, indeed, around whether AWS are to be used at all. They offer a definition that provides a value-neutral ground to address the relevant ethical and legal problems, identifying four key aspects (autonomy, adapting capabilities of AWS, human control, and purpose of use) as the essential factors to define AWS.

The discourse on cyberspace and autonomous weapons also evidences the criticality of states’ dependency on solid digital infrastructure, and the desire for digital sovereignty. On this topic, Adonis, in Chap. 7, investigates the role of the English School’s conceptual toolbox in examining the case of European Digital Sovereignty. Using qualitative ideal-type and interpretive methods, he argues that the emergence of European Digital Sovereignty can be well captured through two conceptual frameworks of the English School: international society and standard of civilisation. He suggests that the European Union’s intention to construct and promote European Digital Sovereignty creates a new type of international society in cyberspace, which reflexively institutionalises new behaviours and practices inside and outside cyberspace. This brings a new idealisation of cyberspace by creating a new set of norms believed to be morally superior. The chapter shows how this normative consequence entails a new standard of civilisation in cyberspace that is characterised by an expansionist nature, and highlights the limitations of the English School’s conceptual toolbox.

Timmers also contributes to the debate on Europe’s digital sovereignty in Chap. 8. He questions the extent to which Europe can achieve strategic autonomy, i.e., the means to safeguard and strengthen sovereignty, inquiring about its strengths and what place in the world the EU can aspire to. The chapter identifies elements that can inform an answer, showing how such an answer would depend, firstly, on the kind of sovereignty aspired to and, secondly, on the feasibility of achieving the necessary EU strategic autonomy to provide for such sovereignty. The author takes the digital world as an example, given that it changes perceptions of sovereignty and, through its pervasiveness and disruptive nature, exposes the need for and the feasibility of strategic autonomy.

Such a conversation on the sovereignty of the digital space also invites reflections on the values underpinning decision-making processes about the ‘onlife’. To this end, in Chap. 9, Hine, Morley, Taddeo and Floridi provide a contribution on a significant example, namely digital Covid-19 vaccine passports. Two years after the beginning of the pandemic, governments and businesses faced the challenge of reopening society whilst still protecting public health. One of the options considered to facilitate such a balance concerned the use of digital ‘COVID-19 Vaccine Passports’, which aim to prove that an individual has had an approved COVID-19 vaccination (both doses where applicable). The opportunity of such passports raises ethical and legal questions, for example, concerning mandatory vaccination policies, the unequal global distribution of effective vaccines, and the effect of (and on) the digital divide. The authors discuss the ethical and human rights implications of COVID-19 vaccine passports based on a systematised literature review and documentary analysis. They provide concrete recommendations for supranational bodies, national governments, and businesses.

The Yearbook concludes with a piece that broadens the conversation on the ethical approach to technology, questioning the current debate between the two narratives of sociotechnical dogmatism and scepticism. In Chap. 10, Watson and Mökander take the Luddite Rebellion as an example of an early instance of a long-running tension between these forces: on the one side, those willing to promote greater automation (sociotechnical dogmatists, primarily if not exclusively for financial gain); on the other side, those who resist this impulse (sociotechnical sceptics, typically concerned about the potential injustices that will result). The authors analyse this purported dichotomy, showing how it is a persistent and instructive feature in the history of technological development, while arguing that it is ill-equipped to conceptualise the opportunities and challenges posed by fairness, accountability, and transparency in machine learning. They propose sociotechnical pragmatism, a constructive and coherent stance that allows researchers who seek to identify and mitigate the risks associated with emerging technologies to navigate between the Scylla of dogmatism and the Charybdis of scepticism.

Chapter 2
How to Counter Moral Evil: Paideia and Nomos

Luciano Floridi

Abstract  In this short article, I argue that (a) the distinction between what counts as natural and moral evil is not fixed; that (b) science and technology can transform natural evil into moral evil; that (c) two main philosophical anthropologies explain moral evil as due to ignorance (Socrates) or wickedness (Hobbes); and hence that (d) a society that seeks to counter evil should rely on science and technology to transform natural evil into moral evil and then on education (Paideia) and regulations (Nomos) to minimise or even eliminate moral evil.

Keywords  Hobbes · Moral evil · Natural evil · Nomos · Paideia · Socrates

Humanity has always wondered about good and evil. And especially about evil, seen as made up of suffering, fear, disappointment, humiliation, sorrow, offence, abuse, injustice, violence, atrocity, and anything else negative that life has in store for us. Evil plays a leading role in all cultures and civilisations, from the first cuneiform tablets, which speak of unpaid debts, to the Epic of Gilgamesh and the Odyssey. There is no Dante, Shakespeare, or Goethe without evil as a great actor in human affairs. Evil is a constant in history. It is also the object of study of ethics, which investigates its nature and causes, why it exists, and how it can be countered.

Philosophers agree on the nature of evil insofar as they distinguish two kinds: the nature-based and the human-made, called moral (Neiman, 2002). An example can clarify the difference. In December 2021, many tornadoes caused deaths and injuries in various states of the United States, especially Kentucky. Pain, suffering, fear, losses of all kinds ... these were all aspects of natural evil, something that even the legal system calls “an act of God”, for which nobody can be held responsible. Still in December, still in the United States, a student killed four people and injured seven others at a Michigan school. Equally devastating effects, but a very different cause, which in this case is entirely and exclusively moral because it is made up of human choices and responsibilities. It was an (evil) act of Humanity.

If you rely on similar examples, or consult an ethics textbook, the distinction between natural and moral evil seems clear and uncontroversial. But things quickly get complicated. Natural evil has always been a major headache for many religions, especially Christianity, which sees God as omnipotent, omniscient, and infinitely benevolent. If God can do anything, knows everything, and always wants the good, how do we square that with the sufferings in Kentucky? God’s will? Did people deserve it? Or could God do nothing about it? Whichever way you turn it around, it is a thorny problem that goes by the name, made famous by Leibniz, of theodicy (Leibniz, 1951): how to reconcile the existence of God (as described above) with the existence of natural evil. Leibniz thought that the theodicy problem could be solved by arguing that our world, as a whole, is the best of all possible worlds, despite all its limitations. A little bit like saying that things may not be great, but they cannot get any better than this. Think of a kind of ontological Pareto equilibrium. Voltaire thought Leibniz’s suggestion was a bad joke, and he famously mocked Leibniz and his philosophy in his classic satire Candide, or Optimism (Voltaire, 2013). The novella was published in 1759. In it, we find references to historical events, such as the Lisbon earthquake (1755), a natural evil that killed between 12,000 and 50,000 people, one of the worst outcomes in earthquake history; and the Seven Years’ War (1756–1763), a moral evil that caused between 900,000 and 1,400,000 deaths and is often considered the first global conflict in history. As Voltaire might have said: just imagine if this were not the best of all possible worlds.

The story seems to end here, but in reality, over time, another factor takes over. Could the suffering and losses in Kentucky have been prevented? Tornadoes today are unpredictable. Too sudden and chaotic, they generate too much data, and there is too little time to do the necessary calculations. Nevertheless, we can already run simulations, assign probabilities, and play the precautionary card. Most importantly, one day, we may have the data, the models, and the computing power necessary to predict them with sufficient accuracy and reliability. And then there are the buildings. We should build them tornado-proof, as we do with anti-seismic measures in earthquake-prone areas. In other words, as science and technology advance, natural evil does not remain fixed, but is translated more and more into moral evil. That is, if things end badly, it is no longer God’s fault, but Humanity’s alone. For example, Hegel died of malaria, like Dante. It was a natural evil at the time, but today dying of malaria is an entirely human responsibility. It has morphed into a moral evil. In 2020, there were 241 million cases of malaria worldwide and an estimated 627,000 deaths (https://www.who.int/news-room/fact-sheets/detail/malaria). Like them, the deaths caused by the Lisbon earthquake today would be a human crime, not something for which to doubt the existence of the God of Christianity. So, Leibniz’s idea could be updated in the following version: this is not yet the best of possible worlds, but we are getting there, and in the future, natural evil could be a memory, leaving only human intelligence, freedom, and responsibility to prevent, avoid, minimise, or eradicate evils in the world.

In the presence of moral evil, the theological solution is to excuse God and charge humanity with the mistaken use of its freedom. Evil would be an utterly immanent problem, a human problem. Perhaps this is the best of all possible worlds, after all, because it offers humanity the opportunity of removing any natural evil. Over time, on the ethical scale, the plate of natural evil is becoming lighter and that of moral evil heavier. Human responsibilities are increasing, not only for the many wrongs we cause – just think of climate change – but also because of the natural evils we can but do not prevent, minimise, or eliminate. Here too, science, technology, and, more generally, human intelligence make a huge difference, for better or for worse. If the student in Michigan had not had a gun, he would not have been able to kill and injure so many people in an instant. Mass shootings (defined as at least four people shot, plus the shooter) are so common in the United States that there is an entry for each year on Wikipedia. That of 2020 lists 703 people dead and 2,842 injured, for a total of 3,545 victims. Proof that human stupidity and responsibility are immense, because good legislation would be enough to eradicate an evil that is entirely and only moral. Everyone understands this, except some Americans.

This path of translating natural evil into moral evil seems like bad news, but it is not. Because as far as natural evil is concerned – think of the pandemic – there is little to do except transform it into a subsequent human responsibility, for example, in the production and distribution of vaccines to everyone. But as far as moral evil is concerned, one can work to eradicate it, for instance, by getting vaccinated. So, the first step is to transform natural evil into a moral one, from acts of God to human shortcomings. The next is to fight moral evil itself. To do so, one must understand it. Hence the crucial question: why are we evil? Or, as some ethicists would rather put it: why do we behave evilly? Ethics has done much work on this too, but in the end, there seem to be two prevalent interpretations of human nature that explain moral evil. Neither does us credit, but I believe that each usefully captures part of the story, as is often the case.

The first dates to Socrates (see, for example, Plato, 1996), but we also find it in the Stoics (Marcus Aurelius, 1998), Rousseau (2019), or Arendt (1994). We do evil not because we are immoral by nature, but because we do not understand what good is for ourselves and others. Vices, wickedness, and horrors of all kinds are the result of human stupidity, moral ignorance, or some other epistemic shortfalls. Then there is another tradition, attributable to Hobbes as its best-known supporter (Hobbes, 2017), but which also includes Kant (2009), for example. According to it, moral evil is the fruit of human intelligence at the service of human intrinsic immorality. Each of us pursues our selfish interests and goals as much as possible, and if we stop, it is only because the outcome no longer suits us. The shortfalls are moral, not epistemic. Famously, Kant made this point by saying that “out of the crooked timber of humanity no straight thing was ever made” (this echoes Ecclesiastes 1:15, “what is crooked cannot be made straight”, but is more pessimistic than Luke 3:5, “[…] and the crooked shall be made straight”).


In summary, and simplifying, moral evil is due to the fact that humanity is either good but stupid – let us call this the Socratic anthropology – or intelligent but evil – let us call this the Hobbesian anthropology. From these two philosophical anthropologies derive different ethical and political theories and practices, but above all, different answers to how moral evil can be at least limited, if not eliminated.

If we are good but stupid, then we must invest in our education: to make people understand more and better what is authentically good for themselves and others, for society and the environment. In this case, the Socratic solution to moral evil is called Paideia. Using a trivial example, warning messages on the packaging of cigarettes and other tobacco products concerning their harmful health effects are a typical case of a Socratic approach: more information should lead to better behaviour. These messages have been implemented since 1969. In 2011, a systematic report concluded that “prominent health warnings on the face of packages serve as a prominent source of health information for smokers and non-smokers, can increase health knowledge and perceptions of risk and can promote smoking cessation. The evidence also indicates that comprehensive warnings are effective among youth and may help to prevent smoking initiation. Pictorial health warnings that elicit strong emotional reactions are significantly more effective” (Hammond, 2011). It seems that the Socratic approach may have some merits.

However, if we are intelligent but evil, then one must motivate through incentives and disincentives, which rational and selfish agents will find more or less compelling. Even devils incarnate can be coaxed into doing the right thing if properly nudged. In this case, the solution to moral evil is called Nomos, the body of laws and rules that make things work as they should. From a Hobbesian perspective, that is where society must invest in terms of designing its preferred forms of civil cohabitation. Using the previous, trivial example, increasing the price of tobacco is a Hobbesian solution to motivate a rational choice and more virtuous behaviour. According to a recent study, it does have an impact, especially when you do not have much money and can still give up smoking: “taxation is an effective means of socially-enacted preventative medicine in deterring youth smoking” (Ding, 2003).

The history of civilisations oscillates between Paideia and Nomos, preferring one or the other depending on the context. But these are not two incompatible visions. Except for a few cases of pure holiness and utter wickedness, we are almost all a little bit good but stupid and a little bit evil but intelligent. For this reason, innovation and development must support both Paideia and Nomos, to make us Socratically intelligent and Hobbesianly good. The tricky bit is to reach an equilibrium that is also tolerant of individual preferences and choices (Floridi, 2015, 2016). Which is a somewhat philosophical way of saying that society can hope to improve only if it invests in science and technology, to eliminate natural evil or translate it into a moral one, and in education and rules, to reduce moral evil, and perhaps even eliminate it one day, to make any negative impact of an act of God a thing of the past.

Acknowledgements  I am very grateful to Emmie Hine and Mariarosaria Taddeo for their feedback on previous versions of this article.


References

Arendt, H. (1994). Eichmann in Jerusalem: A report on the banality of evil (Rev. and enl. ed.). Penguin Books.
Ding, A. (2003). Youth are more sensitive to price changes in cigarettes than adults. The Yale Journal of Biology and Medicine, 76(3), 115.
Floridi, L. (2015). Toleration and the design of norms. Science and Engineering Ethics, 21(5), 1095–1123.
Floridi, L. (2016). Tolerant paternalism: Pro-ethical design as a resolution of the dilemma of toleration. Science and Engineering Ethics, 22(6), 1669–1688.
Hammond, D. (2011). Health warning messages on tobacco products: A review. Tobacco Control, 20(5), 327–337.
Hobbes, T. (2017). Three-text edition of Thomas Hobbes’s political theory: The elements of law, De Cive and Leviathan. Cambridge University Press.
Kant, I. (2009). Kant’s idea for a universal history with a cosmopolitan aim: A critical guide. Cambridge University Press.
Leibniz, G. W. (1951). Theodicy: Essays on the goodness of God, the freedom of man, and the origin of evil. Routledge & Kegan Paul.
Marcus Aurelius. (1998). The meditations of Marcus Aurelius Antoninus. Oxford University Press.
Neiman, S. (2002). Evil in modern thought: An alternative history of philosophy. Princeton University Press.
Plato. (1996). Protagoras. Oxford University Press.
Rousseau, J.-J. (2019). The social contract and other later political writings (2nd ed.). Cambridge University Press.
Voltaire. (2013). Candide, or, Optimism. Penguin.

Chapter 3
Smart Cities: Reviewing the Debate About Their Ethical Implications

Marta Ziosi, Benjamin Hewitt, Prathm Juneja, Mariarosaria Taddeo, and Luciano Floridi

Abstract  This paper considers a host of definitions and labels attached to the concept of smart cities to identify four dimensions that ground a review of ethical concerns emerging from the current debate. These are: (1) network infrastructure, with the corresponding concerns of control, surveillance, and data privacy and ownership; (2) post-political governance, embodied in the tensions between public and private decision-making and cities as post-political entities; (3) social inclusion, expressed in the aspects of citizen participation and inclusion, and inequality and discrimination; and (4) sustainability, with a specific focus on the environment as an element to protect but also as a strategic element for the future. Given the persisting disagreements around the definition of a smart city, the article identifies in these four dimensions a more stable reference framework within which ethical concerns can be clustered and discussed. Identifying these dimensions makes possible a review of the ethical implications of smart cities that is transversal to their different types and resilient towards the unsettled debate over their definition.

Keywords  Artificial intelligence · Data privacy · Ethics · Smart cities · Surveillance

1 Introduction

Most of the world’s population lives in cities. Cities are the sites where most consumption and production occur and where most of the negative environmental externalities originate (Allam & Dhunny, 2019; Yun et al., 2016). In terms of numbers, around 55% of the world’s population resides in cities (Chen et al., 2020), with this figure reaching a peak of 85% in countries like Australia, the UK, and the Netherlands (Metaxiotis et al., 2010). This is why cities rather than nations have become the unit of interest of a substantial part of social, economic and sustainability policy (Praharaj et al., 2018; Yigitcanlar & Dur, 2013). This shift has given rise to the idea of using technological innovations to address major urban and societal challenges (Trencher, 2019).

Smart cities use technologies like AI and big data for various applications ranging from transportation, trash collection, street repairs, and administrative efficiency to surveillance and more (Kitchin, 2018; Sourbati & Behrendt, 2021). They can represent a solution to traditional cities’ problems (Csukás & Szabo, 2021; Hassan & Awad, 2018; Lam & Ma, 2019; Zou, 2019) as well as entirely new opportunities (Yigitcanlar et al., 2020b). While both stances are compatible with a rhetoric of techno-solutionism (Morozov, 2013), they entail different framings of a smart city. As solutions to traditional cities’ problems, researchers understand smart cities in terms of their potential to improve efficiency compared to traditional cities. Here, “smartness” can be understood in terms of efficiency gains, where new technologies’ value is defined by their capacity to address the shortcomings of existing approaches to traditional cities’ challenges. An example would be gathering and analysing traffic data to optimise transport in the city, reduce pollution, and avoid bottlenecks. At the same time, smart cities bring about entirely new opportunities (Yigitcanlar et al., 2020b). For example, the technologies involved in a smart city do not just make the trains run more efficiently; they also enable city officials to collect information about the train schedule and train passengers with techniques including facial recognition scans, gait recognition, body temperature, and more. As the example shows, and as we shall discuss in the following pages, this means that smart cities may also include technology and sensors that follow people into their private spaces. Thus, smart cities present both solutions to old problems and new opportunities for the present, and come with their own risks and challenges, which require ethical scrutiny.

The very idea of a smart city is controversial. Its prevalent conceptualisation merely in terms of technology and optimisation potential (e.g. Anand, 2021; Yigitcanlar et al., 2020a) might eclipse other relevant aspects. Thus, some authors urge acknowledging the complex character of urban life, instead of conceiving the city as an element to optimise (Green, 2020; Kourtit & Nijkamp, 2012). Inevitably, the definition of a smart city plays an important part in setting the stage for a review of the debate about smart cities’ ethical implications. Concurrently, given that the definition of what may count as a smart city is still contested (Albino et al., 2015; Praharaj & Han, 2019), any review that privileges only a specific conception of smart city would struggle to be sufficiently inclusive, if not universal, in the first place. To bypass the problem, this paper will first provide an overview of the various definitions and labels attached to the concept of the smart city, to then identify four emerging dimensions that are sufficiently common and invariant among the different interpretations. These dimensions will then be used to ground the review. They are the framework within which ethical concerns can be clustered and reviewed. As an analysis of the debate about the ethical implications of smart cities, this article does not aim to prescribe a specific ethical framework, but rather to identify and analyse existing ethical concerns in the smart cities literature. With this framework, we hope to allow future work on ethical concerns to translate across differing definitions of a smart city.

Following this approach, the article is structured in seven more sections. In Sects. 2 and 3, we argue that, even though the definition of smart city is disputed, four dimensions are transversal to multiple definitions (Albino et al., 2015; Yigitcanlar et al., 2020b). Then, in the following sections, for each dimension, we present a series of elements of existing ethical concern, identified by conducting a review of the relevant literature. In Sect. 4, we focus on network infrastructure, with the corresponding concerns of control, surveillance, and data privacy and ownership. In Sect. 5, we analyse post-political governance, embodied in the tensions between public and private decision-making and cities as post-political entities. In Sect. 6, we turn to social inclusion, expressed in citizen participation and inclusion, and inequality and discrimination. In Sect. 7, we discuss sustainability, focusing on the environment as an element to protect and as a strategic ingredient for the future. In the last section, we draw some general conclusions.

2 What Is Meant by a “Smart City”?

The term “smart city” may refer to technological additions to existing cities, or to entirely new cities built with “smartness” in mind. The first example of a smart city that comes to mind might be prototypical, either from a Silicon Valley or a Utopian framing (Gibbs et al., 2013; Hollands, 2008; March, 2016). “Smart city” may refer to citywide efforts to implement new Information and Communication Technologies (ICT) or transportation systems, as in New York or Los Angeles. Alternatively, smart cities may be brand new, entirely constructed cities, like Songdo International Business District in South Korea or the New Clark City (NCC) development project in the Philippines. The expression “smart city” might also refer to the development of a particular neighbourhood, such as Quayside in Toronto or Speirs Locks in Glasgow.

Analysing the host of labels and definitions revolving around the term “smart city” has normative relevance in itself, as it can shed light on the complex dynamics at play. To understand why and how, consider the following three points.


First, the terminological debate is likely to reveal the set of conflicting interests behind the term. The label “smart city” belongs to contemporary jargon around urban development and management. This specialised language is used by consultants and marketing professionals, among others, and it frames how cities are conceived and planned (Praharaj & Han, 2019). In the literature, this is expressed by the citizen-led, private-led, or city-led smart city jargon (Cohen & Cohen, 2015), as well as in the tension between technology-driven and human-driven conceptions of the smart city (Echebarria et al., 2020; Kummitha & Crutzen, 2017). The technology-driven (sometimes also labelled techno-centric or techno-optimistic) perspective is often found in smart city initiatives spearheaded by US tech companies like IBM (Batty et al., 2012; Kitchin, 2014). It focuses primarily on “hard infrastructure”, like ICT, and it usually comes with the assumption that technology has the answer to the old challenges that cities face (e.g. traffic). A more human-driven approach is reflected in several European cities, of which Barcelona is an example (Tieman, 2017). This approach focuses primarily on “soft infrastructure”, like human and social capital, e.g. education and knowledge (Caragliu et al., 2011; Martin et al., 2018; McFarlane & Söderström, 2017). Its perspective is that technology alone is insufficient to meet the challenges of cities, especially without essential lifestyle changes and public policies to preserve and restore urban ecosystems in danger. As these approaches suggest, each label stresses a different connotation, which may show or hide the agenda of different actors.

Second, the ambiguity and disagreement around the term “smart city” may also be evidence of a lack of sound theorising behind it (Praharaj & Han, 2019). In this respect, many labels indicate the historical process of evolution of the term and its trends. This is expressed by terms such as “digital city” (Yovanof & Hazapis, 2009), “tech city” (Foord, 2013), “wired city” (Batty et al., 2012), “ubiquitous city” (Anthopoulos & Fitsilis, 2010), “intelligent city”, “information city” (Sairamesh et al., 2004), “knowledge city” (Yigitcanlar et al., 2008) and “sustainable city” (Praharaj & Han, 2019). For example, some authors present the relationship between digital, intelligent, and smart in a historical key (Mora et al., 2017). “Digital city” originates from the internet wave at the beginning of the 2000s (Cocchia, 2014). And “intelligent city” comes from the meeting of the digital city with the idea of the knowledge society, and it refers to the possibility of using ICT towards human learning and technological innovation (Albino et al., 2015). As for what differentiates the “intelligent city” from the “smart city”, the latter stresses the importance of the institutional and social apparatuses needed to support policies aimed at forming integrated solutions for different types of city challenges (Ojo et al., 2016). Referring to the term “intelligent”, other authors suggest that a smart city is more user-friendly and accessible than an intelligent city (Albino et al., 2015; Nam & Pardo, 2011). Zheng et al. (2020) conceive of an intelligent city as the first generation in the wave of urban innovation and consider smart cities the second generation, because of the higher level of participation from urban authorities in the deployment of smart technologies.

Finally, the fact that there is not one single, agreed-upon definition of smart city might be because the concept reflects different perspectives depending on where one is in the world (Praharaj & Han, 2019). Not only does the term take on different meanings for different people or at different times, but it also means different things in different places, changing according to the resources available for innovation, the readiness for change, and the aspirations and expectations of citizens (Praharaj & Han, 2019). We have already seen that even North America and Europe differ in their conception of smart cities. The former tends to adopt a technology-driven perspective influenced by the presence of ICT companies such as IBM and Cisco. The latter reflects a leaning towards a low-carbon economy, expressed in the aspirations of the European Union (Mora et al., 2019).

The history and the geography behind the different labelling approaches just discussed bring to light otherwise hidden tensions and divergences, and so play an informative role in a normative review of smart cities. However, an excessive focus on the concept rather than on the components of a smart city may take attention away from, for example, the potentially detrimental effects of ICTs on the city environment (Caragliu et al., 2011; Lam & Ma, 2019). Additionally, and importantly for this article, the elusive dynamics of labels and concepts might undermine, rather than ground, any evaluation that aims to focus on the overarching, ethical aspects of smart cities, and thus start from an agreed definition. For this reason, the next step is to offer a sufficiently stable understanding of smart cities and their constant features, so that an ethical review of the concerns that smart cities may raise becomes reasonably feasible. This is the task of the next section.

3 Re-dimensioning the Smart City Definition

Even though different conceptions of a smart city mirror different interests, historical trends, and places, some authors have tried to identify a set of dimensions that hold constant across them. In this section, we identify these dimensions in the literature on conceptual frameworks for smart cities, as they provide an analytical clarification of some definitional ambiguities.

A widely adopted conceptual framework (Zheng et al., 2020) maps the concept of the smart city onto the six dimensions of smart economy, smart governance, smart living, smart people, smart environment, and smart mobility (European Parliament, 2014; Giffinger et al., 2007). While this framework is sufficiently broad to cover a variety of smart-city projects (Cocchia, 2014), its “smart” labelling might encounter the same problems against which we have warned in the previous section, for it leaves open what makes a city, as much as an economy or a kind of governance, “smart” in the first place. In an attempt to circumvent such circularity and the ambiguity concerning the smart city label, Yigitcanlar et al. (2018) created a “global” conceptual framework to examine smart city practices across the world. By reviewing 78 definitions of smart city, they identified (a) economy, in terms of productivity and innovation; (b) society, in terms of liveability and wellbeing; (c) governance, in terms of governance and planning; and (d) environment, in terms of sustainability and accessibility, as the main smart city development dimensions.

Although these dimensions are sufficiently general, they still leave out a crucial aspect of smart cities: technology. Several authors warn against characterising smart cities merely in relation to technology (Glasmeier & Christopherson, 2015). However, its role as an essential component in defining a city as “smart” cannot be denied nor omitted. In this respect, Caragliu et al. (2011) identified aspects common across smart cities by devising a framework meant to unpack what makes a city “smart”. These are “(1) the use of networked infrastructure to improve economic and political efficiency and enable social, cultural and urban development, (2) an underlying emphasis on business-led urban development, (3) a strong focus on the aim of achieving the social inclusion of various urban residents in public services, (4) a stress on the crucial role of high-tech and creative industries in long-run urban growth; (5) a profound attention to the role of social and relational capital in urban development, and (6) social and environmental sustainability as a major strategic component for smart cities” (Caragliu et al., 2011, p. 67). On a similar note, by reviewing more than a dozen definitions of smart cities, Albino et al. (2015) isolated four prevailing aspects that enable one to identify a city as “smart”. These are “(1) the presence of a city’s networked infrastructure that enables political efficiency and social and cultural development, (2) an emphasis on business-led urban development and creative activities for the promotion of urban growth, (3) social inclusion of various urban residents and social capital in urban development, and (4) the natural environment as a strategic component for the future” (Albino et al., 2015, p. 13). Both approaches unpack, rather than rely on, the term “smart”. Additionally, they are general and yet specifically tailored around the characteristics of a smart, rather than any other traditional, city.

Taking inspiration from the above studies, and by merging the last two frameworks, in this article we identify (a) network infrastructure, (b) post-political governance, (c) social inclusion, and (d) sustainability as the four dimensions of the conceptual framework that best accommodates the review of a series of ethical concerns identified by a systematic search (Grant & Booth, 2009) of the relevant literature on smart cities. “Relevant” here qualifies articles that appeared in a systematic search across four databases (Google Scholar, PhilPapers, Scopus, and Web of Science) for the main terms of (“smart city” AND ethics), iteratively complemented by a theme- or topic-specific search for each of the four identified dimensions of our framework. The search process for each dimension was guided by theme-specific words, such as “connectivity” and “transportation” for (a) Network Infrastructure, or “environment” for (d) Sustainability (see the full Methodology in the Appendix, and the sketch at the end of this section).

As indicated in the introduction, smart cities can represent the solution to traditional cities’ problems (Csukás & Szabo, 2021) and be catalysts of new opportunities (Yigitcanlar et al., 2020b). In this respect, a review of the ethical aspects of smart cities should consider that some of the challenges it identifies might simply be longstanding issues relating to urban development, rather than novel problems. Indeed, some common concerns about smart cities, from increased surveillance to accessibility of services, may be the same issues affecting traditional cities. At the same time, such a review should identify new ethical concerns, which may not be found in a more traditional conception of the city, but rather arise from affordances unique to a smart city. In order to draw this distinction clearly, the rest of the article will present each dimension in relation to its present role in a smart city and its past in a more traditional city context. This strategy will help to structure the review along a narrative that sheds light on the ethical concerns raised by smart cities as solutions to traditional cities’ problems, and on ethical concerns related to the new opportunities unique to smart cities.
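As a minimal illustration of the search protocol just described, the snippet below assembles one boolean query per dimension. Only the base terms (“smart city” AND ethics) and the theme words “connectivity”, “transportation”, and “environment” come from the text above; the remaining theme words are hypothetical placeholders, since the actual keyword lists are given in the Appendix.

```python
# Sketch of the dimension-specific query construction described above.
# Theme words marked "hypothetical" are placeholders, not the chapter's
# actual methodology (see the Appendix for that).

BASE_QUERY = '"smart city" AND ethics'

THEME_WORDS = {
    "network_infrastructure": ["connectivity", "transportation"],  # named in the text
    "post_political_governance": ["governance", "privatisation"],  # hypothetical
    "social_inclusion": ["participation", "discrimination"],       # hypothetical
    "sustainability": ["environment"],                              # named in the text
}

def build_query(dimension: str) -> str:
    """Combine the base search terms with a dimension's theme-specific words."""
    themes = " OR ".join(THEME_WORDS[dimension])
    return f"({BASE_QUERY}) AND ({themes})"

for dim in THEME_WORDS:
    print(build_query(dim))
# e.g. ("smart city" AND ethics) AND (connectivity OR transportation)
```

The same query strings would then be adapted to the syntax of each of the four databases, which is where the iterative, hand-tuned part of the search comes in.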

4 Network Infrastructure

Some argue that the smart city wave we are seeing now is only the most recent development of a longstanding trend. Pointing to bureaucratic modernisations and new knowledge technologies of the nineteenth century, Robertson and Travaglia (2015) argue that we are in the midst of a second big data revolution, one that differs in the volume and velocity of data but presents many of the same challenges. While one can admit that techniques such as data collection often target now, as then, groups marked as “moral outsiders” (Robertson & Travaglia, 2015), one should also note that the data collection involved in contemporary smart cities occurs with unprecedented granularity and seamless efficiency, affecting citizens’ lives and relationships with government and the city in profound and rather unprecedented ways (Yigitcanlar et al., 2020a). We find ourselves in an information age, generating and relying on data like never before. It is a new stage in human evolution (hyperhistory), wherein information and communication technologies record, transmit and process data, with human societies crucially relying on ICTs and on information as an essential resource [reference anonymised]. In this new stage, concerns around a city’s infrastructure are not solely about urban planning; they extend to the whole network of technologies that pervade the smart city. These technologies may include, among others, big data analytics, cloud computing, the IoT, blockchain, robotics, 3D printing, 5G and Artificial Intelligence (AI) (Yigitcanlar et al., 2020b).

Here, we identify the ethical concerns relating to this “networked” infrastructure as issues of control, surveillance, data privacy and security. As we will show, these points are highly interconnected. For example, a simple outage suffered by a private company such as Facebook in October 2021 (Talmazan, 2021) can lead to a loss of control over government services (it caused a disruption of healthcare, education, and other government services in cities across the globe), to the potential loss of essential and private data, and to an increase in surveillance methods following the incident in the name of improved security.


4.1 Control

The centralisation of data in smart cities gives considerable power to those who control it. There are different kinds of control in play here. One is the control of architecture, over what can be done physically within the boundaries of a space. This kind of control in cities is not new and falls under the category of urban planning typically discussed in traditional cities. The other kind of control is that of data and knowledge. This refers to a network rather than an urban infrastructure. As cities become “smarter”, increasingly connected with sensors and reliant on algorithms being fed large quantities of real-time data, the power centred in the administration of city services moves from the mayor’s office and city council chambers to the control rooms, from officials who are responsive to democratic will to those processing the data.

Control rooms in cities are not new. However, as Kitchin (2015) notes, they are becoming more consolidated and streamlined. Early control rooms were siloed and dealt with monitoring and managing closed systems like an electricity grid. Now, control rooms are not only broader in their remit, but they are also increasingly automated, sometimes with humans-in-the-loop who can actively intervene, enacting what Dodge and Kitchin call “automated management” (Dodge & Kitchin, 2007). As an example, Greenfield (2013) and Kitchin (2015) point to the Intelligent Operations Centre in Rio de Janeiro. Built by IBM, this $14 million facility brings together in one place real-time data from thirty different agencies, including data from traffic cameras, social media posts, weather stations and police patrols.

Although smart city technologies can increase the control of the government over people, they can also shift that control to private entities. Fisher (2020) offers the notable example of Waze, a navigational app offered directly to consumers, unlike most smart city technologies, although it functions in much the same way as many smart city projects. Waze collects real-time data from millions of drivers’ devices to deliver a personalised service, in this case directions to best navigate traffic. In doing so, it redirects traffic through side streets and residential neighbourhoods, causing tension among residents over the management of traffic, traditionally in the public sector’s control.

The degree of control afforded to officials in smart cities exceeds that of any previous era. Those in, or otherwise responsible for, control rooms can monitor and affect city activities and systems with extraordinary detail and pervasiveness. Such power can be used well, but also misused or even underused (opportunity costs and ethically wrong omissions), and in each case there is a pertinent moral dimension to be considered and addressed, possibly leading to regulations.


4.2 Surveillance

A major ethical implication of smart cities concerns the surveillance of their citizens. Surveillance as a consideration for urban planning is not new. Georges-Eugène Haussmann, the architect of the great boulevards of Paris, acknowledged the military value of broader and straighter streets for dispersing riots quickly, allegedly using this justification to obtain more funding for his projects (Andrews, 2017). A similar issue seems to be reappearing in the restructuring of Cairo (Lewis & Ebrahim, 2020). However, this is an area where the deployment of ICTs around the city departs from the past, especially since the onset of COVID-19. For instance, it is estimated that there are around 691,000 CCTV cameras in London (CCTV.co.uk, 2020). Israel is using a facial recognition surveillance system that can detect individuals through face masks (Halon, 2020). Even before the novel Coronavirus, the smartphones that many people carry in their pockets relayed detailed location-tracking information (Thompson & Warzel, 2019). Yigitcanlar et al. (2020b) report that state-of-the-art AI surveillance technologies can be applied to monitor communication networks and to recognise threats, from accidents and fire to crime and fraud. These technologies include predictive analytics, drones, motion detection and other autonomous devices. All this can help cities improve their services and their economic and security status (Allam, 2019). In this respect, surveillance is often closely associated with the optimisation of services (e.g. urban services) and, more often, with an increase in security and prevention. However, it can also, and very easily, be used to control and influence citizens’ behaviour with extraordinary detail and pervasiveness. For example, “smart streetlight” cameras in San Diego were initially introduced to help city officials study traffic patterns, but were later regularly used by police officers to investigate purported crimes (Holder, 2020). In cases like this, smart cities may run the risk of becoming a tool or even a catalyst for unwarranted surveillance, as well as exacerbating existing inequities in policing systems in the name of increased security. Additionally, some smart cities may install surveillance tools specifically for policing, raising additional ethical questions. In Chicago, ShotSpotter gunshot-detection boxes placed on streetlights around the city are meant to use AI to detect the sound of gunshots, in order to prevent crimes from going unreported and to speed up responses. Work has since shown that ShotSpotter is unlikely to have a significant impact on firearm-related homicides or arrests (Doucette et al., 2021), while there have been cases of people wrongfully jailed because of the technology (Burke et al., 2021). Other surveillance-related smart city technologies, such as “predictive policing” programs that seek to help optimise routes for police officers, may suffer from data biases, sending police to areas with high crime rates simply because they have historically been policed more often (Ferguson, 2016).
Peculiar to smart cities is the fact that people themselves also participate in their own surveillance, including through wearable devices. Clarke and Steele (2011) argue that personal fitness tracking devices and data can be used in smart cities to inform public health and population health data, urban planning and environmental monitoring, fitness trends and social network analysis, and the personalisation of health information. Other scholars have found that such self-quantification has ambivalent or even conflicting effects, being empowering, disempowering, and overpowering (Mau, 2019). The emergence of self-quantifying devices has led to what De Moya and Pallud (2020) call the heautopticon, a panopticon applied to oneself by oneself. Additionally, Manokha argues that employers are increasingly turning to surveillance measures of this kind to control their workers and increase productivity (Manokha, 2019). Regardless of the merits of Manokha’s specific claim, it is hard to dispute that cities and employers are both, and sometimes in tandem, increasing surveillance measures in the name of efficiency. This raises ethical concerns over individual autonomy as well as privacy.
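The feedback loop behind biased predictive policing, mentioned above, can be made concrete with a toy simulation. The sketch below is purely illustrative: the allocation rule, the two districts, and all parameters are our own assumptions, not a model of any deployed system. It shows how allocating patrols in proportion to recorded incidents, when incidents are only recorded where patrols are present, widens an initial gap between districts with identical underlying crime rates.

```python
import random

random.seed(0)

# Two districts with the SAME assumed underlying crime rate, but district A
# starts with more recorded incidents because it was historically policed more.
true_rate = 0.3                    # assumed, identical for both districts
recorded = {"A": 60, "B": 20}      # assumed historical records (biased)
patrols = 10                       # patrols available per day

for day in range(30):
    total = sum(recorded.values())
    # Allocation rule (assumed): patrols proportional to recorded incidents.
    alloc = {d: round(patrols * n / total) for d, n in recorded.items()}
    # Incidents are only recorded where a patrol is present to observe them,
    # so more patrols mean more recorded incidents at the same true rate.
    for d in recorded:
        recorded[d] += sum(random.random() < true_rate for _ in range(alloc[d]))

print(recorded)  # district A's head start has grown, despite equal true rates
```

Even in this minimal setting, the historically over-policed district keeps attracting the majority of patrols, illustrating Ferguson’s (2016) point that such systems can reproduce past policing patterns rather than underlying crime.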

4.3 Privacy & Security

In line with what has been presented above, new technologies enable multiple stakeholders and government bodies to collect real-time data, analyse it, and act quickly in response. This may promote greater security and privacy protection than in traditional cities (Allam, 2019). At the same time, the pervasive deployment of ICTs makes cities vulnerable to data security problems, such as data breaches or cyber-attacks (Lam & Ma, 2019). Additionally, their pervasive process of data collection presents a challenge to data privacy (Pavlou, 2011; Price et al., 2005).
On the one hand, a smart city can present privacy and security as improvements over a traditional city’s problems. In terms of privacy, van Zoonen (2016) notes that a great portion of the data used in smart cities is impersonal data, often used to improve a city’s services; this highlights an arguably more beneficial use of data than surveillance, and the use of impersonal over personal data. In terms of security, some authors speak of a “safe city” (Allam, 2019) when referring to security in relation to smart cities. In particular, Lacinák and Ristvej (2017) report that the concept of “safe city” generally refers to increased security in cities in terms of tackling urban conflicts and crimes, as well as to violence prevention in the context of urban tensions such as forced evictions, land conflicts and scarce urban resources. Additionally, Edwards (2016) argues that both privacy and security are important to smart cities as a prerequisite for keeping the trust and engagement of smart city residents.
On the other hand, authors like Hassan and Awad (2018) and Lam and Ma (2019) report concerns around privacy and security that derive specifically from smart cities’ use of new technologies and their increased level of connectivity. In terms of privacy, Ziegeldorf et al. (2013) identify seven privacy threats specific to IoT use in smart cities: “identification, localization & tracking, profiling, privacy-violating interaction and presentation, lifecycle transitions, inventory attack and finally, linkage” (Ziegeldorf et al., 2013, p. 2734). User profiling is considered a major threat among them. Additionally, Caron et al. (2016) highlight how the use of increasingly complex technologies in a smart city allows a great amount of data about citizens to be collected. This often happens without citizens being asked for consent or given an explanation of why the data is collected and how it will be used. In terms of security, Yigitcanlar et al. (2019) call attention to how the reliance on cyberinfrastructure, often considered the core fabric of smart cities, makes them vulnerable to cyberattacks. Data centres might be hacked, and data can be stolen or intercepted in transit (Mohamed et al., 2020). Besides cyberattacks, Lam and Ma (2019) stress errors in design, the complexity of large and interdependent systems involving multiple stakeholders, and weak encryption as other major causes of security breaches. The effects of security breaches can be highly damaging both for the city and for individual citizens. Examples range from loss of control over, or the outright failure of, a city’s systems and the unavailability of essential services (McClure et al., 2001), to breaches of the confidentiality of citizens’ data (Ferraz & Ferraz, 2014), to economic losses (Mok, 2014; Yadron, 2016). In 2021, an Amazon Web Services outage caused significant disruptions to internet traffic, including an hour-long outage of the UK government’s “gov.uk” website (Hern, 2021). Given the often cloud-focused implementation of ICTs in smart cities, it is not hard to imagine vital government services going offline for significant periods of time. For example, the October 2021 Facebook outage may have disrupted healthcare, education, and other government services in cities across the globe (Talmazan, 2021). While security and privacy are a prerequisite for citizens’ trust and engagement, their failure and abuse can erode public trust and threaten democracy. Techniques such as the security and privacy enhancement (SPE) framework can support potential mitigation strategies (Krupp et al., 2017). Nevertheless, these are technical fixes whose social implications and impacts have not yet been fully explored (Hassan & Awad, 2018).

5 Post-political Governance

The “post-political” can be understood as a reliance on market mechanisms and privatisation with the added backing of technology to appear objective (Beveridge & Koch, 2017). The appeal of easy and efficient solutions that appear objective raises the question of whether smart cities may represent a new model of governance, called post-politics. In this section, we analyse how the transforming effects of smart cities challenge the traditional roles of, and boundaries between, public and private decision-making, and the conception of the city itself, seen as a post-political entity.


5.1 Public and Private Decision-Making

The previous paragraphs mostly depicted the dangers of excessive government power and control, but smart cities are also places that reveal government dependency on private actors. For example, when cities contract with private entities to transform aspects of their public services, or when national governments do so to build new smart cities from scratch, they cede some degree of decision-making power to the group designing the digital solutions. This shift in decision-making power challenges the traditional conception of governance as a government monopoly. In this respect, Meijer and Bolívar (2016) present four configurations of smart city governance: governance of a smart city tout court, smart decision-making, smart stewardship, and smart urban collaboration. These configurations represent four theoretical perspectives on the role that governance can play in a smart city, and they differ, in turn, in the degree of transformation they envision as necessary for making a city “smarter”. The most conservative conceptualisations describe the preservation of existing institutional arrangements in the creation of smart cities. More radical conceptualisations, instead, suggest that governance itself should be transformed for a city to achieve “smart” status. In this respect, the fourth conceptualisation is the most transformative, as it envisions smart governance in terms of intelligent collaboration among the multiple actors in the city (Echebarria et al., 2020).
These different configurations of governance may raise questions about the legitimacy of government bodies, as they employ predictive algorithms and data-processing software that they did not produce and may not fully understand. These tools may also present problems for government transparency: while many governments allow residents to send public records requests to view government information (e.g. Freedom of Information requests in the U.S. and U.K.), the decision-making processes of black-box predictive algorithms are often uninterpretable even to their developers. Guidelines and internal documentation can explain decisions made by humans, but the use of non-transparent technology in smart cities may complicate public oversight. When private companies develop these technologies for public use, transparency becomes even more problematic. Calo and Citron (2020) write about the use of automation in US federal agencies, detailing the trend of agencies automating the powers specially delegated to them on the basis of expertise and discretion, thereby creating a crisis of legitimacy. They note that nearly half of all federal agencies are using or looking into using AI. At the same time, authors like Yigitcanlar et al. (2020a) stress the need for governments and municipalities to assess their digital infrastructure first. On that matter, the COVID-19 pandemic shed light on governments’ technological inefficiencies, as in the case of the Queensland state government in Australia, which set out to offer education online only to see its infrastructure fail under excessive web traffic (Yigitcanlar et al., 2020a). Additionally, technology professionals claim that just 15–20% of large public sector technology projects are successful, partly because of poor planning and procurement, and partly because of mid-project changes in scope (Susskind, 2019).
Government’s lack of readiness to harness innovation provides opportunities for private companies to take over and reshape the rules of previously public services. Platforms like Airbnb and Uber are cases in point. Their increased use impacts urban settings (e.g. changing traffic flows) and the balance of responsibilities between the public and the private sectors (see, e.g., the Uber Files for examples of Uber’s role in lobbying governments and defying the law (Davies et al., 2022)). Private interests have always shaped urban housing and transportation to some extent, but the reach and tactics of these companies constitute a shift in kind (Davies et al., 2022). Consider, for example, the reports of alleged attempts by Uber senior executives to lobby heads of state, or the accusation that Uber thwarted law enforcement by using a “kill switch” to hide data from police during raids (Davies et al., 2022). The success of Uber epitomises the trend of increasing privatisation and individualisation of public services. Few public services draw as much attention from policymakers and, quite often, from city residents as these platforms. At the same time, Uber also demonstrates the utility of transport data for delivering a more effective service. In Brighton and Hove, for instance, researchers found that community transport systems and the local governments commissioning them did not utilise data in a structured or effective way, compared to Uber’s “data-first approach to transport” (Sourbati & Behrendt, 2021). Although they are not prototypical examples of smart cities, these platforms demonstrate the trend of privatisation and datafication of services formerly guaranteed by public entities. In cities like San Francisco, where government agencies have partnered with companies like Uber, platforms have been integrated into the smart city framework (Khosrowshahi, 2018), further stressing the need for close ethical consideration.

5.2 Cities as Post-political Entities

The smart city features not only different configurations of governance, but also different modes of governing. Kitchin (2015) captures this idea in the shift from data-informed to data-driven urbanism. New technologies such as AI and data analytics can speed up the decision-making process by analysing large amounts of data to inform decisions. They can also automate decision-making by letting the results of these complex analyses determine, rather than simply inform, decisions. While the former aspect can be conceived as an increase in efficiency over past modes of decision-making, the latter presents a new scenario where decision-making becomes entirely (or almost entirely) automated (Dong et al., 2019; Soomro et al., 2019; Yu et al., 2019). This process of increasing automation comes with advantages as well as risks. On the one hand, the International City/County Management Association (ICMA, 2019) lists, among the advantages of automation, the possibility for local governments to run more efficiently, to focus on their residents, to tackle human bias, and to optimize the use of public funds. On the other hand, some authors stress how increasing automation can pose a threat to social inclusion and participation (Barocas & Selbst, 2016), exacerbate existing bias and inequality (O’Neil, 2016), and provide austere solutions to cities’ existing economic and social crises (Gray, 2018). Automation can also undermine accountability.
With regard to urban choices more specifically, Neil Gray (2018, p. 1) describes the post-political as a cover for what he calls “soft austerity urbanism”, which he defines as “seemingly progressive, instrumental small-scale urban catalyst initiatives that in reality complement rather than counter punitive hard austerity urbanism”. Gray points out that such programming, masked by innovative-sounding programs often shortened to acronyms, actually served to eliminate 10,400 social rented homes in Glasgow, displacing many local inhabitants. These programs fit the post-political profile, as initiatives guided by a reliance on market mechanisms and privatisation with the added backing of technology to appear objective (Beveridge & Koch, 2017). However, Beveridge and Koch (2017) push back against the inevitability of this post-political stage, arguing that depoliticisation is a dynamic and contingent process that can reframe the scope and stakeholders of conflicts, rather than necessarily skirt such conflicts. Davidson and Iveson (2015) describe a framework for conceiving the city as a political entity that moves beyond the rhetoric of post-politics and responds to all constituencies and their grievances. This shows how the post-political idea associated with smart cities can be seen either as mimicking old solutions to old problems (e.g. the austerity urbanism example above) or as bringing about new opportunities to depoliticise conflicts between stakeholders in a smart city through data-driven policy decisions.
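The distinction between data-informed and data-driven decision-making drawn at the start of this section can be illustrated with a short sketch. The scenario below (a congestion rule with an invented threshold) is entirely hypothetical and not drawn from any system discussed in this chapter; it simply shows how the same analytics become “automated management” the moment the human review step is removed.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    congestion: float  # 0.0 (free-flowing) to 1.0 (gridlock)

THRESHOLD = 0.8  # invented threshold, for illustration only

def data_informed(reading: Reading) -> str:
    # Analytics inform a decision: a human official still decides.
    if reading.congestion > THRESHOLD:
        return f"flag {reading.sensor_id} for review by traffic officials"
    return "no action suggested"

def data_driven(reading: Reading) -> str:
    # The same analytics determine the decision: no human in the loop.
    if reading.congestion > THRESHOLD:
        return f"reroute traffic away from {reading.sensor_id} automatically"
    return "no action taken"

reading = Reading("sensor-42", 0.85)
print(data_informed(reading))  # decision support
print(data_driven(reading))    # automated management
```

The pipeline is identical in both functions; only the locus of the decision changes, which is precisely where questions of accountability and post-political governance arise.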

6 Social Inclusion

The benefits of smart cities depend on what these cities envision as smart citizens. This determines which benefits (if any) citizens receive and who receives those benefits. In this context, we identify citizen participation and inclusion, as well as inequality and discrimination, as key points of relevance. Overall, which citizens get to participate in shaping a smart city, to what extent, and on which issues are central questions in matters of social inclusion.

6.1 Citizen Participation and Inclusion

The input and participation of urban residents are essential for the fair and effective deployment of smart city technologies. Therefore, some maintain that cities can be defined as “smart” only when they successfully integrate the democratic participation of their multiple stakeholders, including citizens, within the city’s management (Fernandez-Anez et al., 2018). The literature presents different avenues for citizen participation. They vary from innovative projects, such as digital urban platforms, where citizens can vote on urban initiatives but also collaborate and solve specific problems together, to citizens acting as sensors for private or public bodies by, for example, flagging problems or creating content. These modes of participation vary in envisioning a more active or passive role for the citizen (Gooch et al., 2015; March & Ribera-Fumaz, 2018; de Waal & Dignum, 2017). Examples of digital platforms where citizens can vote and come up with proposals for the city are e-democracy platforms such as Decidim (https://decidim.org/). Examples of initiatives that allow citizens to co-create and engage in collective “problem-solving” are fablabs (Trencher, 2019). These, like other “makerspaces”, are creative places for people to gather, learn about and employ manufacturing technologies and digital design to make things in collaboration (Hielscher & Smith, 2014; Smith et al., 2016).
Nevertheless, several authors are critical of these initiatives. Some argue that digital solutions like e-democracy platforms and fablabs are designed with tech-savvy people in mind, as they require good data literacy or programming skills (Trencher, 2019). Furthermore, the role citizens play in them is often secondary, confined to problem-solving activities with only an indirect impact on the city: reporting problematic conditions, like potholes, or simply voting on ideas worthy of funding (Cowley et al., 2018). Smith et al. (2016) suggest that these initiatives serve instead to reveal the inability of the current political-economic system to adapt successfully to the need for new forms of production and consumption centred around citizens, democratic ideals, and sustainability. Overall, there is still much work to be done in actively integrating citizens in shaping the city, as well as in adapting the understanding of participation to the new conditions afforded by the smart city. Failure to address this can have major consequences for democracy and may exacerbate inequality and discrimination in the city.

6.2 Inequality and Discrimination

The benefits of smart city technologies may not reach all city residents equally, and their deployment may exacerbate longstanding inequalities. Cities are sources of great prosperity and GDP output, but they also feature sharp divides between rich and poor, along with other social and digital divides, often with devastating effects. In Chicago, there is a 30-year gap in life expectancy between rich and poor neighbourhoods (City Health Dashboard, 2015). In the San Francisco Bay Area, more than 120,000 workers commute more than 3 hours each day because of a lack of affordable housing (Board, 2020). There is great inequality in cities today, as in the world generally, and without due consideration, smart cities may accelerate rather than ameliorate this divide.
Many people lack the digital literacy skills, the technologies, or a sufficient internet connection to use smart city technologies. According to the Pew Research Center, about 75% of Americans in urban areas have broadband at home, 12 percentage points higher than the share in rural areas (Vogels, 2021). Another Pew study found that 14% of U.S. adults have low digital skills and low trust in online information (Horrigan, 2016). Such gaps are part of the digital divide and are likely to impact the effectiveness of smart city technologies. When cities, say, move benefit sign-ups and other crucial forms online, they may create new inequities, or exacerbate the very inequities the transition was meant to solve.
The use of the technologies themselves, regardless of the population’s connectivity, may also entrench inequalities. Much has been written about the problems of fairness in the use of algorithms. City officials and other customers of smart city technologies like to point to the outcomes of algorithmic predictions and decisions as objective and unburdened by value judgments. However, these algorithms are trained with data from the “real world”, which invariably reflects ethical and political choices and historical trends that may be open to criticism. Algorithms that help decide who should get a bank loan, the economic impact of routing a new highway through poor neighbourhoods, or where police should send more patrols are informed by, and reinforce, economic and racial disparities (Yigitcanlar et al., 2020b), tending to punish the poor (O’Neil, 2016). The benefits of smart cities depend on what the cities envision as smart citizens: those who are in a position to exploit, or have access to, the means and the knowledge required to use the available resources in the best way (Gran et al., 2021; Janssen et al., 2015). Older people and ethnic minorities, for example, are often left out of data sets and further marginalised by technological innovation (Sourbati & Behrendt, 2021). Thus, smart cities must consider the impact of their technological deployments on the goal of data justice, that is, fairness in the way people are made visible, or not, in the handling of digital data (Taylor, 2017).
The smart city often caters primarily to entrepreneurs and high-skill professionals as its “smart citizens”. By attracting these groups into newly developed neighbourhoods or cities, smart cities can raise home prices and accelerate gentrification. Scholars have noted that smart cities can end up displacing existing populations by tearing down old buildings or neighbourhoods to make room for new developments, which often include insufficient affordable accommodation (Gray, 2018). This is why Shamsuddin and Srinivasan (2021) argue for more attention to the needs of vulnerable groups, specifically relating to housing, in order to build more inclusive smart cities.


7 Sustainability

Sustainability should not be understood one-dimensionally, merely in relation to the environment. Bibri (2020a, b), for example, defines the three dimensions of sustainability as the social, the economic and the environmental. However, as smart city projects are often justified by reference to the environmental dimension (Albino et al., 2015; Yigitcanlar et al., 2020a), we will focus specifically on it here. In this respect, we shall see in this section that the environment is alternately understood as an element to protect and as a strategic component for the future.
As an element to protect, smart cities can help mitigate the adverse effects that traditional cities have had, and continue to have, on the environment. Some authors argue that urbanisation has had profoundly adverse effects on the environment due to excessive urban growth (Dodman, 2017; Estevez et al., 2016; Han et al., 2017). These effects include environmental degradation, air and water pollution, resource depletion and intensive energy use, inefficient planning systems and the mismanagement of facilities, poor housing and working conditions, public health and safety hazards, the exacerbation of inequalities, and so on (Bibri, 2018). New technologies can help to mitigate these effects, for example by introducing smart energy systems to minimise energy consumption and production, by monitoring and anticipating changes in the environment, or by operationalising more efficient transport systems (Yigitcanlar et al., 2020a). For example, the city of South Bend, Indiana, which has a population of just over 100,000 people, saved $437 million by implementing “smart sewers” that optimise the flow of wastewater (Blasko, 2021). Positive about initiatives like these, some authors (Dodgson & Gann, 2011; Pham, 2014) have emphasised the difference between environmental preservation and economic growth, claiming that smart city initiatives can contribute to both. However, others are sceptical about the compatibility of these goals (Hollands, 2008). Bibri (2020a) argues that the economic dimension wins out over the social and environmental ones, and that smart city projects prioritise the efficiency of solutions rather than providing solutions for sustainability challenges.
Several initiatives show how smart cities can relate to the environment as a strategic element for the future. For example, Bibri and Krogstie (2020) list smart grids, meters and buildings, as well as smart urban metabolism and environmental monitoring, as examples of technological solutions for environmental sustainability in smart cities. They are positive about the capacity of these solutions, all combined, to produce a greater positive environmental impact than the sum of their individual effects. This impact can entail improvements in energy efficiency, a decrease in environmental pollution and a shift towards renewable energy (Bibri & Krogstie, 2020). However, they also acknowledge that several of these initiatives come at a great cost to the environment, given the negative impact of the technologies involved. Behind their virtual appearance hides a very material side to smart city solutions (Berkhout & Hertin, 2004; Williams, 2011). Large quantities of scarce elements, like rare earth minerals and critical metals, are required to develop these technologies (Chancerel et al., 2015). Additionally, the extraction of such materials might lead to socio-environmental impacts and conflicts in the territories of interest, and their recycling also represents a significant concern (Ali, 2014). Technology use also necessitates energy use, and as smart cities begin to embrace technologies like blockchain (the newly elected mayor of New York City has requested that some of his salary be paid in Bitcoin (Banjo & Maglione, 2021)), they may present serious ethical questions about energy consumption [reference anonymised]. The relation between smart city technologies and environmental improvement is not unidirectional, but full of complexity and uncertainty (Berkhout & Hertin, 2004). Further research should investigate this intricate relationship. Failing to do so might result in a Trojan horse, where we welcome future disasters as solutions to present problems.

8 Conclusion

Both as solutions to traditional city problems and as new opportunities for the present, smart cities come with their own risks and challenges, which call for ethical scrutiny. As solutions to traditional city problems, smart city projects are often conceived as an increase in efficiency over previous approaches. This feeds into a conceptualisation of smart cities mostly in terms of technology and optimisation potential which, as stated at the beginning, might eclipse the complex character of urban life and its multidimensional challenges. When introduced unquestioningly, these technological solutions risk exacerbating pre-existing problematic aspects of old solutions as well as generating new ones. As we have seen in the section on surveillance, technologies that entail predictive analytics, motion detection and autonomous or semi-autonomous devices such as drones are often introduced in the name of increased protection and the improvement of a city’s traditional services and its social and economic status. However, they can pose a threat to the autonomy and privacy of citizens. Additionally, as shown by the example of San Diego, even when introduced for seemingly beneficial purposes, such as “smart streetlight” cameras to understand traffic patterns, these initiatives can serve controversial purposes, such as increased policing, which can in turn exacerbate existing inequities and introduce new forms of bias and discrimination. It is paramount that smart city projects, when presented as new solutions, are informed by the problems that made previous solutions problematic or redundant. Additionally, when conceiving smart cities as solutions to current social, environmental, and economic problems, it is crucial to assess their potential for direct and indirect social, environmental, and economic impact. Some of these impacts are intuitive and easily discoverable. It is clear that policies of increased technological surveillance in policing, the potential downtime of vital city services caused by private company outages, and shifts from public accountability to privately-led projects deserve consistent ethical scrutiny and rectification. However, smart cities bring other, often interlinked, potential ethical issues that may be less visible. Increased surveillance from traffic sensors shifts the responsibility for safety and neighbourhood management from city officials to traffic control rooms. In the name of increased accessibility, the digitisation of government forms may do the opposite and translate inequities in digital access into inequities in government services. The optimisation of city services through technology may result in social service austerity, leaving behind groups whose needs are not easily quantifiable in models. Because of their ease of scale, even small changes in smart city technologies, from the location of sensors to the way buttons are placed on city websites, can significantly impact the lives of residents.
As new opportunities for the present, the information revolution which powers smart cities has led to a redesign of the environment surrounding us to make it more digitally friendly [reference anonymised]. These transformations have profound ethical, legal, and social implications (ELSI). For example, when applied to cities, they imply changes in the way citizens access services and can exercise their rights. For smart cities to keep their promise to improve individual lives, social wellbeing, and environmental conditions, it is essential to consider the many aspects and implications of these transformations, to anticipate or minimise problems, and to provide opportunities for redress. These measures affect a large spectrum of features characterising smart cities, from procurement policies and public-private partnerships to policy strategies and unintended side-effects. In many cases, lessons about the ELSI of digital transformation can be learned from domain-specific cases, and work on smart cities should build on this expertise. When considering the urban environment, an extra challenge emerges with respect to how problems may be intertwined. Solving them in a satisfactory way often requires careful balancing of competing interests and rights and, ultimately, political strategies able to understand both the opportunities and the limits of the digital transformation and align them with the values of our societies. The four dimensions identified here provide the groundwork for an ethical analysis of a fast-changing field that can offer many solutions and opportunities, as well as for tracking the multiple ways in which smart cities reinforce old, and introduce new, ethical challenges.

Appendix: Methodology

This review of the debate about smart cities’ ethical implications was conducted by means of a systematic search and review of scholarship relating to smart cities and ethics (Grant & Booth, 2009). This type of review entails a comprehensive search process, allowing the incorporation of multiple study types (Grant & Booth, 2009). It is suitable for this case, where the reviewed scholarship included papers from multiple disciplines (from ethics to science and technology studies to urban studies) as well as different study types (research articles, metareviews, literature reviews, etc.). These studies were collected primarily from top research databases: Google Scholar, PhilPapers, Scopus, and Web of Science.
The search process was split into a general search and a series of more thematic searches, conducted in that order. The general search was used to identify definitions of smart cities and the frameworks used to categorise them; we elaborated on these in the corresponding Sects. 2 and 3 of this paper. Once we had identified four dimensions of interest from an analysis of smart city definitions and frameworks, we conducted a more targeted thematic search, which informed Sects. 4, 5, 6 and 7 of this paper. Figure 3.1 below shows the keyword search, the selected time range, and the number of results (aggregated across the multiple search engines considered) for each search. As can be observed from the keywords used, none of the words that head the subsections of each dimension in the paper (e.g. surveillance, privacy & security for the dimension of “Network Infrastructure”) was directly used in the search; these are sub-themes that emerged during the review process, which followed an inductive approach. Additionally, each initial search returned around 17,000 results (apart from the Social Inclusion one, which returned 3,150). At this stage, it was paramount to adopt a strategy to reduce this number to a manageable size.

Fig. 3.1 Keyword search, selected time range, and number of results for each search:

General search. Keywords: ("smart city" AND ethics AND definition OR review). Nr. of results = 18,000+
(a) Network Infrastructure. Keywords: ("smart city" AND ethics AND technology AND connectivity AND network AND transportation AND services), range 2000-22. Nr. of results = 17,200
(b) Post-Political Governance. Keywords: ("smart city" AND ethics AND governance AND services AND politics AND platform AND participation), range 2000-22. Nr. of results = 17,100
(c) Social Inclusion. Keywords: ("smart city" AND ethics AND inclusion AND accessibility AND fairness AND equality AND wellbeing AND platform), range 2000-22. Nr. of results = 3,150
(d) Sustainability. Keywords: ("smart city" AND ethics AND environment AND sustainability AND energy AND efficiency), range 2000-22. Nr. of results = 17,300


To reduce the number of results, “Publish or Perish” (PoP; https://harzing.com/resources/publish-or-perish), a freely available software program for retrieving and analysing academic citations, was used to re-run the above queries. PoP allows papers to be analysed according to a range of citation metrics (e.g. total citations, h-index), year of publication, journal, and type of publication (e.g. article or book). On one hand, PoP was used to validate the results of the initial search (only the searches on Google Scholar, Scopus and Web of Science were re-run, as PhilPapers is not available in the software). On the other, the software was used for data cleaning and to determine relevance. Data cleaning was conducted by removing duplicates, assessing the overlap between the targeted searches, and removing papers with one citation or fewer from our dataset. A screening process for relevance to the ethics of smart cities, as well as to the topics of each specific theme, was conducted by reading each remaining title and, where in doubt, retrieving the abstract. Specifically, we made sure that syntactic relevance in our search matched semantic relevance (e.g. that the keyword “wellbeing” concerned the quality of life in a smart city in general rather than specific health apps). Overall, this process allowed us to cut down the total number of papers to review to the numbers shown in Fig. 3.2 below. These are the papers that were reviewed.

Fig. 3.2 Final number of papers to review, per search (same queries and time ranges as in Fig. 3.1):

General search: 134
(a) Network Infrastructure: 87
(b) Post-Political Governance: 96
(c) Social Inclusion: 63
(d) Sustainability: 101
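For transparency, the core of this filtering step is easy to reproduce programmatically. The sketch below is a minimal illustration of the de-duplication and citation-threshold logic described above, assuming each query’s results have been exported as a CSV file with “Title” and “Cites” columns (as in a typical Publish or Perish export); the file names are hypothetical, and the actual review also involved manual screening that no script can replace.

```python
import csv

def load_results(path):
    """Load a CSV export of search results (assumed 'Title' and 'Cites' columns)."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def clean(rows, min_citations=2):
    """Drop duplicates (by normalised title) and papers with one citation or fewer."""
    seen, kept = set(), []
    for row in rows:
        title = " ".join(row["Title"].lower().split())
        cites = int(row.get("Cites") or 0)
        if title in seen or cites < min_citations:
            continue
        seen.add(title)
        kept.append(row)
    return kept

# Hypothetical file names for the four thematic searches.
themes = ["network_infrastructure", "governance", "inclusion", "sustainability"]
for theme in themes:
    rows = clean(load_results(f"{theme}.csv"))
    print(theme, len(rows))  # papers remaining after de-duplication and filtering
```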


References

Albino, V., Berardi, U., & Dangelico, R. M. (2015). Smart cities: Definitions, dimensions, performance, and initiatives. Journal of Urban Technology, 22(1), 3–21. https://doi.org/10.1080/10630732.2014.942092
Ali, S. H. (2014). Social and environmental impact of the rare earth industries. Resources, 3(1), 123–134. https://doi.org/10.3390/resources3010123
Allam, Z. (2019). The emergence of anti-privacy and control at the nexus between the concepts of safe city and smart city. https://semanticscholar.org/paper/a7a14415d4c518b72d3676add8d0e36ba8dbf8e3
Allam, Z., & Dhunny, Z. A. (2019). On big data, artificial intelligence and smart cities. Cities, 89, 80–91. https://doi.org/10.1016/j.cities.2019.01.032
Anand, P. B. (2021). Assessing smart city projects and their implications for public policy in the Global South. Contemporary Social Science, 16(2), 199–212. https://doi.org/10.1080/21582041.2020.1720794
Andrews, S. (2017). Paris upside down: The city under Haussmann's renovations. The Vintage News. https://www.thevintagenews.com/2017/03/17/paris-upside-down-the-city-under-haussmanns-renovations/
Anthopoulos, L., & Fitsilis, P. (2010). From digital to ubiquitous cities: Defining a common architecture for urban development. Sixth International Conference on Intelligent Environments, 2010, 301–306. https://doi.org/10.1109/IE.2010.61
Banjo, S., & Maglione, F. (2021, November 4). NYC Mayor-Elect Adams says he'll take paycheck in Bitcoin. Bloomberg.com. https://www.bloomberg.com/news/articles/2021-11-04/nyc-mayor-elect-eric-adams-says-he-ll-take-paycheck-in-bitcoin
Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2477899
Batty, M., Axhausen, K. W., Giannotti, F., Pozdnoukhov, A., Bazzani, A., Wachowicz, M., Ouzounis, G., & Portugali, Y. (2012). Smart cities of the future. The European Physical Journal Special Topics, 214(1), 481–518. https://doi.org/10.1140/epjst/e2012-01703-3
Berkhout, F., & Hertin, J. (2004). De-materialising and re-materialising: Digital technologies and the environment. Futures, 36(8), 903–920. https://doi.org/10.1016/j.futures.2004.01.003
Beveridge, R., & Koch, P. (2017). The post-political trap? Reflections on politics, agency and the city. Urban Studies, 54(1), 31–43. https://doi.org/10.1177/0042098016671477
Bibri, S. E. (2018). Smart sustainable cities of the future: The untapped potential of big data analytics and context-aware computing for advancing sustainability. Springer.
Bibri, S. E. (2020a). Advances in the leading paradigms of urbanism and their amalgamation: Compact cities, eco-cities, and data-driven smart cities. Springer. https://doi.org/10.1007/978-3-030-41746-8
Bibri, S. E. (2020b). Compact urbanism and the synergic potential of its integration with data-driven smart urbanism: An extensive interdisciplinary literature review. Land Use Policy, 97, 104703. https://doi.org/10.1016/j.landusepol.2020.104703
Bibri, S. E., & Krogstie, J. (2020). Environmentally data-driven smart sustainable cities: Applied innovative solutions for energy efficiency, pollution reduction, and urban metabolism. Energy Informatics, 3, 1–59. https://doi.org/10.1186/s42162-020-00130-8
Blasko, E. (2021). 'Smart sewer' technology leads to nearly $450 million in savings for South Bend. Notre Dame News. https://news.nd.edu/news/smart-sewer-technology-leads-to-nearly-450-million-in-savings-for-south-bend/
Board, T. E. (2020, May 11). Opinion | The cities we need. The New York Times. https://www.nytimes.com/2020/05/11/opinion/sunday/coronavirus-us-cities-inequality.html
Burke, G., Mendoza, M., Linderman, J., & Tarm, M. (2021, August 19). How AI-powered tech landed man in jail with scant evidence. AP News. https://apnews.com/article/artificial-intelligence-algorithm-technology-police-crime-7e3345485aa668c97606d4b54f9b6220


Calo, R., & Citron, D. K. (2020). The automated administrative state: A crisis of legitimacy. Emory Law Journal, 70(4), 797–846.
Caragliu, A., Del Bo, C., & Nijkamp, P. (2011). Smart cities in Europe. Journal of Urban Technology, 18(2), 65–82. https://doi.org/10.1080/10630732.2011.601117
Caron, X., Bosua, R., Maynard, S. B., & Ahmad, A. (2016). The Internet of Things (IoT) and its impact on individual privacy: An Australian perspective. Computer Law & Security Review, 32(1), 4–15. https://doi.org/10.1016/j.clsr.2015.12.001
CCTV.co.uk. (2020, November 18). How many CCTV cameras are there in London? (Update for 2020/21). https://www.cctv.co.uk/how-many-cctv-cameras-are-there-in-london/
Chancerel, P., Marwede, M., Nissen, N. F., & Lang, K.-D. (2015). Estimating the quantities of critical metals embedded in ICT and consumer equipment. Resources, Conservation and Recycling, 98, 9–18. https://doi.org/10.1016/j.resconrec.2015.03.003
Chen, G., Li, X., Liu, X., Chen, Y., Liang, X., Leng, J., Xu, X., Liao, W., Qiu, Y., Wu, Q., & Huang, K. (2020). Global projections of future urban land expansion under shared socioeconomic pathways. Nature Communications, 11(1), 537. https://doi.org/10.1038/s41467-020-14386-x
City Health Dashboard. (2015). City health dashboard. https://www.cityhealthdashboard.com/
Clarke, A., & Steele, R. (2011). How personal fitness data can be re-used by smart cities. https://doi.org/10.1109/ISSNIP.2011.6146582
Cocchia, A. (2014). Smart and digital city: A systematic literature review. In R. P. Dameri & C. Rosenthal-Sabroux (Eds.), Smart city: How to create public and economic value with high technology in urban space (pp. 13–43). Springer International Publishing. https://doi.org/10.1007/978-3-319-06160-3_2
Cohen, B. (2015, August 10). The 3 generations of smart cities. Fast Company. https://www.fastcompany.com/3047795/the-3-generations-of-smart-cities
Cowley, R., Joss, S., & Dayot, Y. (2018). The smart city and its publics: Insights from across six UK cities. Urban Research & Practice, 11(1), 53–77. https://doi.org/10.1080/17535069.2017.1293150
Csukás, M., & Szabo, R. Z. (2021). The many faces of the smart city: Differing value propositions in the activity portfolios of nine cities. Cities, 112, 103116. https://doi.org/10.1016/j.cities.2021.103116
Davidson, M., & Iveson, K. (2015). Recovering the politics of the city: From the 'post-political city' to a 'method of equality' for critical urban geography. Progress in Human Geography, 39(5), 543–559. https://doi.org/10.1177/0309132514535284
Davies, H., Goodley, S., Lawrence, F., Lewis, P., O'Carroll, L., & Cutler, S. (2022, July 11). Uber broke laws, duped police and secretly lobbied governments, leak reveals. The Guardian. https://www.theguardian.com/news/2022/jul/10/uber-files-leak-reveals-global-lobbying-campaign
de Waal, M., & Dignum, M. (2017). The citizen in the smart city. How the smart city could transform citizenship. it - Information Technology, 59(6), 263–273. https://doi.org/10.1515/itit-2017-0012
Directorate-General for Internal Policies of the Union (European Parliament), Millard, J., Thaarup, R. K., Pederson, J. K., Manville, C., Wissner, M., Kotterink, B., Cochrane, G., Cave, J., Liebe, A., & Massink, R. (2014). Mapping smart cities in the EU. Publications Office of the European Union. https://data.europa.eu/doi/10.2861/3408
Dodge, M., & Kitchin, R. (2007). The automatic management of drivers and driving spaces. Geoforum, 38(2), 264–275. https://doi.org/10.1016/j.geoforum.2006.08.004
Dodgson, M., & Gann, D. (2011). Technological innovation and complex systems in cities. Journal of Urban Technology, 18(3), 101–113. https://doi.org/10.1080/10630732.2011.615570
Dodman, D. (2017). Environment and urbanization. In International encyclopedia of geography (pp. 1–9). American Cancer Society. https://doi.org/10.1002/9781118786352.wbieg0623
Dong, Y., Guo, S., Liu, J., & Yang, Y. (2019). Energy-efficient fair cooperation fog computing in mobile edge networks for smart city. IEEE Internet of Things Journal, 6(5), 7543–7554. https://doi.org/10.1109/JIOT.2019.2901532


Doucette, M. L., Green, C., Necci Dineen, J., Shapiro, D., & Raissian, K. M. (2021). Impact of ShotSpotter technology on firearm homicides and arrests among large metropolitan counties: A longitudinal analysis, 1999–2016. Journal of Urban Health: Bulletin of the New York Academy of Medicine, 98(5), 609–621. https://doi.org/10.1007/s11524-021-00515-4
Echebarria, C., Barrutia, J. M., & Aguado-Moralejo, I. (2020). The smart city journey: A systematic review and future research agenda. Innovation: The European Journal of Social Science Research, 34, 159–201. https://doi.org/10.1080/13511610.2020.1785277
Edwards, L. (2016). Privacy, security and data protection in smart cities: A critical EU law perspective. European Data Protection Law Review (EDPL), 2(1), 28–58.
Estevez, E., Lopes, N. V., & Janowski, T. (2016). Smart sustainable cities: Reconnaissance study (Vol. 330).
Ferguson, A. G. (2016). Policing predictive policing. Washington University Law Review, 94(5), 1109–1190.
Fernandez-Anez, V., Fernández-Güell, J. M., & Giffinger, R. (2018). Smart city implementation and discourses: An integrated conceptual model. The case of Vienna. Cities, 78, 4–16. https://doi.org/10.1016/j.cities.2017.12.004
Ferraz, F. S., & Guimaraes Ferraz, C. A. (2014). Smart city security issues: Depicting information security issues in the role of an urban environment. 2014 IEEE/ACM 7th International Conference on Utility and Cloud Computing, 842–847. https://doi.org/10.1109/UCC.2014.137
Fisher, E. (2020). Do algorithms have a right to the city? Waze and algorithmic spatiality. Cultural Studies, 1–22. https://doi.org/10.1080/09502386.2020.1755711
Foord, J. (2013). The new boomtown? Creative city to Tech City in East London. Cities, 33, 51–60. https://doi.org/10.1016/j.cities.2012.08.009
Gibbs, D., Krueger, R., & MacLeod, G. (2013). Grappling with smart city politics in an era of market triumphalism. Urban Studies, 50(11), 2151–2157. https://doi.org/10.1177/0042098013491165
Giffinger, R., Fertner, C., Kramar, H., Kalasek, R., Milanović, N., & Meijers, E. (2007). Smart cities: Ranking of European medium-sized cities.
Glasmeier, A., & Christopherson, S. (2015). Thinking about smart cities. Cambridge Journal of Regions, Economy and Society, 8(1), 3–12. https://doi.org/10.1093/cjres/rsu034
Gooch, D., Wolff, A., Kortuem, G., & Brown, R. (2015). Reimagining the role of citizens in smart city projects. Adjunct Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2015 ACM International Symposium on Wearable Computers, 1587–1594. https://doi.org/10.1145/2800835.2801622
Gran, A.-B., Booth, P., & Bucher, T. (2021). To be or not to be algorithm aware: A question of a new digital divide? Information, Communication & Society, 24(12), 1779–1796. https://doi.org/10.1080/1369118X.2020.1736124
Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information & Libraries Journal, 26(2), 91–108. https://doi.org/10.1111/j.1471-1842.2009.00848.x
Gray, N. (2018). Neither Shoreditch nor Manhattan: Post-politics, "soft austerity urbanism" and real abstraction in Glasgow North. Area, 50(1), 15–23. https://doi.org/10.1111/area.12299
Green, B. (2020). The false promise of risk assessments: Epistemic reform and the limits of fairness. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 594–606. https://doi.org/10.1145/3351095.3372869
Greenfield, A. (2013). Against the smart city. Do Projects.
Halon, E. (2020). Israeli anti-terror tech to facially recognize mask-wearing health staff. The Jerusalem Post. https://www.jpost.com/israel-news/israeli-anti-terror-tech-to-facially-recognize-mask-wearing-health-staff-624416
Han, J., Meng, X., Zhou, X., Yi, B., Liu, M., & Xiang, W.-N. (2017). A long-term analysis of urbanization process, landscape change, and carbon sources and sinks: A case study in China's Yangtze River Delta region. Journal of Cleaner Production, 141, 1040–1050. https://doi.org/10.1016/j.jclepro.2016.09.177


Hassan, A. M., & Awad, A. I. (2018). Urban transition in the era of the internet of things: Social implications and privacy challenges. https://semanticscholar.org/paper/a2ce2fd7c4a96fa7604e601f92953bcc21083494
Hern, A. (2021, June 8). Massive internet outage hits websites including Amazon, gov.uk and Guardian. The Guardian. https://www.theguardian.com/technology/2021/jun/08/massive-internet-outage-hits-websites-including-amazon-govuk-and-guardian-fastly
Hielscher, S., & Smith, A. (2014). Community-based digital fabrication workshops: A review of the research literature (SSRN Scholarly Paper ID 2742121). Social Science Research Network. https://doi.org/10.2139/ssrn.2742121
Holder, S. (2020, August 6). In San Diego, 'smart' streetlights spark surveillance reform. Bloomberg.com. https://www.bloomberg.com/news/articles/2020-08-06/a-surveillance-standoff-over-smart-streetlights
Hollands, R. G. (2008). Will the real smart city please stand up? City, 12(3), 303–320. https://doi.org/10.1080/13604810802479126
Horrigan, J. B. (2016). Digital readiness gaps. Pew Research Center.
ICMA. (2019). Using artificial intelligence as a tool for your local government. https://icma.org/blog-posts/using-artificial-intelligence-tool-your-local-government
Janssen, M., Matheus, R., & Zuiderwijk, A. (2015). Big and Open Linked Data (BOLD) to create smart cities and citizens: Insights from smart energy and mobility cases. In E. Tambouris, M. Janssen, H. J. Scholl, M. A. Wimmer, K. Tarabanis, M. Gascó, B. Klievink, I. Lindgren, & P. Parycek (Eds.), Electronic government (pp. 79–90). Springer International Publishing. https://doi.org/10.1007/978-3-319-22479-4_6
Khosrowshahi, D. (2018, April 11). Moving forward together with cities. Uber Newsroom. https://www.uber.com/newsroom/citesevent/
Kitchin, R. (2014). The real-time city? Big data and smart urbanism. GeoJournal, 79(1), 1–14. https://doi.org/10.1007/s10708-013-9516-8
Kitchin, R. (2015). Data-driven, networked urbanism (SSRN Scholarly Paper ID 2641802). Social Science Research Network. https://doi.org/10.2139/ssrn.2641802
Kitchin, R. (2018). Reframing, reimagining and remaking smart cities. In Creating smart cities. Routledge.
Kourtit, K., & Nijkamp, P. (2012). Smart cities in the innovation age. Innovation: The European Journal of Social Science Research, 25(2), 93–95. https://doi.org/10.1080/13511610.2012.660331
Krupp, B., Sridhar, N., & Zhao, W. (2017). SPE: Security and privacy enhancement framework for mobile devices. IEEE Transactions on Dependable and Secure Computing, 14(4), 433–446. https://doi.org/10.1109/TDSC.2015.2465965
Kummitha, R. K. R., & Crutzen, N. (2017). How do we understand smart cities? An evolutionary perspective. Cities, 67, 43–52. https://doi.org/10.1016/j.cities.2017.04.010
Lacinák, M., & Ristvej, J. (2017). Smart city, safety and security. Procedia Engineering, 192, 522–527. https://doi.org/10.1016/j.proeng.2017.06.090
Lam, P., & Ma, R. (2019). Potential pitfalls in the development of smart cities and mitigation measures: An exploratory study. Cities, 91, 146–156. https://doi.org/10.1016/j.cities.2018.11.014
Lewis, A., & Ebrahim, N. (2020, August 10). Cairo's Tahrir Square gets a contested makeover. Reuters. https://www.reuters.com/article/us-egypt-tahrir-square-idUSKCN25620A
March, H. (2016). The smart city and other ICT-led techno-imaginaries: Any room for dialogue with degrowth? https://doi.org/10.1016/J.JCLEPRO.2016.09.154
March, H., & Ribera-Fumaz, R. (2018). Barcelona: From corporate smart city to technological sovereignty. In Inside smart cities. Routledge.
Martin, C. J., Evans, J., & Karvonen, A. (2018). Smart and sustainable? Five tensions in the visions and practices of the smart-sustainable city in Europe and North America. Technological Forecasting and Social Change, 133, 269–278. https://doi.org/10.1016/j.techfore.2018.01.005


Mau, S. (2019). The metric society: On the quantification of the social. Wiley. https://www.wiley.com/en-us/The+Metric+Society%3A+On+the+Quantification+of+the+Social-p-9781509530403
McClure, S., Scambray, J., & Kurtz, G. (2001). Hacking exposed: Network security secrets and solutions (3rd ed.). Osborne/McGraw-Hill.
McFarlane, C., & Söderström, O. (2017). On alternative smart cities. City, 21(3–4), 312–328. https://doi.org/10.1080/13604813.2017.1327166
Meijer, A., & Bolívar, M. P. R. (2016). Governing the smart city: A review of the literature on smart urban governance. International Review of Administrative Sciences, 82(2), 392–408. https://doi.org/10.1177/0020852314564308
Metaxiotis, K., Yigitcanlar, T., & Carrillo, F. (2010). Knowledge-based development for cities and societies: Integrated multi-level approaches. https://doi.org/10.4018/978-1-61520-721-3
Mohamed, N., Al-Jaroodi, J., Jawhar, I., & Kesserwan, N. (2020). Data-driven security for smart city systems: Carving a trail. IEEE Access, 8, 147211–147230. https://doi.org/10.1109/ACCESS.2020.3015510
Mok, D. (2014, August 6). Cyberattack hits 10,000 patients' health data. South China Morning Post. https://www.scmp.com/news/hong-kong/article/1567284/cyberattack-hits-10000-patients-health-data
Mora, L., Bolici, R., & Deakin, M. (2017). The first two decades of smart-city research: A bibliometric analysis. Journal of Urban Technology, 24(1), 3–27. https://doi.org/10.1080/10630732.2017.1285123
Mora, L., Deakin, M., & Reid, A. (2019). Combining co-citation clustering and text-based analysis to reveal the main development paths of smart cities. Technological Forecasting and Social Change, 142, 56–69. https://doi.org/10.1016/j.techfore.2018.07.019
Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. PublicAffairs.
Moya, J.-F. D., & Pallud, J. (2020). From panopticon to heautopticon: A new form of surveillance introduced by quantified-self practices. Information Systems Journal, 30(6), 940–976. https://doi.org/10.1111/isj.12284
Nam, T., & Pardo, T. A. (2011). Conceptualizing smart city with dimensions of technology, people, and institutions. Proceedings of the 12th Annual International Digital Government Research Conference: Digital Government Innovation in Challenging Times, 282–291. https://doi.org/10.1145/2037556.2037602
O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy (1st ed.). Crown.
Ojo, A., Dzhusupova, Z., & Curry, E. (2016). Exploring the nature of the smart cities research landscape. In J. R. Gil-Garcia, T. A. Pardo, & T. Nam (Eds.), Smarter as the new urban agenda: A comprehensive view of the 21st century city (pp. 23–47). Springer International Publishing. https://doi.org/10.1007/978-3-319-17620-8_2
Pavlou, P. A. (2011). State of the information privacy literature: Where are we now and where should we go? MIS Quarterly, 35(4), 977–988. https://doi.org/10.2307/41409969
Pham, C. (2014). Smart cities in Japan: An assessment on the potential for EU-Japan cooperation and business development, 67.
Praharaj, S., & Han, H. (2019). Cutting through the clutter of smart city definitions: A reading into the smart city perceptions in India. City, Culture and Society, 18, 100289. https://doi.org/10.1016/j.ccs.2019.05.005
Praharaj, S., Han, J. H., & Hawken, S. (2018). Urban innovation through policy integration: Critical perspectives from 100 smart cities mission in India. City, Culture and Society, 12, 35–43. https://doi.org/10.1016/j.ccs.2017.06.004
Price, B. A., Adam, K., & Nuseibeh, B. (2005). Keeping ubiquitous computing to yourself: A practical model for user control of privacy. International Journal of Human-Computer Studies, 63(1), 228–253. https://doi.org/10.1016/j.ijhcs.2005.04.008





Chapter 4

The Intersections Between Artificial Intelligence, Intellectual Property, and the Sustainable Development Goals

Francesca Mazzi

Abstract  The Sustainable Development Goals (SDGs) represent the main hope for peace and prosperity in the near future, according to the 2030 UN Agenda. Artificial Intelligence (AI) is a significant technological advancement of the Fourth Industrial Revolution. Intellectual Property (IP) is the system that incentivises innovation worldwide. These three areas influence each other. This chapter aims to illustrate the intersections between AI, IP, and the SDGs that emerge from the literature and are relevant from a policy perspective. The objective is to unveil research areas that can advance scientific understanding of how IP contributes to the SDGs, by using and incentivising AI methods to inform IP offices, businesses, and policymakers. The chapter concludes by proposing one such line of research.

Keywords  Artificial intelligence · Intellectual property · Sustainable development goals · Sustainable innovation · Digital innovation

1 Introduction

Artificial Intelligence (AI) is one of the fastest-paced technological advancements of the ongoing Fourth Industrial Revolution. Significant advancements in technology are incentivised worldwide through the Intellectual Property (IP) system. Consequently, the IP system plays a fundamental role in driving the development of AI and related Fourth Industrial Revolution innovation: in which fields it occurs, what kinds of applications it yields, for which purposes, and, amongst others, the extent to which innovation serves the Sustainable Development Goals (SDGs). The SDGs represent the main hope for peace and prosperity in the near future, according to the 2030 UN Agenda. Therefore, it is intuitive to question how these three areas interact. This topic was the object of one of the working sessions of the World Trade Organisation's (WTO) Public Forum of October 2019. The question was: "Can
Artificial Intelligence and the Internet of Things be Governed to Achieve the UN Sustainable Development Goals? An Intellectual Property Law Perspective" (Noto La Diega, 2019). During the session, it was stated that "The WTO can play an important role in achieving the UN sustainable development goals. Investments in AI and IoT could go a long way in that these technologies could lead to economic growth, innovation, good health, and new services. For this to happen, however, they must be adequately governed. This means that we need laws – and IP laws above all, given their role in incentivising creativity and innovation – that are fit for AI and IoT. As shown above, AI could lead to unprecedented research developments with revolutionary impact on healthcare. They would do so by changing the way we conduct research – with AI-powered data mining and cancer-predicting deep learning models – and by making us rethink certain IP laws" (Noto La Diega, 2019). This chapter aims to contribute to such a policy-oriented conversation by providing a snapshot of the most relevant intersections between the three areas. It categorises the types of one-to-one intersections to highlight some of the aspects that emerge from the literature, as shown in Fig. 4.1. This contribution allows for the identification of potential areas for future research, to advance the knowledge of how the three fields interact with each other. Such research can, in turn, inform policies aimed at improving the governance of IP and technologies, including AI, in relation to the SDGs. The contribution of this chapter requires a high level of abstraction to find the intersections between the three macro-areas that are most relevant from a governance-oriented perspective. Consequently, this analysis has two significant limitations: it disregards intersections between the fields that are evident only at a more granular level, and those that might be useful for a policy debate but whose technical nature does not serve the policy-oriented approach of this inquiry. The chapter is structured as follows: Sect. 2 reports the literature on IP and the SDGs; Sect. 3 on AI and the SDGs; Sect. 4 on AI and IP. Section 5 suggests a potential line of research on the intersections between the three areas that can inform IP offices, businesses, and policymakers. Section 6 provides concluding remarks.

2 IP and SDGs

The idea that IP can be crucial to achieving the SDGs is not new. Scholars have stressed that IP regulation is fundamental to global knowledge governance and to directing incentives for innovation, building innovation capacity, technology transfer, and the dissemination and diffusion of the results of innovation worldwide (Chon et al., 2018a). In this section, we focus on the role of IP in relation to the SDGs, presenting some of the positive aspects, i.e. illustrating how IP can be an enabler of the SDGs, and some of the negative aspects, i.e. the challenges that IP can pose to the achievement of the SDGs.

Fig. 4.1 The intersections between AI, IP, and SDGs: AI & SDGs (AI as enabler of SDGs; AI as inhibitor of SDGs); IP & SDGs (IPRs as enablers of SDGs; IP-related challenges to SDGs); AI & IP (AI both facilitates and challenges IP laws and management; IP regulates the use of and directs investments in AI)


As for the positive aspects, the literature provides many examples at a more granular level of abstraction by connecting specific Intellectual Property Rights (IPRs) to specific SDGs. Francis Gurry, the Director-General of the World Intellectual Property Organization (WIPO), has listed some of the intersections between the two areas (Rimmer, 2018). For example, he highlighted how IP and SDG 9 relate to each other, dealing with industry, innovation, and infrastructure, and how they impact several other SDGs (Rimmer, 2018). He underlined how innovation as a policy can assist in realising other SDGs and how SDG 17 should be perceived as a modality in terms of partnerships for the goals (Rimmer, 2018). The conversation also focused on the role of copyright law as the principal mechanism for financing cultural production, in terms of the legal framework in treaties and normative discussions, and in relation to collective and individual rights management, human resource capacity building, and others (Rimmer, 2018). He also stressed the connection between intellectual property and SDG 3, Good Health and Well-Being, and the relationship between health and innovation, patents and upstream R&D activities, access to medicines, and others (Rimmer, 2018). Some studies adopted an analytical approach and tried to analyse the relationship between specific IPRs and SDGs. For example, one study attempted to verify whether sustainable urbanization is positively related to the protection of IPRs, using the case of the Huaihai economic zone (HEZ) in China (Gao et al., 2022). Another study focused on how the Geographical Indication (GI) framework can accelerate the SDGs, to set the ground for developing a coherent GI monitoring and evaluation system globally (Barrera, 2020). Other contributions underline the interconnection between IPRs, such as copyright, and education, ICT, and libraries (Chon et al., 2018b). Other studies adopted a programmatic approach at a higher level of abstraction, debating how to achieve one or more SDGs through an effective intellectual property system. For example, one paper explored some of Africa's intellectual property frameworks and argued that an effective intellectual property system is critical in driving progress in all societies (McDave & Hackman-Aidoo, 2021). Similarly, one study focused on how the SDGs create an opportunity to activate a development-oriented approach to domestic IP strategies in general (Cadogan, 2019). The authors argued that "Development intersects with IP policies as creativity and innovation are either fostered or frustrated by an economy's chosen development policy," and "(…) a well-executed and effective IP strategy needs to be targeted at all aspects of development policy. (…) Even sectors not typically associated with an IP strategy can complement an IP development-oriented policy, impacting rural and smaller communities" (Cadogan, 2019). The Cambridge Handbook of Public-Private Partnerships, Intellectual Property Governance, and Sustainable Development also adopts a programmatic approach, focusing, among other things, on the link between SDG 3, Good Health and Well-Being, and IPRs that impact public health, on IPRs and green innovation, and on governance and institutional design perspectives (Chon et al., 2018b). The book Intellectual Property and Sustainable Markets provides a more economics-oriented perspective. It focuses on the interrelation between IP and other

4  The Intersections Between Artificial Intelligence, Intellectual Property…

43

legal fields, contributing to a discussion of the role of IP in promoting and ensuring that development is sustainable, and also analysing financial aspects and market dynamics. One of the challenges is that the global dimension of the SDGs needs to be unpacked and tailored to local needs and interests. Therefore, experts have advocated collaborative partnerships in IP and development to address the regulatory coordination issues inherent in the production and distribution of global public goods (Chon et al., 2018a, b). Another challenge is that the intellectual property system may sometimes work counterproductively to achieving the SDGs because of some of its effects: for example, by locking up agricultural innovation, inflating drug prices, stalling follow-on innovation, rewarding the invention and sale of polluting technologies, reducing biodiversity, and preventing technology transfer (Bannerman, 2020). A further challenge applies to IPRs in general but is particularly relevant to patents. The patent system is aimed at incentivising innovation in any field of technology. However, there is no definition of innovation in patent regulations. The lack of a definition has the material consequence of incentivising innovation that improves efficiency, effectiveness, or competitive advantage, disregarding any sustainability evaluation (Hsu, 2007). Patent law sets requirements for inventions to qualify as such and to obtain protection. To be eligible for patent protection, inventions should not fall under any subject matter for which patentability is excluded, and they should be new, non-obvious, and industrially applicable. None of these parameters implies sustainability considerations. The only definition of innovation derived from patent law is negative: any product or process is usually considered a suitable subject matter for a patent unless it is specifically excluded from protection. Patent law describes what cannot be patented due to underlying public policy considerations. However, sustainability considerations, such as environmental impact, are not included in any negative parameters. The exception for inventions whose "commercial exploitation would be contrary to morality", which is present in most jurisdictions, could be subject to a sustainability-oriented interpretation. However, an argument for the moral obligatoriness of sustainability might require more explicit consensus or legislation.
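To make the structure of this negative definition concrete, consider the toy sketch below, written in Python. It is an illustration only: the exclusion list and the eligibility function are invented simplifications, not an encoding of any actual statute. What it demonstrates is that nothing in the standard test ever consults a sustainability criterion.

# Toy model of patent law's "negative" definition of innovation: an
# invention is a candidate for protection unless explicitly excluded, and
# it must be new, non-obvious, and industrially applicable. The exclusion
# list below is an invented simplification of statutory lists.
EXCLUDED_SUBJECT_MATTER = {
    "abstract idea",
    "law of nature",
    "contrary to morality",
}

def patent_eligible(subject_matter: str, new: bool, non_obvious: bool,
                    industrially_applicable: bool) -> bool:
    """Return True if the invention passes the simplified eligibility test."""
    if subject_matter in EXCLUDED_SUBJECT_MATTER:
        return False
    return new and non_obvious and industrially_applicable

# A novel but highly polluting process passes the test: no sustainability
# parameter, such as environmental impact, is ever checked.
print(patent_eligible("combustion process", True, True, True))  # True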

3 AI and SDGs

The intersections between AI and the SDGs can be categorised as follows: AI has great potential to advance the SDGs, in that it can be an enabler of the Goals (Cowls et al., 2021), but it also represents a matter of trade-offs, in that it can be an inhibitor of the Goals, for example because the massive computational resources it requires come with a very high energy demand and carbon footprint (Vinuesa et al., 2020).
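To give a sense of the scale of this trade-off, the back-of-the-envelope sketch below (in Python) estimates the energy demand and carbon footprint of a single model-training run. Every figure in it is an illustrative assumption, not a measurement; real estimates depend on the hardware, the data centre, and the grid region.

# Back-of-the-envelope estimate of the energy and carbon cost of training
# an AI model. All figures below are illustrative placeholders, not
# measurements; substitute values for the actual hardware and grid region.
NUM_GPUS = 64                 # assumed number of accelerators
POWER_PER_GPU_KW = 0.4        # assumed average draw per accelerator (kW)
TRAINING_HOURS = 24 * 14      # assumed two-week training run
PUE = 1.5                     # assumed data-centre power usage effectiveness
GRID_KGCO2_PER_KWH = 0.4      # assumed grid carbon intensity (kg CO2e/kWh)

energy_kwh = NUM_GPUS * POWER_PER_GPU_KW * TRAINING_HOURS * PUE
emissions_kg = energy_kwh * GRID_KGCO2_PER_KWH

print(f"Estimated energy use: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions:  {emissions_kg:,.0f} kg CO2e")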


As a general-purpose technology, AI has many possible applications to the SDGs: broadly speaking, AI can be used for understanding problems, seeking solutions, and making decisions (Findlay, The Ethics of Artificial Intelligence for the Sustainable Development Goals, forthcoming 2022). In many fields, and with regard to specific SDG targets, it can be argued that the use of AI represents best practice, for AI methods and techniques can produce results quantitatively and/or qualitatively superior to those achieved by other means (Cowls et al., 2021). Such superiority in terms of, for example, data processing shall be benchmarked against the environmental impact of using AI. The topic of AI for the SDGs derives from the use of AI for social good (Taddeo & Floridi, 2021). The AI for Social Good movement aims to establish interdisciplinary partnerships centred around using AI applications to support the achievement of SDG targets (Tomašev et al., 2020). This area of research aims to harness the potential of AI for good while mitigating the associated ethical challenges (Taddeo & Floridi, 2018; The Ethics of Artificial Intelligence for the Sustainable Development Goals, forthcoming 2022). The literature on the topic is vast, and it interests both the public and private sectors: discussing governments' readiness to employ AI for the SDGs (Liengpunsakul, 2021), analysing existing initiatives that use AI for the SDGs (Cowls et al., 2021), providing conceptual and normative approaches to AI governance for a global digital ecosystem supportive of the SDGs (Gill & Germann, 2021), and investigating the role of AI in the construction of sustainable business models (Di Vaio et al., 2020) and in typical business challenges that might require conversion to meet SDGs-related standards, such as production and supply-chain disruption, inventory management, budget planning, and workforce management (Visvizi, 2022). However, the challenges accompanying AI development and deployment are similarly complex. As mentioned earlier, AI can be both an enabler and an inhibitor of the SDGs (Vinuesa et al., 2020). Amongst others, the use of AI is intimately linked to non-universal access to increasingly large data sets and the computing infrastructure required to use them (Visvizi, 2022). AI design, development, and deployment can produce unethical outcomes (Floridi et al., 2020). The lack of a comprehensive regulation of AI aimed at mitigating unethical consequences might pose risks to achieving the SDGs, for example in developing countries. The goal of zero poverty is threatened by the imperfect design and implementation of decision-making algorithms that have displayed evidence of bias, lack ethical governance, and limit transparency in the basis of their decisions, causing unfair outcomes and amplifying unequal access to finance (Truby, 2020; The Ethics of Artificial Intelligence for the Sustainable Development Goals, forthcoming 2022). The challenges and opportunities around AI require answering questions from quintessentially human fields, such as ethics and philosophy (Floridi, 2021), which do not have easy answers. Scholars have called for all stakeholders, including governments, policymakers, industry, and academia, to advance the dialogue on such issues so that the development of AI avoids potential threats and ensures that ethical principles are embedded in AI applications that affect our everyday lives (Holzinger et al., 2021).


4 AI and IP

The intersections between AI and IP that interest our analysis are bi-directional. On the one hand, AI impacts the IP system both negatively and positively. On the other hand, the IP system, and more specifically the patent system, directs innovation in AI, with pros and cons. The intersections between these two areas were the object of two public consultations by the UK government, the last one in November 2021 ('Artificial Intelligence and IP: Copyright and Patents', n.d.). AI influences the IP system positively in two ways: as an object of IP protection that is exponentially relevant in the market and various industry fields (de Bruin et al., 2022); and as a tool for IP offices to cut times and costs and maximise efficiency. AI can also be useful for establishing smart Intellectual Property Offices in developing countries (Prihastomo et al., 2019) and for analysing IP-related data (Aristodemou & Tietze, 2018); a minimal illustration of such analytics is sketched at the end of this section. At the same time, AI can negatively impact IP because it might challenge current laws and regulations, as an object of protection, a generator of protectable content, and a tool that can replace human resources. The governance of IP protection for AI itself is not straightforward. On the one hand, software and code can attract copyright protection, but they can be patented if applied to solve a technical problem (Hashiguchi, 2017). On the other hand, even if protectable, AI might be kept as a trade secret, which might affect data-driven innovation or copyright infringement actions (Ebrahim, 2020). Despite the tentative ways in which patent offices around the world have tried to adapt their patent systems to grant protection to software, the patentability of algorithm-based inventions is still an area of legal uncertainty. Concerns also arise about the application of patentability requirements, such as novelty, where national differences remain (Jacques, 2020). Moreover, AI can generate music, art, and inventions (Davies, 2011; Abbott, 2016), and as such it might challenge IP laws and norms that were conceived for human-generated content (Abbott, 2019; Ramalho, 2018). For example, as highlighted by Ebrahim (2020), the patentability of AI-generated inventions also complicates the patentability of AI. Currently, the disclosure requirement incentivises the disclosure of AI-generated inventions, i.e. "downstream innovation", but it does not incentivise the disclosure of the AI system itself and the dataset used, defined as "upstream innovation" (Ebrahim, 2020). IP (mainly, but not only, the patent system1) influences the development and applications of AI as a system of incentives that stimulates investments in innovation. However, patents, as well as copyright, provide technologically neutral incentives. Therefore, some laws are more favourable to the development of AI than others (Craig, 2021). For example, trade-offs between strong copyright protection and open-data innovation seem to be unavoidable, and the way policymakers decide to apply copyright directly affects AI development (Strowel & Ducato, 2021); likewise, the patent portfolio of AI enterprises affects the relationship between R&D intensity and innovation performance (Dong et al., 2021).

1. Other IP categories are relevant for the development of AI: an example is copyright in relation to data mining.
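As an illustration of the kind of IP analytics mentioned above, the minimal sketch below uses a standard text classifier (scikit-learn) to route patent abstracts to a likely technology field. The abstracts, labels, and fields are invented toy data; an IP office would instead train on large corpora of filings labelled with, for example, CPC/IPC codes.

# Minimal sketch of AI-assisted IP analytics: routing patent abstracts to a
# likely technology field with a text classifier. The abstracts and labels
# are invented toy data, not real filings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_abstracts = [
    "A neural network for detecting tumours in medical images.",
    "A battery electrode composition improving charge cycles.",
    "A reinforcement learning method for traffic signal control.",
    "An electrolyte additive reducing battery degradation.",
]
train_labels = ["ai", "energy", "ai", "energy"]

# TF-IDF features feed a simple logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_abstracts, train_labels)

new_abstract = ["A deep learning model forecasting wind farm output."]
print(model.predict(new_abstract))  # predicted technology field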

5 Discussing Potential Areas for Future Research

As the literature discussed in the previous sections shows, the three areas analysed in this research are interconnected and influence each other in different ways. On the one hand, AI can inhibit some of the SDGs, and some of the functioning mechanisms of the IP system negatively impact the development of AI for the SDGs. On the other hand, AI already serves as an SDGs-enabler and can further advance the SDGs, and the IP system plays a crucial role in both the achievement of the SDGs and the development of AI. The chapter aimed to report the intersections that might be relevant to unveil gaps and areas for further research concerning the trade-offs between these two angles. From the analysed literature, it is possible to adopt different lines of research, depending on the starting point. Namely, further research can focus on investigating:

1. to what extent the SDGs can guide the development of IP-protectable AI solutions at a policy level;
2. whether IP regulation incentivises the development of AI applications for the SDGs;
3. how AI can be used to measure the impact of IP on the SDGs.

The first research question requires a multidisciplinary approach, since it involves ethics, philosophy, geopolitics, macroeconomics, and international cooperation. It can be unpacked into micro-research questions that focus on advancing the dialogue between globalisation and localisation, cooperation and sovereignty, and the exploitation of resources and conservation, using technological solutions (such as AI applications) that can be incentivised, protected, and diffused by IP rights. The goal would be to investigate whether the philosophical and ethical foundation of the SDGs can be reconciled with the theoretical justifications of IP-protectable AI solutions. The second research question would focus on the impact of various IP regulations (copyright, patent, trademark, design, GIs, plant variety, etc.) on developing AI in different industry fields that benefit the SDGs. Such a research question would require combining results from case studies on different IP portfolios in different geographical areas. The third research question explores how to use AI to combine complex datasets that can inform IP policies to advance the SDGs. Such a goal would require harmonising the measurement of the SDGs' achievement level, for example through agreed-on Key Performance Indicators (KPIs), to evaluate an IP portfolio's performance in a specific time and space. The use of AI to assess IP performance through SDGs-related KPIs can help inform businesses, IP offices, and policymakers (Fig. 4.2); a minimal sketch of this third line of research follows the figure.

Fig. 4.2 One of the potential lines of research on AI, IP, and SDGs: KPIs to measure IP impact on SDGs, for IP offices and for businesses; AI methods to process, manage, and monitor SDGs-related KPIs; policies informed by IP-SDGs-related KPIs and AI-elaborated data; laws to incentivise sustainable innovation
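As a minimal sketch of this third line of research, the following Python snippet aggregates hypothetical SDG-alignment KPI scores over a toy IP portfolio. The dataset, column names, and scores are invented for illustration; in practice, the scores would be produced by an agreed-on KPI framework and AI-based analysis of portfolio data.

# Minimal sketch of the third research line: aggregating hypothetical
# SDG-alignment KPIs over an IP portfolio. The dataset, columns, and the
# 0-1 "sdg_alignment" scores are invented for illustration only.
import pandas as pd

portfolio = pd.DataFrame({
    "patent_id":     ["P1", "P2", "P3", "P4"],
    "region":        ["EU", "EU", "ASIA", "ASIA"],
    "year":          [2020, 2021, 2020, 2021],
    "sdg":           ["SDG7", "SDG3", "SDG7", "SDG13"],
    "sdg_alignment": [0.8, 0.6, 0.4, 0.9],  # hypothetical KPI scores
})

# KPI: mean SDG-alignment of the portfolio per region and year.
kpi = portfolio.groupby(["region", "year"])["sdg_alignment"].mean()
print(kpi)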


6 Conclusions

This chapter provided a snapshot of how AI, IP, and the SDGs relate to each other, as emerging from the literature. As illustrated, these fields are firmly connected, and the intersections among them present trade-offs, posing both challenges and opportunities for policymakers. However, the ethical, philosophical, and economic considerations of how these fields influence each other are limited, and advancing the knowledge of their interactions is timely and desirable for both public and private stakeholders. Among the various intersections considered, the chapter illustrated some research questions that can inform IP policies oriented towards maximising positive IP externalities towards the SDGs through AI.

References

Abbott, R. (2016). I think, therefore I invent: Creative computers and the future of patent law. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2727884
Abbott, R. (2019). Everything is obvious. UCLA Law Review, 66(1), 2–53.
Aristodemou, L., & Tietze, F. (2018). The state-of-the-art on intellectual property analytics (IPA): A literature review on artificial intelligence, machine learning and deep learning methods for analysing Intellectual Property (IP) data. World Patent Information, 55, 37–51. https://doi.org/10.1016/j.wpi.2018.07.002
Artificial Intelligence and IP: Copyright and Patents. (n.d.). GOV.UK. Accessed 6 Oct 2022. https://www.gov.uk/government/consultations/artificial-intelligence-and-ip-copyright-and-patents
Bannerman, S. (2020). The World Intellectual Property Organization and the sustainable development agenda. Futures, 122, 102586. https://doi.org/10.1016/j.futures.2020.102586
Barrera, A. G. (2020). Geographical indications for UN sustainable development goals: Intellectual property, sustainable development and M&E systems. International Journal of Intellectual Property Management, 10(2), 113–173. https://doi.org/10.1504/IJIPM.2020.108099
Cadogan, M. S. (2019). Using SDGs to leverage national intellectual property strategies, no. 2: 11.
Chon, M., Roffe, P., & Abdel-Latif, A. (2018a). Charting the triple interface of public–private partnerships, global knowledge governance, and sustainable development goals. In A. Abdel-Latif, M. Chon, & P. Roffe (Eds.), The Cambridge handbook of public-private partnerships, intellectual property governance, and sustainable development (Cambridge law handbooks) (pp. 3–26). Cambridge University Press. https://doi.org/10.1017/9781316809587.004
Chon, M., Roffe, P., & Abdel-Latif, A. (2018b). The Cambridge handbook of public-private partnerships, intellectual property governance, and sustainable development. Cambridge University Press.
Cowls, J., Tsamados, A., Taddeo, M., & Floridi, L. (2021). A definition, benchmark and database of AI for social good initiatives. Nature Machine Intelligence, 3(2), 111–115. https://doi.org/10.1038/s42256-021-00296-0
Craig, C. J. (2021). The AI-copyright challenge: Tech-neutrality, authorship, and the public interest. SSRN Scholarly Paper, Rochester, NY. https://doi.org/10.2139/ssrn.4014811
Davies, C. R. (2011). An evolutionary step in intellectual property rights – Artificial intelligence and intellectual property. Computer Law & Security Review, 27(6), 601–619. https://doi.org/10.1016/j.clsr.2011.09.006
de Bruin, J., Breimer, N., & Veenhuis, H. (2022). Commercialization and intellectual property of artificial intelligence applications in cardiovascular imaging. In C. N. De Cecco, M. van Assen,
& T. Leiner (Eds.), Artificial intelligence in cardiothoracic imaging (Contemporary medical imaging) (pp. 549–560). Springer. https://doi.org/10.1007/978-3-030-92087-6_51
Dong, Y., Wei, Z., Liu, T., & Xing, X. (2021). The impact of R&D intensity on the innovation performance of artificial intelligence enterprises – based on the moderating effect of patent portfolio. Sustainability, 13(1), 328. https://doi.org/10.3390/su13010328
Ebrahim, T. (2020). Artificial intelligence inventions & patent disclosure (SSRN Scholarly Paper ID 3722720). Social Science Research Network. https://papers.ssrn.com/abstract=3722720
Floridi, L. (2021). Introduction – The importance of an ethics-first approach to the development of AI. In L. Floridi (Ed.), Ethics, governance, and policies in artificial intelligence (Philosophical studies series) (pp. 1–4). Springer International Publishing. https://doi.org/10.1007/978-3-030-81907-1_1
Floridi, L., Cowls, J., King, T. C., & Taddeo, M. (2020). How to design AI for social good: Seven essential factors. Science and Engineering Ethics, 26(3), 1771–1796. https://doi.org/10.1007/s11948-020-00213-5
Gao, X., Zhu, J., & He, B.-J. (2022). The linkage between sustainable development goals 9 and 11: Examining the association between sustainable urbanization and intellectual property rights protection. Advanced Sustainable Systems, 6(3), 2100283. https://doi.org/10.1002/adsu.202100283
Gill, A. S., & Germann, S. (2021). Conceptual and normative approaches to AI governance for a global digital ecosystem supportive of the UN Sustainable Development Goals (SDGs). AI and Ethics, 1–9. https://doi.org/10.1007/s43681-021-00058-z
Hashiguchi, M. (2017). The global artificial intelligence revolution challenges patent eligibility laws. Journal of Business and Technology Law, 13(1), 1–36.
Holzinger, A., Weippl, E., Tjoa, A. M., & Kieseberg, P. (2021). Digital transformation for sustainable development goals (SDGs) – A security, safety and privacy perspective on AI. In A. Holzinger, P. Kieseberg, A. M. Tjoa, & E. Weippl (Eds.), Machine learning and knowledge extraction (Lecture notes in computer science) (pp. 1–20). Springer. https://doi.org/10.1007/978-3-030-84060-0_1
Hsu, M.-Y. (2007). Green patent: Promoting innovation for environment by patent system. In PICMET '07 – 2007 Portland international conference on management of engineering technology (pp. 2491–2497). https://doi.org/10.1109/PICMET.2007.4349585
Jacques, S. (2020). Patenting algorithms in an internet of things and artificial intelligence world: Pathways to harmonizing the patentable subject matters and evaluation of the novelty requirement. Japanese Institute of Intellectual Property. https://ueaeprints.uea.ac.uk/id/eprint/77062/
Liengpunsakul, S. (2021). Artificial intelligence and sustainable development in China. The Chinese Economy, 54(4), 235–248. https://doi.org/10.1080/10971475.2020.1857062
McDave, K. E., & Hackman-Aidoo, A. (2021). Africa and SDG 9: Toward a framework for development through intellectual property. US-China Law Review, 18(1), 12–29.
Noto La Diega, G. (2019). Can artificial intelligence and the internet of things be governed to achieve the UN sustainable development goals? An intellectual property law perspective. SSRN Scholarly Paper. https://papers.ssrn.com/abstract=3505247
Prihastomo, Y., Kosala, R., Supangkat, S. H., Ranti, B., & Trisetyarso, A. (2019). Theoretical framework of smart intellectual property office in developing countries. Procedia Computer Science, 161, 994–1001. https://doi.org/10.1016/j.procs.2019.11.209
Ramalho, A. (2018). Patentability of AI-generated inventions: Is a reform of the patent system needed? SSRN Scholarly Paper. Social Science Research Network. https://doi.org/10.2139/ssrn.3168703
Rimmer, M. (2018). A submission on intellectual property and the United Nations sustainable development goals. Senate Standing Committee on Foreign Affairs, Defence, and Trade. https://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Foreign_Affairs_Defence_and_Trade/SDGs

Strowel, A., & Ducato, R. (2021). Artificial intelligence and text and data mining: A copyright carol. In The Routledge handbook of EU copyright law. Routledge.
Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752. https://doi.org/10.1126/science.aat5991
Taddeo, M., & Floridi, L. (2021). How AI can be a force for good – An ethical framework to harness the potential of AI while keeping humans in control. In L. Floridi (Ed.), Ethics, governance, and policies in artificial intelligence (Philosophical studies series) (pp. 91–96). Springer. https://doi.org/10.1007/978-3-030-81907-1_7
Tomašev, N., Cornebise, J., Hutter, F., Mohamed, S., Picciariello, A., Connelly, B., Belgrave, D. C. M., et al. (2020). AI for social good: Unlocking the opportunity for positive impact. Nature Communications, 11(1), 2468. https://doi.org/10.1038/s41467-020-15871-z
Truby, J. (2020). Governing artificial intelligence to benefit the UN sustainable development goals. Sustainable Development, 28(4), 946–959. https://doi.org/10.1002/sd.2048
Di Vaio, A., Palladino, R., Hassan, R., & Escobar, O. (2020). Artificial intelligence and business models in the sustainable development goals perspective: A systematic literature review. Journal of Business Research, 121, 283–314. https://doi.org/10.1016/j.jbusres.2020.08.019
Vinuesa, R., Azizpour, H., Leite, I., Balaam, M., Dignum, V., Domisch, S., Felländer, A., Langhans, S. D., Tegmark, M., & Nerini, F. F. (2020). The role of artificial intelligence in achieving the sustainable development goals. Nature Communications, 11(1), 233. https://doi.org/10.1038/s41467-019-14108-y
Visvizi, A. (2022). Artificial Intelligence (AI) and Sustainable Development Goals (SDGs): Exploring the impact of AI on politics and society. Sustainability, 14(3), 1730. https://doi.org/10.3390/su14031730

Chapter 5

Cyber Weapons and the Fifth Domain: Implications of Cyber Conflict on International Relations

Joshua Jaffe

Abstract  Since the advent of the internet, governments have endeavored to shape its use for their strategic advantage. This ambition extends to military applications, where control of the cyber domain has become a primary element of modern conflicts. Increasingly, military doctrine features cyber weapons not only as a tool to strategically prepare the battlefield, but also as a critical component of battlefield tactics. The impact of cyber weapons on both national defense strategy and battlefield tactics is so significant that many military theorists have characterized their emergence as a Revolution in Military Affairs (RMA), a tectonic shift in capabilities akin to the advent of manned flight or nuclear fission. This paper analyzes the modern uses of cyber weapons through the lens of recent RMAs, suggesting potential implications for policy and areas for further study by the academy.

Keywords  Cyber warfare · Cyber conflict · International relations · Defense

Revolutions in technology shape economies, state politics, and international relations (IR), often resulting in inter-state competition for advantage. Sometimes, technological advances result in new weapons that upend the strategic paradigms of the day. Writing in Geopolitical Futures, Jacek Bartosiak describes these kinds of advances as Revolutions in Military Affairs (RMA): tectonic shifts in capabilities that usher in a new phase in the way wars are conducted.1 Other times, a revolutionary discovery in the natural world opens an entirely unexplored theatre for competition and conflict. Yet consider for a moment: what would happen if such a revolutionary discovery and a game-changing RMA occurred simultaneously? We are living through such a time. The advent of the digital age ushered in what has come to be known as the Fourth Revolution, a fundamental change in the way



societies function as a result of Information and Communications Technologies (ICT).2 The creation of paradigm-shifting cyber weapons, as well as the ongoing competition for the so-called fifth domain of conflict, cyberspace, makes this question practical, not theoretical.3

1 The Cold War: A Recent Lesson

There is a range of classical IR theories that each attempt to explain competition for resources within the domains of military competition that existed at the time. How instructive are they for navigating competition and conflict in the cyber domain? Does the West need an updated, modern 'grand strategy' for our digital world? The Cold War provides an interesting case study for the evaluation of these questions. It extended into a new domain of conflict brought about by technological innovations, namely the advent of nuclear power and the first generation of atomic weapons. At the same time, advances in manned space flight opened an entirely novel domain of conflict in previously uncontested space. This novel domain of conflict challenged an age-old understanding dating back to the Peace of Westphalia. Since 1648, scholars had confined their theorizing to conflict in the terrestrial world and laid over it a framework built on an understanding of the inviolability of borders and non-interference by third-party nations in the domestic affairs of a state. With the opening of space for competition and conflict, these assumptions no longer held. There existed a borderless theatre for conflict with no clear, shared understanding of where states defined their critical national interest. Combined with the RMA in atomic weapons and ballistic missiles, states had no mutually understood doctrine for the use of these new weapons should conflict arise, whether within the Westphalian world or outside it (Cimbala & McDermott, 2014). Described by the New York Times as 'the Dean of the Cold War,' John Lewis Gaddis offers, in his writings on this subject, an appropriate lens through which to reflect on the history of the Cold War and the lessons it teaches for modern deterrence of cybercrime.4 In his seminal work on the Cold War, Strategies of Containment, Gaddis describes the approach taken by the two rival blocs as one of 'strategic deterrence.'5 Yet, for a state's weapons to have a deterrent effect, the other side's population – and specifically its leaders – need to fear the power it possesses. This logic led to the creation and public testing of weapons of such immense power that they constituted a show of force overwhelming enough to deter any rational actor. In total, it is estimated that more than 2000 nuclear tests were conducted in the second half of the twentieth century. The largest of these was conducted in 1961

1. J. Bartosiak (2019).
2. L. Floridi (2017).
3. R. Clark (2019).
4. P. McMillan (1997).
5. J. Gaddis (1982).


over Mityushikha Bay, which resulted in an explosion with the destructive effect of more than 50 megatons, or 50 million metric tons, of TNT.6 Predictably, the escalation in the size of the weapons and the public nature of the tests led to an arms race between the rival factions, with neither side willing to incur the risk of possessing power insufficient to deter the actions of the other. However, given the power of the weapons possessed by each side and the risk of misunderstanding, both sides engaged in a practice of publicly documenting their rules for the use of these weapons in a framework that came to be known as an 'escalation ladder.'7 The use of strategic weapons under this doctrine was largely confined to the protection of vital national interests. Conventional, not nuclear, forces were used to defend the periphery. Furthermore, asymmetrical tactics such as assassination, propaganda, and so-called low-intensity conflict helped defend peripheral interests while also sowing doubt in the legitimacy and efficacy of the opposing power bloc.

2 Applying These Lessons to the Fifth Domain

There are lessons to learn from the Cold War's clear parallels with modern cyber challenges. First, the emergence of a new type of weapon which precedes the formation of any established doctrine for its use, combined with the modern ambiguity about the sources of a cyberattack, nearly mirrors the Cold War example above. Like nuclear weapons, cyber weapons are able to strike a state's industry, critical infrastructure, and commerce without warning. Cyber defence is difficult, so deterrence theory is certainly relevant. The lack of a doctrine for the use of these new cyber weapons results in a similar 'strategic ambiguity,' where leaders can misunderstand the risk posed to them by a potential adversary. The second parallel is the need for a weapon's capability to be understood by an adversary for it to have any deterrent effect. In the Cold War, weapons were tested in public. Today, cyber weapons are tested by proxy. Thus, we regularly see the use of deniable, low-intensity cyber conflict and propaganda by state-affiliated actors. Each act deniably probes an adversary's digital borders, uncovering weaknesses, while simultaneously revealing more about where a state defines its national interest in digital space. Finally, states appear to be self-organizing into digital spheres of interest, mirroring the Cold War pattern of aligning into power blocs for the purpose of mutual security and advantage. Yet there are some limitations to this framing as well. Even in a well-understood global environment with long-established norms for state behaviour, the strategic actions of a state can never be entirely reduced to the predictable, mechanistic output of its leaders. There is a range of other actors within the system, such as field

6. C. Kelly (2014).
7. J. Gaddis (1999).


commanders, political functionaries, leaders of industry, and the state's population itself, who exert significant influence on the actions of a state. This was certainly true during the Cold War and is even more true in cyberspace. Many IR theorists have tried to account for these other, more nuanced factors in their theorizing. Counterposed to the 'grand strategy' theory of statecraft is a tradition of more detailed micro-analysis focused on factors internal to the state as the main drivers of its behaviours. This approach not only adds a level of state introspection, but it also offers some important insights when applied to the question of the state's role in cyber conflict. Sociologist Jack Goldstone's work explained the role of social movements and how demography shapes states' security strategies. In Goldstone's Political Demography, human behaviour, and more importantly material human need, drives actions that are largely independent of national identity and state interest.8 Demography and resource scarcity create pressure in a social system that compels civil society toward innovation and consumption while simultaneously driving the state towards expansion. The actions of states are subordinate to the factors influencing their domestic population and the resource constraints within their geography.9 According to Goldstone, as pressure builds within their state system, states respond by pulling a range of the levers of state power. Oftentimes, an initial step is the expansion of social services and subsidies, which provides momentary reprieve. As pressure builds, this gives way to repression, which also temporarily maintains the status quo but increases the systemic pressure. This is coupled with labor export through outmigration to relieve pressure, and with engagement in deniable asymmetric conflict to increase social cohesion by pointing internal discontent towards external adversaries. At times, this gives way to territorial expansion of borders for resource acquisition through conflict. These general conceptions remained true for virtually all of the world's history, as wealth creation largely remained coupled with the ability to extract and consume resources. However, in the digital age, for the first time economies were able to create assets and resources that existed outside of the physical world. As argued by Andrew McAfee in his book More from Less, software, digital currency, and data operate in a new paradigm.10 This changed, or offered the possibility of changing, the volume of the "pressurized container" of the state. Applied to the cyber domain of conflict, no longer is a state's stability tied to the relationship between population and consumable, physical resources. Grand strategy can now contemplate taking digital resources through conflict without altering Westphalian borders. For example, reporting in Privacy Affairs shows clearly that criminal gangs accounting for large amounts of disruptive cybercrime originating

8. Goldstone (2012).
9. Collins et al. (1999).
10. McAfee (2019).


from Russia and China target entities exclusively outside of those states.11 Cyberspace enables borderless digital theft, as well as protest through cybercrime and 'hacktivism,' unmoored from the constraints of proximity in the physical world. Repression, too, takes more than just physical forms and now includes digital propaganda, privacy infringement, and censorship (Robinson, 2020).

3 States Need a New Approach to Defence That Accounts for Both Cyber Weapons and Cyber Space?

So what are we to make of these parallels between the Cold War and cyber warfare? They paint a full, though imperfect, picture of the problem posed by cyber weapons and explain the differing state approaches to incorporating cyber capabilities in their arsenals. They also reveal the blurring lines between the foreign and the domestic, as well as the external and the internal. Cyber conflict could happen in isolation or be inextricably linked with a broader state aim, resulting in far more complicated challenges. It often incurs damage in unintended, collateral geographies, leading to the transnationalization of cyberspace and implicating spheres ranging from criminal justice and economics to security and defence. This reality requires states to possess a clearly defined conception of their national interests in cyberspace. Aristotle is credited with saying that 'nature abhors a vacuum.' In the absence of state doctrine for the use of cyber weapons and the collective enforcement of norms (as was the case with the Nuclear Non-Proliferation Treaty), human impulses will fill the void. As such, the treaty-based collective establishment of norms, as well as a stated doctrine for state use of cyber weapons, seems a necessary step. The world would also benefit from a clear understanding of red lines and escalation ladders, reducing ambiguity and the likelihood of miscalculation. Moreover, where the parallels paint an imperfect picture, we should improve it. The framing of cyber both as zero-sum and as an existential struggle for survival between two hegemons seems misplaced. The current world is multipolar, with the EU exerting significant pressure on the nature of digital sovereignty and data subject rights that is distinct from both the US and China. Also, corporations have a significant – maybe dominant – role in the development of cyber norms, the nature of defence, and the limitations of state power in cyberspace. The recognition of individual actors and their impact on the state is also key to framing the cyber picture. Individuals often act in their own interests in the commission of cybercrime, targeting both domestic and foreign enterprises without much regard for the crossing of Westphalian boundaries. This makes redress of cyber conflict difficult without cooperation. Interstate investments that provide disincentives for cybercrime, as well as the treaty-based adjudication of claims against entities in other nations that violate these norms, would also be necessary. This level of

11. A. Kramer (2021).

J. Jaffe

cooperation is presently lacking from global governance of cyberspace as the relics of geopolitical rivalries among actors still primarily define the landscape of international relations, inherited from the Cold War context. Finally, it seems that any model must go further and address the role played by corporations. In the current day, corporations develop most cyber infrastructure as well as the cyber defences that protect it. The cyber weapons created by even the most sophisticated states both rely upon and often attack the technology and architecture created by these corporations. Workable models of cyber strategy should recognize the incentive structures within which corporate actors operate and should also propose solutions in the state geographies where they are regulated. It is clear that a state’s critical national interest is no longer limited to territory and physical resources. A state’s strategy must also determine how it will protect digital resources and the companies that create them.

References Bartosiak, J. (2019). Revolution in military affairs. Geopolitical Futures. https://geopoliticalfutures.com/pdfs/the-­revolution-­in-­military-­affairs-­geopoliticalfutures-­com.pdf Cimbala, S. J., & McDermott, R. N. (2014). A new cold war? Missile defence, nuclear arms reductions, and cyber war. Comparative Strategy, 34(1), 95–111. Clark, R. (2019). The fifth domain: Defending our country, our companies, and ourselves in the age of cyber threats. Penguin Publishers. Collins, R., et  al. (1999). Macrohistory: Essays in sociology of the long run. Stanford University Press. Floridi, L. (2017). The fourth revolution, how the infosphere is reshaping human reality. Oxford University Press. Gaddis, J. L. (1982). Strategies of containment. Oxford University Press. Gaddis, J.  L. (1999). Cold war statesmen confront the bomb: Nuclear diplomacy since 1945. Oxford University Press. Goldstone, J. (2012). Political demography: How population changes are reshaping international security and national politics. Oxford University Press. Kelly, C. (2014). Atomic heritage. In US National Museum of Nuclear Science & History. 8 August 2014. Kramer, A.. (2021). Companies linked to Russian ransomware hide in plain sight. New York Times. https://www.nytimes.com/2021/12/06/world/europe/ransomware-­russia-­bitcoin.html McAfee, A. (2019), More from Less: The Surprising Story of How We Learned to Prosper Using Fewer Resources. Scribner McMillan, P.  J.. (1997). Cold warmonger. The New  York Times. https://www.nytimes. com/1997/05/25/books/cold-­warmonger.html Robinson, J.. (2020). Cyberwarfare statistics: A decade of geopolitical attacks. Privacy Affairs. https://www.privacyaffairs.com/geopolitical-­attacks

Chapter 6

A Comparative Analysis of the Definitions of Autonomous Weapons

Mariarosaria Taddeo and Alexander Blanchard

Abstract  In this article we focus on the definition of autonomous weapons systems (AWS). We provide a comparative analysis of existing official definitions of AWS as provided by States and international organisations, like the ICRC and NATO. The analysis highlights that the definitions focus on different aspects of AWS and hence lead to different approaches to addressing the ethical and legal problems of these weapons systems. This divergence is detrimental both to fostering an understanding of AWS and to facilitating agreement around the conditions of deployment and regulation of their use and, indeed, around whether AWS are to be used at all. We build on the comparative analysis to identify essential aspects of AWS and then offer a definition that provides a value-neutral ground to address the relevant ethical and legal problems. In particular, we identify four key aspects – autonomy; the adapting capabilities of AWS; human control; and purpose of use – as the essential factors in defining AWS, and which are key when considering the related ethical and legal implications.

Keywords  Adapting capabilities · Autonomy · Autonomous artificial agents · Autonomous weapons systems · Artificial intelligence · Definition · Human control · Lethal autonomous weapons systems

This chapter is a reprint of (Taddeo and Blanchard, 2022b).


1 Introduction

The debate on the ethical and legal implications of autonomous weapons systems (AWS) dates back to the early 2000s, with some proponents (Arkin, 2009) defending the use of these systems and others calling for a ban (Sharkey, 2008, 2010; Sparrow, 2007). The debate has become much more active since 2012, when the US Department of Defense (DoD) published an executive order on AWS (US Department of Defense, 2012) which, along with the report from Human Rights Watch (Losing Humanity: The Case against Killer Robots, 2012), revamped the international debate on the ethical and legal problems posed by AWS. Since then, the debate has grown with contributions from scholars, military and policy experts, and the involvement of the International Committee of the Red Cross (ICRC), the UN Institute for Disarmament Research (UNIDIR), and the UN Convention on Certain Conventional Weapons (CCW), which established a Group of Governmental Experts (GGE) to discuss emerging technologies in the area of lethal autonomous weapon systems (LAWS).

While the debate remains deeply polarised as to whether the use of AWS is ethically acceptable and legally sound, there is at least consensus as to which ethical and legal aspects are to be considered in making this call: respect for human dignity, International Humanitarian Law (IHL), and international stability. IHL is central to this debate, as there is consensus that AWS can only be deployed insofar as they abide by the IHL principles of necessity, proportionality, and distinction. These principles are uncontroversial; what is problematic is understanding whether, and to what extent, the autonomous artificial agents enabling AWS can comply with them.1 For example, respecting the principle of distinction is problematic insofar as, at least at the current state of development, autonomous artificial agents are unable to analyse the context in which they operate with the necessary precision to distinguish what/who is a legitimate target (Sharkey, 2010, 2016; Amoroso & Tamburrini, 2020). The IHL principles define 'operational' requirements which, if not met by current models of AWS, might be met, at least in theory, by more refined AWS in the future.

More fundamental problems emerge when considering AWS and human dignity. In this case the question concerns how a person is killed or injured: the focus is on the process through which the decisions to injure or kill are made. If the decision to kill or injure a human being is taken by a machine, then the human dignity of those targeted is violated (Asaro, 2012; Docherty, 2014; Sharkey, 2019; Johnson & Axinn, 2013; Sparrow, 2016; O'Connell, 2014; Ekelhof, 2019). The impact of the use of AWS on human dignity is independent of the level of sophistication of the

1 See the following articles on the principles of distinction (Blanchard & Taddeo, 2022b), jus ad bellum proportionality (Blanchard & Taddeo, 2022a), and necessity (Blanchard & Taddeo, 2022c) in application to AWS. On the ethical principles for the use of AI in defence see Taddeo et al. (2021); on the issues of moral responsibility for the actions of AWS see Taddeo and Blanchard (2022a).


technology, for it questions the legitimacy of delegating the decision on the use of force, possibly lethal force, to machines (Lieblich & Benvenisti, 2016). It questions whether delegating this decision is compatible with the values upheld by our societies, and refers back to the notions of humanity and public conscience, which are central to the legitimacy of any weapon, not only AWS. As the ICRC report stresses,

ethical decisions by States, and by society at large, have preceded and motivated the development of new international legal constraints in warfare, including constraints on weapons that cause unacceptable harm. In international humanitarian law, notions of humanity and public conscience are drawn from the Martens Clause (International Committee of the Red Cross (ICRC), 2018, 1).

Ultimately, problems related to human dignity refer to human agency, to the decisions and actions that humans should and should not delegate, and to the moral responsibilities linked to this agency and to the decision to use force. Ascribing moral responsibility for the actions performed by AI systems has proved to be extremely problematic in many domains, and the case of AWS is no exception. As argued by (reference removed for review purposes), whilst a responsibility gap is problematic in all the categories of use of AI within the defence and security domain – namely, sustainment and support, adversarial and non-kinetic, and adversarial and kinetic – the gap is particularly worrying when considering the adversarial and kinetic uses of AI, given the high stakes involved (Sparrow, 2007).

Questions also arise with respect to the impact of AWS on international stability. On the one side, AWS may lead to an increased incidence of war and hamper international stability by 'lowering the barriers' to warfare (Enemark, 2011; Brunstetter & Braun, 2013). For instance, it may be the case that the widespread use of AWS would allow decision-makers to wage wars without the need to overcome the potential objections of military personnel or of a democratic populace more broadly (Steinhoff, 2013; Heyns, 2014). In the same vein, the asymmetric warfare that would result from one side using AWS may lead the weaker side to resort to insurgency and terrorist tactics more often (Sharkey, 2012). Because terrorism is generally considered to be a form of unjust warfare (or, worse, an act of indiscriminate murder), deploying AWS may lead to a greater incidence of unjust violence.

Scholarly and policy efforts focusing on these topics have grown over time. However, almost ten years on from the DoD executive order and the Human Rights Watch report, a shared international (let alone global) approach to address these problems has not yet been defined. The reasons behind this failure are multiple, ranging from political will and competing interests at the international level to defence postures, all compounded by the lack of a shared understanding of AWS, of their key features, and of the related ethical and legal implications. As a UNIDIR report stresses,

proponents and opponents of AWS will seek to establish a definition that serves their aims and interests. The definitional discussion will not be a value-neutral discussion of facts, but ultimately one driven by political and strategic motivations (UNIDIR, 2017, 22).

Indeed, our analysis identified 12 definitions of AWS proposed by States or key international actors, such as the ICRC and NATO. The definitions focus on


different aspects of AWS and hence lead to different approaches to addressing the ethical and legal problems of these weapons systems. Clearly, this is detrimental both in terms of fostering an understanding of AWS and in facilitating agreement around conditions of deployment and regulations of their use and, indeed, whether AWS are to be used at all. This becomes evident when considering the work of the CCW GGE. Table 6.1 below summarises the key points of the discussions of this group between 2014 and 2019. It shows that, while there is consensus on the key aspects of AWS and on the ethical problems that they pose, a shared definition, and therefore a shared understanding, of AWS and of what aspects pose the most pressing ethical and legal problems is still lacking. Consider, for example, how the points reported in Table 6.1 often conflate AWS with LAWS and the related ethical and regulatory problems.

This article aims to fill this gap. We offer a comparative analysis of existing definitions of AWS with the goal of identifying the different approaches that underpin them, their similarities and differences, as well as their limitations. We draw from this analysis to identify essential aspects of AWS and then offer a definition that provides a value-neutral ground to facilitate efforts to address the relevant ethical and legal problems. In doing so, we aim to fill the gap identified by UNIDIR (2017, 22). In particular, we identify four key aspects – autonomy; adapting capabilities of AWS; human control; and purpose of use – as the essential factors to define AWS and as key when analysing the related ethical and legal implications.

Before moving forward with our analysis, we should clarify that, for the purpose of this article, we focus on AWS and consider LAWS as a subset of this category. LAWS are AWS with a specific purpose of use, i.e. deploying lethal force, as opposed to the wider set of purposes of use of AWS, e.g. anti-material damage and destruction. In terms of the scope of our analysis, this enables us to consider a wider set of technologies and purposes of use. It should be stressed that ethical problems related to AWS – e.g. issues of control, responsibility, predictability – apply a fortiori when considering LAWS. At the same time, LAWS pose specific ethical problems – e.g. respect of human dignity and of military virtue – related to the lethal purpose of their use.

2 Definitions of Autonomous Weapon Systems

We identified 12 definitions of AWS or LAWS (Table 6.2) provided by States (either endorsed or retrieved from official documents) and by international organisations, like the ICRC and NATO.2 This plethora of definitions hampers the international debate on the ethical and legal implications of AWS.

2 NATO offers a definition of autonomous systems and not specifically of AWS. Nonetheless, we include it here insofar as it refers to identifying characteristics of AWS.


Table 6.1  Key points of the discussions held at the CCW GGE between 2014 and 2019

2014: Many interventions stressed the fact that, even if the elaboration of a definition was premature, some key elements appeared as pertinent to describe the concept of autonomy for LAWS, for example the capacity to select and engage a target without human intervention. Some experts highlighted the fact that autonomy should be measurable and should be based on objective criteria such as capacity of perception of the environment, and ability to perform pre-programmed tasks without further human action. Many interventions stressed that the notion of meaningful human control could be useful to address the question of autonomy. Other delegations also stated that this concept requires further study in the context of the CCW. The concept of human involvement in design, testing, reviews, training and use was discussed. The notion of predictability was also underlined by some delegations as a key issue. Convention on Certain Conventional Weapons (2014, 4)

2017: The need to improve shared understanding of autonomous weapon systems was recognised. The elaboration of a working definition of LAWS, without prejudice to the definition of systems that may be subject to future regulation, was encouraged. Consideration was given to the scope of a possible definition, including questions of systems already deployed, defensive versus offensive weapons, and the distinction between fully and semi-autonomous systems. The view that it was premature or unhelpful to begin work on definitions was also put forward. Convention on Certain Conventional Weapons (2017, 7)

2018: Technical characteristics related to self-learning (without externally-fed training data) and self-evolution (without human design inputs) have to be further studied. Similarly, attempting to define a general threshold level of autonomy based on technical criteria alone could pose difficulty as autonomy is a spectrum, its understanding changes with shifts in the technology frontier, and different functions of a weapons system could have different degrees of autonomy. In the context of the CCW, a focus on characteristics related to the human element in the use of force and its interface with machines is necessary in addressing accountability and responsibility. Convention on Certain Conventional Weapons (2018, 5)

2019: On agenda item 5(b), 'Characterization of the systems under consideration in order to promote a common understanding on concepts and characteristics relevant to the objectives and purposes of the Convention', the Group concluded as follows: '(a) The role and impacts of autonomous functions in the identification, selection or engagement of a target are among the essential characteristics of weapons systems based on emerging technologies in the area of lethal autonomous weapons systems, which is of core interest to the Group; (b) Identifying and reaching a common understanding among High Contracting Parties on the concepts and characteristics of lethal autonomous weapons systems could aid further consideration of the aspects related to emerging technologies in the area of LAWS.' (p. 5) '(b) Different potential characteristics of emerging technologies in the area of lethal autonomous weapons systems, including: self-adaption; predictability; explainability; reliability; ability to be subject to intervention; ability to redefine or modify objectives or goals or otherwise adapt to the environment; and ability to self-initiate.' Convention on Certain Conventional Weapons (2019, 5)


Table 6.2  Twelve definitions of AWS and LAWS as provided by States or international organisations between 2012 and 2020

Canada (2018): [S]ystems with the capability to independently compose and select among various courses of action to accomplish goals based on its [information] and understanding of the world, itself, and the situation. (Whilst Canada has no 'official' definition, this is the definition used by the Department of National Defence (DND).) Department of National Defence (2018); see also Ariel Shapiro (2019)

China (2018): LAWS should include but not be limited to the following 5 basic characteristics. The first is lethality, which means sufficient pay load (charge) and for means to be lethal. The second is autonomy, which means absence of human intervention and control during the entire process of executing a task. Thirdly, impossibility for termination, meaning that once started there is no way to terminate the device. Fourthly, indiscriminate effect, meaning that the device will execute the task of killing and maiming regardless of conditions, scenarios and targets. Fifthly evolution, meaning that through interaction with the environment the device can learn autonomously, expand its functions and capabilities in a way exceeding human expectations. China (2018, 1). This definition differs from the definition set out by the People's Liberation Army in 2011: [LAWS are] a weapon that utilizes AI to automatically pursue, distinguish, and destroy enemy targets; often composed of information collection and management systems, knowledge base systems, assistance to decision systems, mission implementation systems, etc. Kania (2018a, b)

France (2016): Lethal autonomous weapons are fully autonomous systems. LAWS are future systems: they do not currently exist. [...] LAWS should be understood as implying a total absence of human supervision, meaning there is absolutely no link (communication or control) with the military chain of command. [...] The delivery platform of a LAWS would be capable of moving, adapting to its land, marine or aerial environments and targeting and firing a lethal effector (bullet, missile, bomb, etc.) without any kind of human intervention or validation. [...] LAWS would most likely possess self-learning capabilities. République Française (2016, 1–2). Additionally: Given the complexity and diversity of environments (particularly in urban areas) and the difficulty of building value-laden algorithms capable of complying with the principles of international humanitarian law (IHL), a LAWS would most likely possess self-learning capabilities, since it seems unrealistic to pre-program all the scenarios of a military operation. This means, for instance, that the delivery system would be capable of selecting a target independently from the criteria that have been predefined during the programming phase, in full compliance with IHL requirements. With our current understanding of future technological capacities, a LAWS would therefore be unpredictable. République Française (2016, 2)

Germany (2020): LAWS [are] weapons systems that completely exclude the human factor from decisions about their employment. Emerging technologies in the area of LAWS need to be conceptually distinguished from LAWS. Whereas emerging technologies such as digitalization, artificial intelligence and autonomy are integral elements of LAWS, they can be employed in full compliance with international law. Federal Foreign Office (2020, 1)

ICRC (2016): Any weapon system with autonomy in its critical functions. That is, a weapon system that can select (i.e. search for or detect, identify, track, select) and attack (i.e. use force against, neutralize, damage or destroy) targets without human intervention. International Committee of the Red Cross (2016, 1)

Israel (2018): In Israel's view, the shared starting point for this discussion must be that all weapons, including LAWS, are and will always be utilized by humans. We should stay away from imaginary visions where machines develop, create or activate themselves – these should be left for science-fiction movies. As far as terminology is concerned, that means that LAWS should not be regarded as "deciding" anything. Humans are always those who decide, and LAWS are decided upon. Yaron (2018, 2)

NATO (2020): Automated system: a system that, in response to inputs, follows a predetermined set of rules to provide a predictable outcome. Autonomous system: a system that decides and acts to accomplish desired goals, within defined parameters, based on acquired knowledge and an evolving situational awareness, following an optimal but potentially unpredictable course of action. NATO (2020, 16)

Norway (2017): Norway has not yet concluded on a specific legal definition of the term 'fully autonomous weapons systems'. Generally speaking, however, in using the term, we refer to weapons that would search for, identify and attack targets, including human beings, using lethal force without any human operator intervening. These must be distinguished from weapons systems already in use that are highly automatic, but which operate within such tightly constrained spatial and temporal limits that they fall outside the category of 'fully autonomous weapons'. Norway (2017, 1)

Switzerland (2016): Weapons systems that are capable of carrying out tasks governed by IHL in partial or full replacement of a human in the use of force, notably in the targeting cycle. Switzerland (2016, 2)

The Netherlands (2017): A weapon that, without human intervention, selects and engages targets matching certain predefined criteria, following a human decision to deploy the weapon on the understanding that an attack, once launched, cannot be stopped by human intervention. The Netherlands (2017, 1)

United Kingdom* (2018): An autonomous system is capable of understanding higher-level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state. It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present. Although the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not. Ministry of Defence (2018a, 13)

United Kingdom* (2016), UK CCW GGE contribution: UK understands such a system [fully autonomous LAWS] to be one which is capable of understanding, interpreting and applying higher level intent and direction based on a precise understanding and appreciation of what a commander intends to do and perhaps more importantly why [...] Critically, this understanding is focused on the overall effect the use of force is to have and the desired situation it aims to bring about. From this understanding, as well as a sophisticated perception of its environment and the context in which it is operating, such a system would decide to take – or abort – appropriate actions to bring about a desired end state, without human oversight, although a human may still be present. The output of such a system could, at times, be unpredictable – it would not merely follow a pattern of rules within defined parameters. Foreign & Commonwealth Office (2016, 2)

US Department of Defense (2012): A weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation. US Department of Defense (2012, 13–14)

* The UK adopted the NATO definition of autonomous systems, but it did not abandon any of its previous definitions provided in 2016 and 2018. This is problematic insofar as the UK definitions and the NATO definition set different requirements to identify AWS, which taken together may hamper attempts to define national, coherent policy approaches to AWS.

For example, it has been reported3 that as of August 2020, 30 states had declared their endorsement of a pre-emptive ban on AWS. However, without a shared understanding of what AWS are, it is hard to identify which AWS to ban, let alone to enforce any such ban.

China offers a good example of the case in point. Roberts et al. (2020) highlight that Chinese military officials express concerns about the use of AI for kinetic and aggressive purposes, and that these concerns motivate Chinese support for restricting the use of AWS, as expressed at the 5th Review Conference of the CCW and in the more recent call supporting a ban on the use of LAWS. However, they also stress that "the

3 https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons-and


definition of autonomy embraced by China is extremely narrow, as it focuses only on fully autonomous weapons (Kania, 2018a, b, emphasis added)" (p. 63) and leaves unaddressed AWS that may have lower levels of autonomy.

This is also the case with other definitions focusing on full autonomy, like the UK definition, which centres on fully autonomous systems "capable of understanding higher-level intent and direction". The UK is 'out of step' for its primary focus on the 'intention' of the system, whilst its international partners focus on human (non-)intervention with the system (Select Committee on Artificial Intelligence, 2018, 105). This point has been further affirmed in various meetings of the GGE and in a report by the House of Lords' Select Committee on Artificial Intelligence.4 The definition refers to cognitive capabilities that AI systems do not currently possess and are very unlikely to gain in the future (Floridi, 2014; Wooldridge, 2020). Indeed, "capable of understanding higher-level intent and direction" defines an atypically high threshold for what is to be considered 'autonomous'. France's definition is in the same vein: it explicitly states that AWS as it defines them "do not currently exist".

Considered from a broader perspective, this approach has the effect of informing future directions of technological innovation by indicating limits to possible uses of AI technologies. In doing so, it may enable regulations to gain an advantage over technological innovation. But this approach rests on a paternalistic view of the role of regulations and regulators, which is problematic per se and may have the undesired effect of hampering technological innovation. When considering AWS specifically, defining the governance of these systems by focusing on futuristic scenarios is detrimental for two reasons. First, focusing on systems that are not currently developed, or whose characteristics are technologically unfeasible, diverts focus from the pressing ethical and legal problems posed by existing AWS and by those that may be deployed in the foreseeable future. Second, it undermines regulations and declarations about banning AWS, insofar as these refer to hypothetical AWS with features that current and foreseeable systems do not have, for example 'understanding' and 'intent'. In this case, the implication is that official declarations banning AWS refer to systems which do not yet exist, and leave unaddressed other systems currently being developed. For example, Article36 stressed that statements made by the UK such as "we have no plans to develop or acquire such weapons", as reported alongside its definition (Table 6.2), "could appear progressive without actually applying any constraint on the UK's ability to develop weapons systems with greater and greater autonomy" (Article36, 2018, 1).

4 (Select Committee on Artificial Intelligence, 2018, 105). Nb. On 24 April 2019 Lord Browne tabled a question in the House of Lords asking what representations the Government had received from the MoD regarding the recommendation that the UK align its definition of AWS with that of international partners. The Government noted that it had received some representations but nevertheless pointed to the fact that "the UN Convention on Certain Conventional Weapons Group of Government Experts on Lethal Autonomous Weapons Systems is yet to achieve consensus on an internationally accepted definition or set of characteristics for autonomous weapons." (House of Lords, 2019)


Indeed, the high threshold established by the UK to identify AWS will, if unchanged, permit the UK an ever-increasing use of AWS insofar as these do not show "understanding [of] higher-level intent and direction". The problem in this case is conceptual: the restrictive definition of AWS does not enable the correct categorisation of systems which are autonomous but do not meet the high threshold posed by the UK definition. These systems either fall into a grey area between the two categories or are mistakenly lumped into the more familiar category of the 'automatic', missing the opportunity to consider and address the ethical and legal problems that they pose.

To avoid these limitations, it is important to define AWS by focusing on their characterising aspects – e.g. autonomy – and to describe them following the understanding that scientific and technological research has of them. In this way, the definition can offer a rigorous tool to identify AWS and avoid the inclusion of unsubstantiated characteristics of these systems. The goal of the definition, as the ICRC states, is that it

encompasses some existing weapon systems, [and so] enables real-world consideration of weapons technology to assess what may make certain existing weapon systems acceptable – legally and ethically – and which emerging technology developments may raise concerns under international humanitarian law (IHL) and under the principles of humanity and the dictates of the public conscience (International Committee of the Red Cross, 2016, 1).

This is, for example, the driving rationale of the ICRC definition (see Table 6.2), and the outcome of the US definition, which considers autonomy on a function-based spectrum vis-à-vis human engagement so that it can also encompass existing weapons systems (International Committee of the Red Cross, 2016, 1; US Department of Defense, 2012, 13–14).

While being inclusive, however, it is also important to maintain some level of specificity to avoid too generic an approach that may then generate confusion in identifying AWS. This is the risk linked to the NATO definition (see Table 6.2). It is true that the definition is not meant to focus specifically on AWS but on autonomous systems in general, yet it is too generic even for this purpose. For example, it refers to "desired goals", leaving unspecified whether these are political, organisational, strategic, or tactical goals, or the specific goals that a system may have or acquire. Similarly, it refers to "situational awareness", but it is unclear whether this is meant to be an understanding of the immediate context of deployment of the system or of the wider strategic scenario.

From the analysis of the definitions reported in Table 6.2, four characteristics can be extracted as recurring most often, namely: autonomy, adapting capabilities, human intervention and control, and purpose of use. While these characteristics point in the right direction when considering what AWS are – for example, they resonate with the definition of AI adopted in (reference removed for review purposes) as a form of autonomous, self-learning agency – the way in which they are described is, at times, conceptually misleading. The next three subsections analyse these characteristics to clarify their implications with respect to the ethical and legal debate on AWS.


2.1 Autonomy, Intervention, and Control

Autonomy is a central element of all the definitions of AWS. In some cases, it is assumed to mean the ability of a system to operate successfully without human intervention. The German definition, for example, mentions machines that "completely exclude" humans from the decision-making process. In other cases, autonomy is conflated with the lack of human control. This is the case of the French definition, for instance, which qualifies human intervention as

human supervision, meaning there is absolutely no link (communication or control) with the military chain of command (République Française, 2016, 1).

As we will see in Sect. 3.1, this assumption is misleading both conceptually and operationally. An artificial system can, in principle, be fully autonomous, insofar as it can operate independently of a human or of another artificial agent, and yet be deployed under some form of meaningful human control. The distinction between autonomy and control is important for three reasons.

First, conceptual clarity: it avoids considering automation and human control as mutually exclusive concepts. Automation makes human intervention unnecessary, but it does not make human control impossible. This is why the DoDD 3000.09 is correct in referring explicitly to 'human-supervised autonomous weapon systems'5 and in distinguishing them from 'semi-autonomous weapon systems', whose autonomy is circumscribed to "engagement related functions" but which depend on a human operator for target selection.

Distinguishing autonomy from control brings a second and a third advantage, as it future-proofs the debate on AWS. Many of the problems posed by AWS do not concern the desirable level of autonomy of these systems, but the desirable level of control over them. The decision about control is in many ways normative, insofar as it is defined not only by the technological affordances (i.e. how much autonomy a system can have) but also, and more importantly, by the decisions and tasks that should be delegated to machines without envisaging human control. Separating the two concepts enables a focus on normatively desirable forms of control, irrespective of the level of autonomy that these machines may acquire someday.

The third advantage of this distinction is that it pre-empts approaches that leverage the lack of existing examples of fully autonomous AWS to avoid discussing their regulation, as claimed, for example, by the Russian Federation:

Certainly, there are precedents of reaching international agreements that establish a preventive ban on prospective types of weapons. However, this can hardly be considered as an argument for taking preventive prohibitive or restrictive measures against LAWS being a by far more complex and wide class of weapons of which the current understanding of humankind is rather approximate (Russian Federation, 2017, 2).

5 Department of Defense (2012, 14).
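To make the distinction concrete, the following is a minimal sketch of our own, not a model of any fielded system or of the DoDD 3000.09 architecture; all names (SupervisedAutonomousSystem, override_channel, the 'abort' token) are hypothetical. The engagement loop proceeds without needing human input, which is the autonomy; a supervising human may nonetheless veto an engagement at any time, which is the control. The two vary independently, as argued above.

```python
# Illustrative sketch only: autonomy and human control as separate dimensions.
# The loop runs without human intervention (autonomy), yet a supervisor can
# veto engagements (human-on-the-loop control). All names are hypothetical.
import queue

class SupervisedAutonomousSystem:
    def __init__(self):
        self.override_channel = queue.Queue()  # channel for human vetoes

    def detect_targets(self, environment):
        # Stand-in for autonomous sensing and classification.
        return [obj for obj in environment if obj.get("hostile")]

    def step(self, environment):
        for target in self.detect_targets(environment):
            # Control: a pending human veto overrides the engagement...
            if not self.override_channel.empty() and self.override_channel.get() == "abort":
                return "engagement aborted by human supervisor"
            # ...autonomy: absent a veto, no human input is needed to proceed.
            return f"engaged {target['id']} without human intervention"
        return "no targets detected"

system = SupervisedAutonomousSystem()
env = [{"id": "t1", "hostile": True}]
print(system.step(env))               # engages autonomously
system.override_channel.put("abort")
print(system.step(env))               # the human veto is exercised
```

The point of the sketch is simply that removing the need for intervention (the autonomous loop) leaves the possibility of control (the veto channel) intact.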


2.2 Adapting Capabilities

Of the 12 definitions considered in this review, only the French and the Chinese definitions stress adapting capabilities; specifically, they mention learning capabilities of AWS as a key characteristic. The lack of focus on adapting capabilities in the definitions of AWS is problematic, as these are a key feature of the AI technologies which increasingly underpin AWS.

AWS can function without adapting capabilities. For example, they may rely on rule-based programming,6 which enables an autonomous reaction to environmental triggers but does not allow for planning different behaviours when the environment changes. One can imagine a sensor detecting an incoming object and the algorithm triggering a response of the system, e.g. firing to destroy the object. However, systems based on rule-based algorithms are increasingly being replaced by AI-based systems. Military institutions are investing in AI for a wide range of applications; for example, significant efforts are already underway to harness developments in image, facial and behaviour recognition using AI and machine learning techniques for intelligence gathering and "automatic target recognition" to identify people, objects or patterns.7

Disregarding adapting capabilities in the definitions of AWS leads to disregarding a key characteristic of these systems and hinders the debate on their ethical and legal implications. Crucially, these capabilities pose questions with respect to the predictability (Taddeo et al., 2022), and hence the trustworthiness (Taddeo, 2017), of these systems (reference removed for review purposes; reference removed for review purposes), with respect to the attribution of responsibilities for the actions that these systems perform, and with respect to the implementation of meaningful forms of control.

The French definition stresses that learning capabilities would be necessary to adapt to the complexity of operational scenarios, which cannot all be foreseen and thus 'pre-programmed' in the system. It also stresses that this means that

the delivery system would be capable of selecting a target independently from the criteria that have been predefined during the programming phase, in full compliance with IHL requirements. With our current understanding of future technological capacities, a LAWS would therefore be unpredictable (emphasis added, République Française, 2016, 2).

A similar point is also highlighted in (International Committee of the Red Cross (ICRC), 2018):

6 Rule-based systems are artificial systems showing autonomous responses to an input; however, these systems operate following predetermined rules and are not able to change these rules, and hence their behaviour, to adapt to the environment in which they act.
7 See, for example, https://www.sbir.gov/sbirsearch/detail/1413823; https://www.icrc.org/en/document/artificial-intelligence-and-machine-learning-armed-conflict-human-centred-approach; and https://blogs.icrc.org/law-and-policy/2019/03/19/expert-views-frontiers-artificial-intelligence-conflict/


the application of AI and machine learning to targeting functions raises fundamental questions of inherent unpredictability (p. 2).

Learning capabilities, and the related unpredictability of outcomes, also pose problems with respect to Article 36 of Additional Protocol I to the Geneva Conventions on weapons reviews. As reported in (UNIDIR, 2017):

From a technical perspective, any system that continues to learn while deployed is constantly changing. It is not the same system it was when deployed or verified for deployment. Some have raised questions about the legality of adaptive systems, particularly in regards to States' Article 36 obligations (p. 10).

This is crucial, as remarked by the ICRC:

The ability to carry out [an Article 36] review entails fully understanding the weapon's capabilities and foreseeing its effects, notably through testing. Yet foreseeing such effects may become increasingly difficult if autonomous weapon systems were to become more complex or to be given more freedom of action in their operations, and therefore become less predictable (as reported in UNIDIR, 2017, 26).

For both ethical and legal reasons, then, the focus on the adapting capabilities of AWS is essential. It is the nature of the adapting process that raises both significant opportunities and challenges, and that sets AI-enabled systems apart from highly automated rule-based systems. Adapting capabilities qualify the latest and future generations of AWS. Focusing on them allows for further clarification of the distinction between automatic and autonomous systems (more on this in Sect. 3), and for identifying the source of a number of key ethical and legal implications of AWS. This is why it is important that definitions of AWS mention these capabilities expressly, and it is problematic that even the two most comprehensive definitions of AWS – the US and the ICRC ones – fail to grasp this point, missing the opportunity to cast light on a key element of these systems.
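The contrast between rule-based and adapting systems drawn in this section can be restated as a short sketch. This is a toy illustration of our own devising, with hypothetical names and thresholds, not a description of any actual system: the rule-based responder applies the same fixed trigger forever, while the adapting agent revises its decision rule from feedback, so the deployed system is no longer the system that was verified. That drift is the crux of the Article 36 concern noted above.

```python
# Toy contrast (our own illustration): a fixed rule-based responder versus
# an agent that adapts its decision rule from environmental feedback.

def rule_based_agent(signal: float) -> bool:
    # Predetermined rule: respond iff the sensor signal exceeds a fixed
    # trigger. The rule never changes, so behaviour is stable at review time.
    return signal > 0.8

class AdaptiveAgent:
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold  # internal state that feedback can alter

    def act(self, signal: float) -> bool:
        return signal > self.threshold

    def learn(self, was_false_alarm: bool) -> None:
        # Adapting capability: feedback rewrites the decision rule itself,
        # so post-deployment behaviour diverges from the verified behaviour.
        if was_false_alarm:
            self.threshold = min(1.0, self.threshold + 0.05)
        else:
            self.threshold = max(0.0, self.threshold - 0.05)

agent = AdaptiveAgent()
print(rule_based_agent(0.78), agent.act(0.78))  # False False: same rule so far
agent.learn(was_false_alarm=False)              # feedback lowers the threshold
print(rule_based_agent(0.78), agent.act(0.78))  # False True: behaviours diverge
```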

2.3 Purpose of Deployment

Most of the definitions qualify the purpose of deployment implicitly, by reference to 'weapons' and by the fact that AWS are deployed in kinetic contexts. These two elements indicate some form of destructive (whether anti-material or lethal) use of these systems. However, it is important to understand the range of possible uses with greater precision, for example by considering the specific tasks that AWS may undertake within the context of kinetic operations.

Of the definitions reported in Table 6.2, four (Canada, Israel, Germany, and the UK) do not explicitly mention any specific purpose of deployment. The kinetic outcome of the use of AWS is somehow assumed in these cases, leaving undefined, for example, whether AWS will be used for deliberate or dynamic targeting. Of the other eight definitions, one (NATO) does not mention any specific purpose (it should be stressed, however, that the NATO definition is of autonomous systems in general and


not of AWS); the remaining definitions refer to the purposes of use of AWS as deploying lethal force (China and France) or, more specifically, as selecting and engaging targets (whether non-human or human) to be neutralised, damaged or destroyed (ICRC, Norway, Switzerland, The Netherlands, US).

All the definitions leave unaddressed the specific steps of the tasks that are delegated to machines. These steps, however, are key when considering AWS. Consider, for example, the criticisms posed by Roff (2014) to the US definition: Roff stresses that the meaning of 'select' in 'select and engage' is unclear, insofar as it is not clear whether this also includes the detection of targets.8 As she clarifies, if detection is not included, then we may assume that it is carried out by a human, thereby obviating important ethical (and technical) questions. Roff's criticism highlights the complexity of these tasks and of the processes underpinning the decision to deploy force. Consider, for example, the steps underpinning targeting decisions as described in (Ekelhof & Persi Paoli, 2021). They outline a complex process which, when considering AWS, extends across the decision and command chain. The process includes tasks and decisions spanning the tactical, operational, strategic and political levels, which are often interlinked.

The complexity of the process requires a more specific approach when considering the tasks performed by AWS. This is achieved in two ways: by specifying explicitly the purposes of deployment – at a high Level of Abstraction (LoA) – indicating the destructive, whether lethal or not, goal for using these systems; and – at a lower LoA – by specifying which steps in the process of exerting force may be within the remit of the AWS and under which level of human control AWS may operate. The outcome of the ethical and legal analyses of AWS depends on these specifications.

8 Ariel Conn (2016).

3 A Definition of AWS

We offer a value-neutral definition of AWS. In doing so we have a twin goal: (i) defining the key characteristics that permit the identification of AWS; and (ii) specifying these characteristics so as to clarify their relations – e.g. automation vs control – and their differences – e.g. automatic vs autonomous. To do so, we consider autonomy, adapting capabilities, and control as characteristics that can each be mapped on a continuum: AWS can have each of these characteristics to a greater or lesser degree. We are also inclusive with respect to the set of possible purposes of deployment, with the aim of clarifying what the range may be. Identifying the combination of the different levels and purposes, if any, that meets ethical and legal requirements is the task of ethical analyses and of policies and laws; this is why we leave it to the next step of our work. With this approach in mind, we define AWS as follows:

Definition: an artificial agent which, at the very minimum, is able to change its own internal states to achieve a given goal, or set of goals, within its dynamic operating environment and



without the direct intervention of another agent, and which may also be endowed with some abilities for changing its own transition rules without the intervention of another agent, and which is deployed with the purpose of exerting kinetic force against a physical entity (whether an object or a human being) and to this end is able to identify, select or attack the target without the intervention of another agent, is an AWS. Once deployed, AWS can be operated with or without some form of human control (in, on, or out of the loop). A lethal AWS is a specific subset of AWS with the goal of exerting kinetic force against human beings.

The next subsections will unpack this definition by focusing on the concepts of autonomy, adapting capabilities, and control. The purposes of deployment are less conceptually problematic and thus we will not delve into them. It is important, however, to remark here that the purposes of deployment have been identified as those directly related to the goal to achieve, i.e. exerting force (reference removed for review purposes). Selecting targets and engaging them (whether deliberately or dynamically) are directly linked to the purpose of deploying force. Hence, a system whose selecting and attacking functions are autonomous, but which is directed by another agent(s) for all its other purposes of use, e.g. mobility, would still be considered an AWS.
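Before unpacking the concepts, the structure of the definition can be rendered schematically. What follows is our own illustrative encoding, not part of the definition itself, and every name in it is hypothetical: autonomy and adaptability are recorded as continua, control as a mode of deployment rather than a property of the agent, and LAWS are singled out purely by the purpose of use.

```python
# Schematic rendering (ours) of the dimensions used in the definition above.
# Autonomy, adaptability, and human control vary independently.
from dataclasses import dataclass
from enum import Enum

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = "in"       # human input required for the system to act
    HUMAN_ON_THE_LOOP = "on"       # human supervises and may override
    HUMAN_OUT_OF_THE_LOOP = "out"  # no human supervision once deployed

@dataclass
class WeaponSystemProfile:
    autonomy: float        # 0..1: degree of independence from other agents
    adaptability: float    # 0..1: ability to change its own transition rules
    control: ControlMode   # mode of deployment, not a defining characteristic
    targets_humans: bool   # purpose of use: kinetic force against persons?

    @property
    def is_laws(self) -> bool:
        # LAWS as a subset of AWS picked out by purpose of use alone.
        return self.targets_humans

profile = WeaponSystemProfile(0.7, 0.2, ControlMode.HUMAN_ON_THE_LOOP, False)
print(profile.is_laws)  # False: an AWS deployed for anti-material purposes
```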

3.1 Autonomous, Self-Learning, Weapons Systems

A key question underpinning the definition of AWS is the distinction among 'automatic', 'automated', and 'autonomous' systems. Especially the distinction between 'automated' and 'autonomous' can prove difficult when considered from an ethical or a legal LoA. An ICRC report, for example, stresses that

There is no clear technical distinction between automated and autonomous systems, nor is there universal agreement on the meaning of these terms [...] (International Committee of the Red Cross, 2019a, 7).

In a similar vein, the joint concept note 1/18 on 'Human-Machine Teaming' published by the UK Ministry of Defence in 2018 starts by remarking that

there is no clear, definable and universally agreed boundary between what constitutes automation and what is autonomous [...] because the assessment of autonomy and the term's use is subjective and contextual (Ministry of Defence, 2018b, 57).

While one may agree that the distinction between automation and autonomy is blurred, this is not because the assessment of the autonomy of artificial agents is subjective or context-dependent. Within the field of computer science, and particularly of Agent Theory (Wooldridge & Jennings, 1995; Castelfranchi & Falcone, 2003), there is quite a clear understanding of the differences between these concepts.

Let us consider 'automatic' agents first. These are agents whose actions are predetermined and will not change unless acted upon by pre-selected triggers and/or human intervention. Automatic agents are not teleological: they do not pursue a goal, but simply react to an external trigger. In this sense, they are 'causal entities'


(Castelfranchi & Falcone, 2003). A landmine falls squarely into this category, for its action is causally determined by a specific trigger, such as someone stepping on it. AWS do not belong to this category insofar as their behaviour is not simply reactive to (caused by) the environment. AWS execute tasks to achieve goals (teleological agents); they can adjust their actions on the basis of the feedback that they receive from the environment (automated artificial agents); they may also be able to define plans to achieve their goals (heuristic artificial agents); and they may be able to refine their behaviour in response to changes in the environment (adapting artificial agents).

At this point, we can consider AWS as systems that, at the very least, are automated, teleological artificial agents; but we can be more specific and go a step further. For the purposes of the definition, it is important to consider what the minimum requirements are for an artificial agent to be autonomous. To do so we refer to the definitions of autonomous artificial agents provided by Castelfranchi and Falcone (2003) and by Floridi and Sanders (2004). The two definitions are given at different LoAs; the reader may consider one (Floridi's and Sanders') a specification of the other (Castelfranchi's and Falcone's). According to Castelfranchi and Falcone, autonomous agents enjoy the following properties:

[…] their behaviour is teleonomic: it tends to certain specific results due to internal constraints or representations, produced by design, evolution, or learning, […];
[…] they do not simply receive an input – not simply a force (energy) but information – but they (actively) "perceive" and interpret their environment and the effects of their actions;
[…] they orient themself towards the input; in other words, they define and select the environmental stimuli;
[…] they have "internal states" with their own exogenous and endogenous evolution principles, and their behaviour also depends on such internal states (Castelfranchi & Falcone, 2003, 105).

Internal states of an artificial agent can be described as the configuration of the agent (for example the layers, the nodes, the values and the weights of a neural network at a specific moment in time) when it is performing a given operation. Internal states are key in the definition of autonomy insofar as the transition from state0 to state1 corresponds to a change in the behaviour of the system. How the transition is determined defines the difference between automated and autonomous systems.

Indeed, internal states are also key to the definition offered by Floridi and Sanders. Accordingly, an autonomous artificial agent enjoys three characteristics:

Interactivity means that the agent and its environment (can) act upon each other. Typical examples include input or output of a value, or simultaneous engagement of an action by both agent and patient – for example gravitational force between bodies.


Autonomy means that the agent is able to change state without direct response to interaction: it can perform internal transitions to change its state. […]

Adaptability means that the agent's interactions (can) change the transition rules by which it changes state. This property ensures that an agent might be viewed, at the given LoA, as learning its own mode of operation in a way which depends critically on its experience […] (Floridi & Sanders, 2004, 357).

The ability of an artificial agent to change its internal states without the direct intervention of another agent marks (binarily) the line between automatic/automated and autonomous. A rule-based artificial system and a learning one both qualify as autonomous following this criterion. As mentioned in Sect. 2.2, adaptability is becoming an increasingly common characteristic of AWS. It is the characteristic that underpins their potential for dealing with complex, fast-paced scenarios, and the one that leads to unpredictability, lack of transparency and of control, and responsibility gaps related to the use of these agents (for an extensive analysis of the predictability of AI systems see Taddeo et al., 2022). Thus, it is important to include adaptability capabilities in the definition of AWS and to offer a clear – to some extent technical – specification of these capabilities, to help avoid anthropomorphising these agents and to set a clear, binary threshold below which one can say that an agent has no adaptability capabilities. This is why, in the definition we propose in this article, we refer to an artificial agent endowed with some abilities for changing its transition rules to perform successfully in a changing environment.
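The binary threshold just described can also be restated as a short sketch. What follows is our own illustrative reading of the two definitions, with hypothetical names and transition rules, not code from Castelfranchi and Falcone or from Floridi and Sanders: a state change without external input marks autonomy, while rewriting the transition rule itself marks adaptability.

```python
# Our illustrative reading of interactivity, autonomy, and adaptability.

class AutonomousAgent:
    def __init__(self):
        self.state = 0
        # Transition rule: maps (state, percept) to the next internal state.
        self.transition_rule = lambda state, percept: state + percept

    def perceive_and_act(self, percept: int) -> int:
        # Interactivity: environmental input drives a change of state.
        self.state = self.transition_rule(self.state, percept)
        return self.state

    def internal_transition(self) -> int:
        # Autonomy: the agent changes state with no external input at all.
        self.state = self.transition_rule(self.state, 1)
        return self.state

class AdaptiveAgent(AutonomousAgent):
    def adapt(self) -> None:
        # Adaptability: experience rewrites the transition rule itself, so
        # the agent 'learns its own mode of operation'.
        self.transition_rule = lambda state, percept: state + 2 * percept

agent = AdaptiveAgent()
agent.perceive_and_act(3)  # state 0 -> 3 under the original rule
agent.adapt()              # the rule governing transitions has now changed
agent.perceive_and_act(3)  # the same percept now yields a different state
print(agent.state)         # 9
```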

3.2 Human Control

The definition provided in Sect. 3 refers to human control as a mode of deploying AWS and not as one of their defining characteristics. This is because the autonomy of AWS is not defined with respect to human control but with respect to the intervention of another agent on the AWS. There are different forms of control; for example, Amoroso and Tamburrini (2020) identify three:

First, the obligation to comply with IHL entails that human control must play the role of a fail-safe actor, contributing to prevent a malfunctioning of the weapon from resulting in a direct attack against the civilian population or in excessive collateral damages. Second, in order to avoid accountability gaps, human control is required to function as accountability attractor, i.e., to secure the legal conditions for responsibility ascription in case a weapon follows a course of action that is in breach of international law. Third and finally, from the principle of human dignity respect, it follows that human control should operate as a moral agency enactor, by ensuring that decisions affecting the life, physical integrity, and property of people (including combatants) involved in armed conflicts are not taken by non-moral artificial agents (p. 189).

One may disagree with this taxonomy or consider control better defined at a different LoA, for example focusing only on the technical specifications of AWS. However, the relevant literature converges on considering control of AWS as dynamic,


multidimensional and situation-dependent, and as something that can be exercised by focusing on different aspects of the human-machine team. For example, the Stockholm International Peace Research Institute and the ICRC identify three main aspects of human control of weapon systems: the weapon system's parameters of use, the environment, and human-machine interaction (Boulanin et al., 2020). More aspects can also be considered. Boardman and Butcher (2019) suggest that control should not just be meaningful but 'appropriate', insofar as it should be exercised in such a way as to ensure that the human involvement in the decision-making process remains significant without impairing system performance.

The discussion about what constitutes meaningful human control of AWS, and whether it can be exerted in an appropriate way, does not fall within the scope of this article, as our goal here is to identify the key characteristics of AWS rather than the normative conditions for their design, development and deployment. However, to the extent to which our analysis sheds light on these characteristics and their relations, it is important to stress that human control is not antithetical to the autonomy of AWS and can be exerted over AWS at different levels, from the political and strategic decisions to deploy AWS to the kind of tasks delegated to them. The question is which form of control is ethically desirable and should, ideally, be considered by decision- and policy-makers in designing the governance of AWS.

4 Conclusion

The debate on AWS is shaped by strategic, political, and ethical considerations. Competing interests and values contribute to polarising the debate, while politically loaded definitions of AWS undermine efforts to identify legitimate uses and to define relevant regulations. These efforts are hindered even further when conceptual confusion is added to this picture. In a famous article laying down the foundations of computer ethics as an area of research, Moor (1985) wrote:

A difficulty is that along with a policy vacuum there is often a conceptual vacuum. Although a problem in computer ethics may seem clear initially, a little reflection reveals a conceptual muddle. What is needed in such cases is an analysis which provides a coherent conceptual framework within which to formulate a policy for action (p. 266).

In this article, we do not provide an ethical framework for the regulation of AWS; this is the next step of our work. Here, we aim to overcome the conceptual muddle around AWS, taking this as a necessary, preliminary step. We do so in two ways: through the comparative analysis and through the value-neutral definition. The comparative analysis of the official definitions helps in identifying key points of conceptual confusion, e.g. the distinction between automatic and autonomous or that between autonomy and control. It also highlights a serious gap in these definitions with respect to the reference to the adapting capabilities of these systems.

The value-neutral definition is not informed by policy or strategic aims, nor does it include normative aspects. It has been designed considering key technical


characteristics of these systems and with the sole purpose of enabling the identification of AWS and of distinguishing these systems from other weapon systems, like automatic ones. Irrespective of the next steps in our research, we believe that having a value-neutral definition of AWS will help academic and policy debates on this topic, as it offers a shared ground on which different views can be confronted.

Funding Information  Mariarosaria Taddeo and Alexander Blanchard have been funded by the Dstl Ethics Fellowship held at the Alan Turing Institute. The research underpinning this work was funded by the UK Defence Chief Scientific Advisor's Science and Technology Portfolio, through the Dstl Autonomy Programme, grant number R-DST-TFS/D026. This paper is an overview of UK Ministry of Defence (MOD) sponsored research and is released for informational purposes only. The contents of this paper should not be interpreted as representing the views of the UK MOD, nor should it be assumed that they reflect any current or future UK MOD policy. The information contained in this paper cannot supersede any statutory or contractual requirements or liabilities and is offered without prejudice or commitment.


Chapter 7

English School on Cyberspace: Examining the European Digital Sovereignty as an International Society and Standard of Civilization

Abid A. Adonis
Oxford Internet Institute, University of Oxford, Oxford, UK

Abstract  This research explores the potential of the English School of International Relations (ES) for theorizing the intersection between technology and International Relations. It examines the case of the European Digital Sovereignty, a geopolitical vision of the European Union, by mobilizing two important analytical frameworks introduced by ES theorists: International Society and Standard of Civilization. The analysis highlights how ES frameworks help capture the constitutive nature and expansive character of the European Digital Sovereignty. Yet, this research also shows the limitations of ES frameworks in understanding the case. This, in turn, opens further research avenues for ES, particularly in incorporating non-state actors and non-Great-Power actors into its theorization.

Keywords  Digital sovereignty · European Union · International relations · English School of International Relations · International society · Standard of civilization

1 Introduction

For decades, the role of the English School (ES) has been debated extensively in International Relations (IR) in relation to other theories such as Realism, Liberalism, and Constructivism. Despite its considerable historical contributions, the English School has never gained traction equal to other mainstream theories in International Relations. As Barry Buzan (2014) claims, ES retains a strong but not dominant position among other theories in IR. ES engages in various intellectual debates with other theories, and these debates shape the development of its own theories. ES' engagement with realism largely discusses anarchy, international order, and the balance of power (Little, 1995). ES' conversation with liberalism spurs the discussion about institutions and normative dimensions in international politics (Cronin, 2003). Despite sharing some underpinning assumptions on ideational aspects and social construction, ES challenges constructivists' scepticism of state actors and material powers (Reus-Smit, 2009). ES' discussion with post-structuralism and critical theories leads to a better understanding of eurocentrism and the legacy of post-colonialism in International Relations (Seth, 2011). ES has also been criticized for a lack of clarity in methodology, especially with regard to investigating causality and laying out the rules of evidence (Finnemore, 2001; Spegele, 2005). Furthermore, the limited number of ES scholars contributes to the lack of ES' up-to-date engagement with contemporary international relations issues compared to other mainstream theories.

One of the most important topics in international relations nowadays is cyberspace. Digital transformation in the last 20 years has brought various effects to international politics, in which information technology plays an increasingly pivotal role. It affects the role of state and non-state actors, shapes a new global agenda, and, to some extent, redefines the interaction between actors (Susskind, 2020). On an empirical level, recent global events have been affected by cyberspace on different scales, as demonstrated in the Arab Spring, EuroMaidan, Brexit, the rise of right-wing populism in the West, and, lately, the decoupling between the United States and China. These dramatic events drive intellectual discussions among IR theorists as they intend to make sense of the empirical dynamics caused by cyberspace with their respective theorizations.

Realism and liberalism have been at the forefront of this theoretical race on cyberspace, given that the predominant scholarly writings are largely drawn from these two camps. Realism (and its derivatives) puts enormous effort into understanding and formulating frameworks, particularly in the discussions of cyber warfare, cyber conflict, and cyber security (Clarke & Knake, 2012). Liberalism also lays out substantial frameworks on cyberspace in International Relations, ranging from the digital economy and internet governance to international law on cyberspace (Beaumier et al., 2020). Constructivism contributes to the discussion by examining the role of identity in digital space, the flow of ideas on the internet, and narrative framing in cyberspace (Erikson & Giacomelo, 2014). More recently, critical theory has also joined the debate by discussing surveillance capitalism and the marginalizing effect of the internet (Zuboff, 2019). Such development leaves ES far behind in the discussion of cyberspace, as there are only three peer-reviewed academic writings by ES scholars on the topic: a three-page paper on cryptocurrency (Cacciatori, 2020) and two articles on cyber diplomacy (Barrinha & Renard, 2017; Hamonangan & Assegaff, 2020). Compared to other theories, the scarcity of ES work on cyberspace is arguably concerning given the topic's myriad potential not only in the present but also in the future of International Relations. This raises a criticism, as well as a challenge, for the English School: to what extent can it maintain its relevance among other IR theories in contemporary world politics?


This lacuna presents an opportunity for the English School to provide novel frameworks on cyberspace on its own terms. Given its focus on a systemic level, ES has plenty in its conceptual toolbox to contribute further, as cyberspace has had enormous impacts on international politics globally in recent years. Over the past few years, one of the major talking points on cyberspace and IR has been the idea of Digital Sovereignty. Digital Sovereignty is generally understood as the ability of actors to control and govern digital information, infrastructure, and networks in their favor (Adonis, 2019). This idea has gained traction in the past few years given several empirical junctures. The China-Russia alliance proactively promotes and enforces state-centered digital sovereignty, in which state actors have profound authority over their digital space for the sake of security and economic concerns (Budnitsky & Jia, 2018). The stellar rise of U.S.-based Big Tech firms such as Google, Amazon, Facebook, Apple, and Microsoft (GAFAM) also challenges the traditional authority of state actors in managing and controlling information technology (Gueham, 2017). The European Union responds to the dichotomy of state-centered and market-centered digital sovereignty by proposing its own version, which claims to be a democratic, fair, and value-based kind of digital sovereignty (Celeste, 2021).

The dynamics of digital sovereignty are also followed by IR scholars, who give their verdicts and analyses on how this idea unfolds. Realist scholars support the notion of a state-centered type of Digital Sovereignty by insisting on the importance of state actors and on the security concerns caused by digital transformation (Diesen, 2021). Liberal scholars are generally optimistic about the role of non-state actors and the market in shaping digital sovereignty (Deibert & Crete-Nishihata, 2012). Liberal scholars also address the issue of international law on cyberspace to better regulate the internet world (Deibert & Crete-Nishihata, 2012). Critical theorists engage with the discussion on how the idea of digital sovereignty should empower marginalized actors to emancipate their causes in international politics against imperialism and colonial legacies (Couture & Toupin, 2019). The English School, however, is absent from the discussion. This absence presents an opportunity not only to investigate Digital Sovereignty from a new perspective but also to examine the relevance of the English School in the discussion of contemporary international relations, particularly on the topic of cyberspace. Thus, this paper aims to answer the research question: "To what extent can the English School's conceptual toolbox examine the case of the European Digital Sovereignty?"

This research has several points of significance beyond filling the lacuna of an English School explanation of Digital Sovereignty and seeking the theory's relevance to current international relations phenomena. First, this research potentially discovers the strengths and limitations of the English School's conceptual toolbox and foundational assumptions in the era of cyberspace. This is important to answer several critiques of the English School, as some scholars contend that ES is outdated and "trapped in the context of Cold War" (Mearsheimer, 2005). By discovering the strengths and limitations of the conceptual toolbox, this research will potentially reveal what scholars can address in the future of English School scholarship. Second, the topic of digital sovereignty is in line with the classical nature of ES scholarship, which tends to deal with systemic forces and the macro level.


Digital sovereignty is argued to have a global impact on the present and future of cyberspace, as it is increasingly incorporated into various national agendas by state actors, especially great powers (Adonis, 2019). Digital sovereignty is also not an entirely new topic for ES scholars, as historically they have put heavy emphasis on the idea, practice, and development of sovereignty. Bringing the "digital" into the ES sovereignty discussion will also refresh the debate about sovereignty itself. Third, the case of the European Union's digital sovereignty is also interesting to examine, as the EU is one of the leading actors in promoting digital sovereignty, with a significant emphasis on ideational and normative dimensions as well as realpolitik considerations. This fits with ES' foundational assumptions, which also underscore ideational and normative dimensions in international politics (Buzan, 2014). The case of European Digital Sovereignty also fits with the recent agenda of ES research in examining potential Eurocentrism, both in its own theorization and on an empirical level (Buzan, 2014). These three rationales justify the necessity of conducting this research.

In this paper, I argue that the emergence of European Digital Sovereignty can be well captured through two conceptual frameworks of the English School: international society and standard of civilization. By mobilizing qualitative ideal-type and interpretive methods, I further argue that the European Union's intention to construct and promote the European Digital Sovereignty creates a new type of international society on cyberspace, in which it reflexively institutionalizes new behaviors and practices inside and outside cyberspace. Consequently, this brings a new idealization of cyberspace by creating a new set of norms that is believed to be morally superior. This normative consequence entails a new standard of civilization in cyberspace that is characterized by an expansionist nature. However, the English School conceptual toolbox still finds its limitations in understanding the nature of identity and unit in cyberspace. The English School also finds its limitations in explaining further the role of non-state actors on cyberspace in the context of digital sovereignty.

This research is structured into four sections after this introductory section. The first is the methodology, which explains the choice of methods and the underpinning rationale. The second presents the conceptual frameworks; in that section, I will explain and justify the use of two conceptual frameworks: international society and standard of civilization. The third section will analyze the case of European Digital Sovereignty and how it can (or cannot) be examined through the concepts of international society and standard of civilization. The last section will be the conclusion and reflection on the English School and cyberspace in International Relations.

2 Research Methods

The English School of International Relations has been criticized for its methodological quietism, as traditionally ES disdains discussions of methodology. This stems from Hedley Bull's early work, which was strongly influenced by English empiricism (1977). This was also pertinent in the works of other ES theorists like Charles Manning (1962), Adam Watson (1984), and Martin Wight (1991), who methodologically pay more attention to historical context and to subjective and intersubjective understanding. Those classical ES theorists also share the consensus that positivism is insufficient for capturing social and political phenomena. Perceptions, deductions, and calculus about human behaviour, according to the classical ES tradition, would fail to discern the complex reality of human beings and their interactions (Navari, 2009). This draws criticism particularly from American IR scholars, who see a clear-cut and causal methodology as the condicio sine qua non for understanding any social phenomenon and being considered a science (Navari, 2009).

Such criticism has prompted a positive reaction from ES scholars, who have made methodology a research agenda for ES scholarship in the past decades (Navari, 2010). While defending methodological pluralism, ES scholars have formulated several alternatives for approaching political phenomena with methodological rigor. Wheeler (2000) and Neumann (2001) demonstrate the use of legal and normative inquiry, which is useful for examining state practices. Tristen Naylor (2018) undertakes an eclectic approach consisting of elite interviews, ethnography, and archival studies to understand the sociological dimension of international grouping membership. Cornelia Navari's (2009) book on ES methodology also extensively answers the recurring question of methodology in ES theories. Such efforts enable future ES research to engage with methodology rather than treat it with antipathy.

In that regard, this research utilizes an ideal-type method to examine European Digital Sovereignty. The interpretive approach is a regular feature in English School writings, as it focuses on understanding rather than explaining the causality of events (Lamont, 2015). It underlines historical reading, socio-political context, and the languages used by actors in referring to the object, in this case the European Digital Sovereignty. It also requires this research to highlight narratives, social meaning, and the underlying political construction of actors' motives and practices surrounding the object (Lamont, 2015). The interpretive approach will be interlinked with an ideal-type method as proposed by Edward Keene. The ideal-type method is influenced by Max Weber's work on early Protestant ethics and capitalism, which, according to Edward Keene (2009), utilises evaluative interpretation between reality and ideas. It is rooted in the assumption that social (and political) construction is shaped by a value orientation (Keene, 2009). One's action can be captured by understanding the idealization and what the reality and material context constrain (or facilitate) one to do. The pinpointing between the idealization and the reality-material context indicates the equal weight of the normative dimension and power-based considerations. Keene argues that:

An ideal type is an attempt to capture the significance of an aspect of reality for us. An ideal type is therefore not particularly suitable for use as 'a schema under which a real situation or action is to be subsumed as one instance'; on the contrary, it is a 'limiting concept with which the real situation or action is compared and surveyed for the explication of certain of its significant components'. (Keene, 2009: 107)


This is in line with this paper's research question, which treats international society and standard of civilization as ideal types. Treating these two concepts as ideal types means that a set of assumptions animates actors in promoting European Digital Sovereignty: to manage the balance of power, to defend social order, and to realize moral order. In a practical sense, the conceptualizations of international society and standard of civilization as delineated by ES scholars will be understood in this research as criteria of ideals. The reality of the European Digital Sovereignty is then assessed against those criteria, which serve as a conceptual corridor. With this method, I hope to answer the question of to what extent the English School can understand the case of European Digital Sovereignty. Thus, it is important to depart from an understanding of international society and standard of civilization as conceptual frameworks. Both are explained in the next section.

3 Conceptual Frameworks: International Society and Standard of Civilization

In this section, I explore and explain the conceptual frameworks used in this paper. As previously mentioned, two ES concepts are utilized to examine the case of European Digital Sovereignty: international society and standard of civilization. These are arguably two of the most important conceptual frameworks introduced by ES scholars, given their significant influence in shaping the theoretical foundations of the English School. In general, there are three primary reasons why these two concepts are mobilized in this research.

First, in line with the objective of this research, understanding the extent of ES' conceptual capability in making sense of cyberspace requires delving into traditional ES concepts and underpinning assumptions. Choosing international society as a conceptual point of departure is reasonable, since this concept is a centrepiece of ES intellectual development and contains ES' main assumptions about international relations. Investigating an empirical case within the framework of international society thus enables an inquiry into English School theory in general. Relying on the concept of international society as the only gateway for examining ES theory as a whole in relation to cyberspace, and treating the result as an early conclusion, will certainly not be sufficient. But it is potentially sufficient to provide a general overview and a research avenue to build upon regarding the relation between ES and cyberspace.

Second, the two concepts are related to each other and potentially complementary in giving an overview of the empirical case. Starting from international society opens the possibility of investigating the nature of a phenomenon by incorporating its historical context, normative dimension, sociological interaction, and institutional aspect, especially within specific actors/clubs bound by common interests, values, practices, and identities. This will be complemented by the relational aspect, particularly by paying attention to the potential expansion of an international society after it emerges and is enforced. This expansion can be framed with the concept of standard of civilization, which provides analytical tools for further investigation.

Third, English School theory holds that sovereignty is the prime organizing principle constituting inter-state relations (Bull, 1977). ES scholars situate the concept of sovereignty as one of the constitutive variables in developing the frameworks of international society and standard of civilization. ES scholars like Charles Manning (1962), Hedley Bull (1977), Martin Wight (1977), and Andrew Linklater (2011) consistently treat sovereignty not only as a taken-for-granted reality and a historical product to be examined, but also as a way to understand further how international society is shaped, expanded, and potentially developed. In fact, the discussion about sovereignty pushes ES forward by creating a debate between the solidarist and pluralist approaches (Buzan, 2014): the former interprets sovereignty as a responsibility with an ethical-action drive, while the latter believes in the equality of sovereignty (Buzan, 2014). The discussion of digital sovereignty follows the trajectory of ES scholars in continuously redefining and refining these concepts as well as enriching the current debate. To elaborate further on the strengths of international society and the standard of civilization, I discuss the two concepts in the following subsections.

3.1 International Society

International society is arguably the most important concept developed by ES scholars, in parallel with two other concepts: international system and world society. The definition of international society varies due to decades of debate within ES scholarship. However, most ES scholars refer to two important conceptualizations of international society, by Charles Manning and Hedley Bull. Charles Manning was the first to explicate international society under the framework of ES. Manning (1962) defines international society as a socially constructed social reality assumed by sovereign states. In this constructed social reality, Manning (1962) argues, sovereign states interact much like human beings and are constrained by international law. Manning (1962) highlights that international law creates the rules of the club to be followed by its members. Hence, to be part of an international society and be recognized as a sovereign state by other members, a political entity must follow the rules of the club in order to be potentially admitted. In a similar vein, Wight (1977) defines international society as "a social contract among societies themselves each constituted by their own social contract". Manning and Wight both rely on domestic analogies to conceive the definition of international society. This domestic analogy is disputed by Hedley Bull (1977), who believes that international society should be understood together with the realist tenets of balance of power and structure in international relations. Bull also widens the attributes of international society by including common interests and values in the definition, as follows:

A society of states (or international society) exists when a group of states, conscious of certain common interests and common values, form a society in the sense that they conceive themselves to be bound by a common set of rules in their relations with one another, and share in the working of common institutions. (Bull, 1977: 13)

For Bull, international law is not the main attribute that defines international society. Rather, international law is one of the working institutions through which international society operates, together with other institutions such as diplomacy, war, and sovereignty. However, both Bull and Manning agree that sovereign states are the primary actors in international society. This understanding develops the body of literature upon which recent ES scholars build.

These early definitions of international society from Hedley Bull, Charles Manning, and Martin Wight have been further developed by recent scholars, whether by refining the concepts, widening their scope, and/or incorporating recent developments in IR theories. Buzan (2004) develops the concept of international society by adding the role of institutionalization. He mentions that international society "is about the institutionalization of shared interest and identity among states, and puts the creation and maintenance of shared norms, rules, and institutions at the centre of the IR theory" (Buzan, 2004: 7). Christian Reus-Smit (2009) and Timothy Dunne (1995) make instrumental contributions interlinking ES and constructivism by demonstrating the importance of the intersubjective dimension in international society. The questions of social construction and constitutional structures are addressed to international society's underpinning assumptions, such as sovereignty, legitimacy, and norms. Understanding international society, then, requires a deeper examination of the social construction and constitutional structures of sovereignty. Suganami (2007) also offers a conceptual novelty by explaining the practice of sovereignty in the form of locating ultimate authority. Institutionalization, intersubjectivity, and practice thus become important variables for interrogating an international society.

The above explanation also indicates a strong connection between international society and sovereignty. Apart from constituting an integral variable for understanding international society, sovereignty confers legitimacy on a political entity to be equally recognized as a supreme authority (Suganami, 2007). In ES, sovereignty is also understood to contain a set of intersubjectivities that differentiates between those included in and excluded from international society (Suganami, 1984). This reinforces certain moral values, sociological understandings, and specific interactions among the actors, like stigmatization and socialization, as explained by Ayse Zarakol (2014). The possession of sovereignty, then, becomes a currency for a political entity to join an international society. Consequently, the inclusion and exclusion of political entities in an international society based on sovereignty lead to the creation of a world hierarchy (Mattern & Zarakol, 2016). This hierarchy or stratification of political entities, historically based on sovereignty and its related subjectivities, can be traced in European history and in its expansion beyond European civilization (Mattern & Zarakol, 2016). The different conceptions of sovereignty also spurred the internal debate between the pluralist and solidarist camps of ES scholars. The pluralist camp believes in the equal sovereignty of states and a narrow institutionalization of international society to enable the maintenance of international order (Buzan, 2014). In opposition, the solidarist camp interprets sovereignty in international society with a humanitarian drive to ensure the progress of humanity and liberalization (Buzan, 2014).

Therefore, taking international society as a conceptual framework might help examine the recent phenomenon of European Digital Sovereignty, as historically international society has been used to understand sovereignty in Europe and has spurred debate among ES scholars. In this research, I use the framework of international society by drawing three criteria based on the aforementioned explanation: (a) the institutionalization of rules based on common norms, interests, and identities; (b) the internal and external intersubjectivity of actors; and (c) the practice of sovereignty. The three criteria cover a general overview of an international society and are in line with the general consensus among ES scholars on what constitutes an international society. They are also chosen because they potentially give polemical space to both the solidarist and pluralist camps of the English School. In the next subsection, I explain the other framework, standard of civilization, and how it relates to international society.

3.2 Standard of Civilization

The concept of standard of civilization is significantly linked to international society. As previously mentioned, (European) international society historically implies membership of an exclusive club of sovereign states with specific norms, identities, interests, and rules of conduct. The intersubjectivity of international society socially and culturally constructs a dichotomous understanding of who is civilized and who is barbarian. This understanding was legally manifested in the nineteenth century, particularly in international treaties and legal documents (Buzan, 2014b). Beyond implying membership of sovereign states, the understanding of civilization also differentiates modes of interaction, behaviors, and interests (Buzan, 2014b).

Gerrit Gong is among the first ES scholars to have extensively developed the idea of standard of civilization within the ES framework. The standard of civilization is understood to be a set of criteria by which social and political entities are categorized through moral hierarchy, political capacities, and perceptions of interaction (Gong, 1984). This categorization determines the degree to which a political entity integrates into an international society. Subsequently, it organizes and manages the relationship between those political entities within and outside an international society (Gong, 1984). The word civilization implies the active role of international society members in civilizing the rest of the world (Keal, 2003). The idea of the standard of civilization is also used to gate-keep international society membership through inclusion and exclusion mechanisms (Buzan, 2014b). Such mechanisms can be found, among others, in the forms of conditionality, discrimination, stigmatization, socialization, and social closure (Buzan, 2014b; Naylor, 2018). The adoption of identity, interests, governance, and/or religion used to be a prerequisite for joining the European international society. This practice was also utilized for practical purposes such as opening commercial access, gaining geopolitical influence, and even justifying colonialism (Buzan, 2014b). According to Gong (1984), the concept of standard of civilization drove a search for extraterritoriality and unequal relations by imperial powers in Europe with non-European subjects to gain material and political benefits. Moral hierarchy and pragmatic purpose, hence, are interlinked in the idea of standard of civilization.

The discussion about the standard of civilization advances ES theory on the expansion of international society. As the Western-dominated standard of civilization expands, the clash of civilizations is inevitable. The expansion of international society confronts non-western civilizations directly by pressuring them culturally and politically (Gong, 1984). The demand for conformity with western values and practices in the expansion of European international society results in complicated and dynamic interactions with non-western civilizations (Gong, 1984). This demand was institutionalized in international laws, particularly those written by Western imperial powers in the past (Dunne & Reus-Smit, 2017). This includes the adoption of European-styled sovereignty as a mode of conduct in managing international relations. While the standard of civilization initially imposed a stratification of political entities, it also brought the idea of universal equal sovereignty, especially after the Second World War, when the decolonization process started across the globe (Buzan, 2017). The expansion of international society, thus, is also an expansion of universal sovereign equality.

In recent years, the idea of the standard of civilization has been further developed through several efforts by ES scholars to investigate the legacy of colonialism and imperialism. This leads to a critical reassessment of Western-dominated agendas, values, and governance in world politics, such as human rights, market liberalization, and democracy. Those agendas are promoted by creating civilizational categories and institutionalizations. These efforts redefine the membership of the global international society and, in turn, drive specific interactions. The roles of moral hierarchy and pragmatic purpose in those topics are generally investigated by emphasizing conditionality and conformity. Buzan (2014) believes that the standard of civilization, in other rhetoric or forms, persists in international relations as a practice. This concept, according to Buzan, can be used as an analytical tool for ES scholarship to reassess the claims of modernity and progress in international relations. This also advances the internal ES debate between the pluralist and solidarist wings, especially over whether the contemporary standard of civilization pushes a progressive agenda for humanity or constitutes a blatant attack on universal sovereign equality.

The aforementioned explanation of the standard of civilization indicates the concept's quality for better understanding a contemporary international relations phenomenon. This research might benefit from such a conceptualization because it covers the moral hierarchy and pragmatic purpose dimensions of European Digital Sovereignty. As the idea of the standard of civilization is inherently related to sovereignty, this research would potentially add an analytical layer of understanding to the expansion of international society on cyberspace. This research will also interrogate the roles of conformity and conditionality in European Digital Sovereignty, particularly towards external actors. The potential expansionist nature of European Digital Sovereignty will also be investigated. By utilizing the standard of civilization in this research, I also intend to push further the debate between the solidarist and pluralist camps of ES on cyberspace, a space that ES scholars still largely neglect. In the next section, I begin to mobilize both frameworks in the case of European Digital Sovereignty.

4 Discussion

The discussion of this research is structured into four sections. First, I explore the origin of digital sovereignty as an emerging idea in the politics of cyberspace. This section is essential to define what digital sovereignty actually means. Second, I discuss the emergence of European Digital Sovereignty by focusing on its main attributes and dimensions. I demonstrate how European Digital Sovereignty is socially and politically constructed by taking into account the EU's official statements, regulations, and practices. This section also identifies important actors in the construction of European Digital Sovereignty. The third section analyzes European Digital Sovereignty through the lens of international society; I examine the institutionalization, intersubjectivity, and practice of European Digital Sovereignty. The fourth section investigates European Digital Sovereignty as a standard of civilization in cyberspace, highlighting in particular its expansionist dimension and how it affects other political entities externally.

4.1 The Emergence of Digital Sovereignty

The exponential rise of internet-related information technology has significantly influenced international politics in the last few decades by challenging the traditional authority of sovereign state actors. Traditional state actors' exclusive authority over regulating information and the externalities of technology is dramatically confronted, as the internet provides anonymity and extraterritoriality beyond the physical realm that state actors are used to regulating (Moore, 2018). State actors are also confronted with the unprecedented pace and scale of information; more than that, they have to deal with the unprecedented pace and scale of the technology itself, including its externalities. The rise of new actors related to cyberspace adds another layer of complication to global politics (Susskind, 2020). Digital transformation, usually defined as all the dynamics related to the internet, creates a new landscape in which various actors are involved and determined to redefine world politics on cyberspace, particularly in relation to sovereignty.

As commonly understood by IR and political science scholars, sovereignty typically refers to the independence and supreme authority of an actor, underpinned by non-interference principles (Jackson, 1990). Historically, sovereignty has been almost exclusively attributed to state actors, as it has become the most essential prerequisite for defining a state actor (Jackson, 1990). It provides legitimacy for state actors in monopolizing formal authority and violence, enforcing rules and laws, and joining international society (Sorensen, 1999). Sovereignty is associated with the rule of law, control over subjects, and territoriality (Sorensen, 1999). In that regard, digital transformation poses questions for such an understanding of sovereignty and, subsequently, stimulates new debates, both empirically and theoretically.

There are three broad approaches to the relations between the internet and sovereignty: cyber-exceptionalism, state-centred sovereignty, and multistakeholder governance. The first approach holds that the internet is a distinctively unique reality that cannot simply be conceived with the traditional notion of sovereignty. The virtual space of the internet is not only inherently transnational and globally connected, but also too complex to be regulated under national jurisdictions. The cyber-exceptionalists also argue that the internet has such a pervasive, liberalizing, and globalizing nature that its scale, scope, and pace will never be sufficiently regulated by state actors (Pohle & Thiel, 2020). This camp also notes that the internet should inherently challenge traditional political authority by giving increasing authority and, in turn, sovereignty to more decentralized actors, such as individuals, business groups, and collective cyber groups (Pohle & Thiel, 2020). One of the most prominent manifestations of this idea can be found in the technology of blockchain and cryptocurrencies. The second approach, state-centred sovereignty, refers to those who believe that the virtual realm of cyberspace must be effectively regulated by state actors (Wu, 1997). The concerns are chiefly justified by security and economic reasons, which state actors are mandated to address. This camp believes state actors should remain the supreme authority on cyberspace to ensure the national interest (Wu, 1997). The problems of critical infrastructure, socio-cultural sensitivities, domestic economic protection, and national unity are usually cited to justify this position. The third camp, multistakeholder governance, approaches the internet with an emphasis on governance rather than governments. This camp proposes an approach in which cyberspace is rendered through the involvement of multiple actors and modes of governance (Pohle & Thiel, 2020). It believes that the internet should not radically change traditional sovereignty, but rather creates new opportunities for different kinds of global decision-making with more inclusive actors and decentralized authorities (Pohle & Thiel, 2020).

These three camps drive different conceptions of how digital sovereignty should be defined. The term is often used interchangeably with others such as technological sovereignty, cyber sovereignty, internet sovereignty, or data sovereignty (Couture & Toupin, 2019). For those who lean toward cyber-exceptionalism, digital sovereignty would typically be the autonomy of an actor to shape its own digital destiny (Pohle & Thiel, 2020).
For the state-centred approach, digital sovereignty refers to the capacity of state actors to regulate their cyberspace and all related technologies within their territorial boundaries and over their subjects (Wu, 1997). The idea of multistakeholder governance drives the definition of digital sovereignty towards a fragmented control over each actor's digital destiny in relation to other actors in cyberspace and related technologies (Pohle & Thiel, 2020). Drawing on these three approaches, I will use my previous definition (Adonis, 2019) of digital sovereignty as "to what extent actors can control, govern, exercise, transfer, and use digital information, communication, and infrastructure." This understanding will be complemented by Luciano Floridi's (2020) emphasis on controlling power and the legitimacy of authority in cyberspace. This research will understand Digital Sovereignty as the ability, power, and authority of an actor to control cyberspace and all its related technologies. This includes the contents, platforms, and infrastructures connected to the internet. This definition is helpful as it provides a common denominator between the different approaches. It does not specify which actor holds digital sovereignty; rather, it focuses on their qualities as actors in cyberspace. This is also in line with English School assumptions, which can interpret ability as practice, and power and authority as intersubjectivity and institutionalization. This definition, then, is used to understand the European Digital Sovereignty discussed in the next section.

4.2 The European Digital Sovereignty: Cyberspace with European Values

The emergence of European Digital Sovereignty has not been pushed only by the theoretical developments in scholarly writing demonstrated in the previous paragraphs. It has also been driven by several political dynamics related to cyberspace in the past few years. The cases of Edward Snowden and Julian Assange raised further questions about security and politics (Siebert, 2021); both cases rekindled the debates between security and liberty in cyberspace, particularly in the context of Western democracies. At the same time, illiberal countries like China and Russia have increasingly tightened their grip on cyberspace regulation (Carlin & Graff, 2018). China is famously known for its Great Firewall, which isolates China's cyberspace from the rest of the world. This is compounded by its alliance with Russia to reform global internet governance to be more state-centred and less democratic (Deibert & Crete-Nishihata, 2012). Both China and Russia argue on the basis of national security and the primacy of states as the ultimate sovereigns in international relations (Deibert & Crete-Nishihata, 2012). The picture is further complicated by the fact that United States-based tech firms like Google, Amazon, Facebook, Apple, and Microsoft benefit, largely unrestricted, from the lack of cyberspace regulation in the world. This affects nation-states fiscally, as it is harder for state actors to collect tax from those giant digital companies (Barker, 2020). Politically, moreover, those big tech firms have been shown to enable political polarization: disinformation, misinformation, and hoaxes on social media have influenced the political landscape, particularly in Europe (Moore, 2018). Further social challenges, like mental health crises and addictions stemming from minimal control of social media platforms, are also a global plague (Zuboff, 2019). Cryptocurrencies likewise challenge state actors, as they work beyond physical territory and use blockchain, an even deeper technology beyond current state actors' capability to regulate (Cacciatori, 2020). The Covid-19 crisis pushes the urgency of having a sense of control, power, and legitimacy in cyberspace even further, as most economic activities in Europe require the internet in the pandemic era. This sets the background for the necessity of constructing European Digital Sovereignty.

The idea of formulating European Digital Sovereignty has had its roots since 2015, when the former President of the European Commission, Jean-Claude Juncker, proposed a Digital Single Market strategy (EC, 2015); it was further reinforced by the European Digital Strategy in 2018. However, the term "Europe's Digital Sovereignty" entered an EU official document in 2020, when the President of the European Commission, Ursula von der Leyen, mentioned it in her State of the Union speech (EC, 2020a). She also wrote an opinion stating that European Digital Sovereignty is "the capability that Europe must have to make its own choices, based on its own values, respecting its own rules" (EC, 2020b). This means that the European Union must have the control and capability to exercise its own independence and autonomy in navigating its digital destiny based on European values. The European values, as stated in the Treaty establishing a Constitution for Europe of 2005, refer to six values: human dignity, human rights, freedom, democracy, equality, and the rule of law (EU, 2005). These values have been the moral underpinning of the European Union since its establishment. The word "rules" used by von der Leyen signals an intention of the European Union to formulate and enforce sets of unique European regulation on its own terms.

In her official statement, Ursula von der Leyen elaborates that European Digital Sovereignty is structured in the European Digital Strategy. The strategy has three objectives: (a) technology that works for people; (b) a fair and competitive economy; and (c) an open, democratic, and sustainable society (EC, 2020a). These correspond to the three areas of European Digital Sovereignty: (a) data, (b) technology, and (c) infrastructure (EC, 2020b). In each area, the European Union consistently emphasizes two rationales: values and materials. In the area of data, the European Union underscores the importance of personal liberty, freedom, and human rights as non-negotiable principles for personal data in cyberspace, alongside the intention to enhance the data economy for commercial purposes (EC, 2020b). For technology, the EU wants to improve the capability of Artificial Intelligence and the privacy aspects of the technology (EC, 2020b). And in the infrastructure area, the EU aims to invest heavily in strategic information technologies and, in parallel, to address the digital divide, which runs against the value of equality in European values. This intention has been manifested in numerous EU policies since 2017, ranging from massive investments in supercomputers and cloud computing, through the training and standardization of Artificial Intelligence, to strengthening the EU's cyber agencies. Above all, the EU proactively promotes its norms and regulations on cyberspace, internally and externally, chiefly via the General Data Protection Regulation (GDPR) (Goddard, 2017). Within the EU, the GDPR has become the main reference for any EU member state's policy on cyberspace.
The GDPR regulates cyberspace with the liberal principles enshrined in European values. It comprehensively addresses an extensive range of issues in cyberspace, from personal data and digital economic activities to the right to be forgotten and digital intellectual property rights (Goddard, 2017). It also covers the security aspect of cyberspace. The GDPR is widely regarded as the most comprehensive cyberspace regulation, and it has been emulated beyond the European Union, in countries such as Indonesia, Japan, and Brazil (Adonis, 2020). Given its quality and impacts, both internal and external, the GDPR is arguably the strongest and most visible form of European Digital Sovereignty.

European Digital Sovereignty is further detailed in several EU official documents, including but not limited to the European Digital Compass 2030, the European Digital Vision 2020, the Digital Decade 2030, and the EU Commission document on Shaping Europe's Digital Future (EC, 2021). Those documents explain further what European Digital Sovereignty means and what the action plans are. Three notable patterns consistently appear in them. First, the EU's strategy on Digital Sovereignty is pervasive: cyberspace, technology, and digital issues are frequently interlinked with other domains, namely the economy, security, society, culture, health, and the environment. This displays the EU's intention not to treat cyberspace in isolation. Second, there is a clear indication that the EU has a long-term agenda and future-oriented plans in formulating and enforcing European Digital Sovereignty. This long-term agenda shows that the EU treats European Digital Sovereignty as a present and future strategic interest, which will potentially drive the EU to place cyberspace among its most important agenda priorities. Third, and related to the second pattern, the documents frequently mention the EU's ambition to play a stronger role in the globalized world on issues related to cyberspace. This signifies that the idea of European Digital Sovereignty has an external dimension beyond the EU.

The foregoing exploration has provided a general overview of European Digital Sovereignty. In the following paragraphs, I analyze to what extent the English School conceptual toolbox can make sense of this phenomenon.

4.3 The European Digital Sovereignty as a Cyber International Society

I argue that European Digital Sovereignty can be understood as a new emerging international society in cyberspace constructed by the European Union. The virtual realm of cyberspace has long been understood as a lawless, normless space on non-physical territories with anonymous users, where actors and society had never been well defined in legal terms. The most essential feature of European Digital Sovereignty is the GDPR, which defines the legality of internet users. It formalizes internet users as legal subjects, whether individual users, companies and firms, institutions, or state actors (Albrecht, 2016). It thereby determines the actor in cyberspace, since previously no comprehensive international law or regulation on cyberspace had successfully determined actors as legal beings. Determining actors as legal entities is critical because it reifies actorness in the digital space, where territoriality and identity differ from the physical realm. The European Digital Sovereignty project also interlinks cyberspace with physical space, in the sense that there is now a legal corridor through which activities in cyberspace can carry over into the physical realm more extensively, and vice versa (Albrecht, 2016). Beyond reifying actors and connecting cyberspace to the physical realm, the legality of cyberspace is a crucial variable because it constitutes the rules of conduct of a society.

Furthermore, I argue that this legality of European Digital Sovereignty paves the way for the emergence of a European cyber international society, because legality is a way to institutionalize rules based on common norms, interests, and identities. The GDPR does not only define actorness; it also converges the common norms, interests, and identities of the EU (and its member states and citizens) in cyberspace. Nor is the GDPR the only law regulating the digital realm in the EU. Other regulations, such as the Digital Services Act and the Digital Markets Act, display the materialization of the EU's common norms, interests, and identities in the digital economy. These regulations under the European Digital Sovereignty project reinforce the EU's commitment to the protection of fundamental rights and subsequently serve the interests, values, and identities stipulated in previous EU treaties and policies.

One can argue that these common interests, values, and identities were already pre-defined by the European Union. However, imposing them on cyberspace has different consequences. It creates a new internal and external intersubjectivity that defines the EU distinctively from other actors in cyberspace. In cyberspace, it is now possible to determine which actors carry European values, identities, and interests: those who comply with the GDPR, are subject to EU digital policies, and are involved in the rules of conduct under European Digital Sovereignty projects. With that understanding, EU member states and citizens are, interestingly, not the only members of this cyber international society; anyone who conforms with the European Digital Sovereignty projects can be understood as part of it. Consequently, intersubjective attributions also attach to those who do not comply with European values in cyberspace, and a negative connotation is easily attributed to them. For example, EU stakeholders have become far more critical of Chinese surveillance technology and have put it under political scrutiny (Politico, 2019). The same goes for Big Tech firms, which are regularly challenged and criticized by European civil society over privacy (Petropoulos, 2021). Some also have to deal with the consequences of not complying with European values in cyberspace. This brings us to the next exploration: the practice of sovereignty.

The practice of European Digital Sovereignty has considerably demonstrated ability, power, and authority in line with European values. The European Union's ability and power to punish are arguably proven in its dealings with companies that breach the GDPR.
According to Tessian (2021), 800 fines have been imposed on companies within the European Economic Area and the United Kingdom since the GDPR came into force in 2018. The EU has also fined Big Tech companies such as Google and Facebook. The ability and power to punish do not emerge from the EU's law enforcement capacity alone: to be converted into authority, they must have credibility grounded in legitimizing principles. This political authority, with its credibility and legitimacy, is what enables the practice of sovereignty. In this case, European Digital Sovereignty benefits from the already established European values stipulated in EU treaties and policies.

The practice of European Digital Sovereignty also takes place in the form of the ability and power to support. As previously mentioned, the EU has invested massively in strengthening its cybersecurity agency, ENISA (EC, 2021). The EU has also invested more resources in digital skills and Artificial Intelligence. Other key technologies, such as cloud computing, supercomputing, Big Data, and blockchain, are likewise developed by the EU under the idea of Digital Sovereignty (EC, 2020b). Sovereignty in cyberspace depends heavily on ability rather than mere control of territory; by investing heavily in key technologies, the EU demonstrates its commitment to managing its digital sovereignty through the possession of ability.

Based on the explanation above, European Digital Sovereignty thus ticks all the boxes of the ideal-typical tenets of international society. Moreover, it uniquely creates a new international society in the virtual realm, with distinctive identities, interests, and values bound by specific rules. This shows the extent to which the concept of international society can help us understand European Digital Sovereignty. I concede, however, that there is still room for further critical examination of the relation between European Digital Sovereignty and international society. The discussion so far assumes a corporate and fixed European Digital Sovereignty ruled by the EU; examining the diversity of actors involved might give a more detailed understanding of European Digital Sovereignty through the lens of international society. Still, the discussion shows that, in general, European Digital Sovereignty can be argued to be a new cyber international society constructed by the European Union.

4.4 The European Digital Sovereignty as a Standard of Digital Civilization

In this section, I argue that European Digital Sovereignty has an expansionist nature, taking the form of a new cyber standard of civilization. This expansionist nature stems from the fact that the idea of European Digital Sovereignty inherently bears dual rationales: a pragmatic purpose and a moral hierarchy. European Digital Sovereignty emerges from the necessity to regulate cyberspace in order to regain the European Union's position in world politics, particularly in cyberspace; given the enormous challenges related to cyberspace, the EU's intention to create digital sovereignty can be interrogated through those two rationales.

First, there is a pragmatic purpose related to the political and economic interests of the European Union. Given the enormous power of US-based tech firms, the EU has little competitive advantage in the digital economy (Barker, 2020). Chinese companies have become tech giants not only because of their commercial performance but also because of their innovation in information technology, as demonstrated by the likes of Huawei, Tencent, and TikTok (Politico, 2021). Only two European tech companies ranked among the fifty biggest digital companies in the world as of 2020: Skype and Spotify (Politico, 2021). This indicates that Europe still has a long way to go to compete with China and the US in the digital economy; hence the necessity to protect the domestic digital economy ecosystem within the European Union and to build competitiveness externally. The commercial growth and innovation entailed in the digital economy have proven convertible to political and security purposes (Politico, 2021). The cases of 5G and Artificial Intelligence have been situated at the centre of political debates in the European Union (Barker, 2020). Geopolitically, cyberspace matters, and the EU ranks a distant third behind the US and China. This creates anxiety within the European Union and its member states and prompts them to react accordingly.

This is compounded by the second rationale: a moral hierarchy. The geopolitical dynamics of cyberspace have challenged European values and identities, namely respect for human dignity, human rights, freedom, democracy, equality and the rule of law. Recent cyberspace-related phenomena have confronted those European values directly. Externally, the China-Russia alliance on cyberspace explicitly advances illiberal agendas on global internet governance, in opposition to freedom, democracy, and human rights. The unchecked Big Tech firms from the US and China also demonstrate a lack of the rule of law in cyberspace that detrimentally undermines the interests of the European Union. Internally, the rise of populism using social media to disrupt information and distribute hoaxes puts European values at risk; the rise of right-wing populism, in particular, is regularly associated with racism, xenophobia, and discrimination, all of them attacks on human dignity, human rights, and equality (Moore, 2018). These two recurring challenges stimulate the European Union's intention to create its own conception of Digital Sovereignty by constructing a moral hierarchy. This moral hierarchy puts the European Union at the top of the ranking, as the only international actor deemed to bear this moral responsibility and act accordingly. Such entitlement transforms the idea of European Digital Sovereignty into a standard of civilization in cyberspace.

This standard of digital civilization in the form of European Digital Sovereignty can be examined by identifying patterns of conditionality and conformity. Again, the GDPR is an instructive example of the standard of civilization at work in digital space. The GDPR requires anyone who intends to access the European market to comply with it (Albrecht, 2016). Given the size of the European market, this conditionality puts considerable pressure on companies around the globe to comply. While market access is effective in getting non-EU companies to conform, it leaves little room for critical engagement from non-EU actors on the GDPR.
There is limited room to contend with the GDPR and European Digital Sovereignty, especially when the regulation is also framed with purportedly virtuous moral underpinnings. This practice has long been associated with the European Union's conduct of its external affairs: conditionality and conformity on value-driven issues have been used in exchange for market access and for status or membership in international society (Manners, 2002). Interestingly, unlike in the past, possessing sovereignty in digital space is not required to join the European cyber international society. Rather, European Digital Sovereignty itself becomes a cyber international society; the two become synonymous and interchangeable. Other kinds of digital sovereignty must conform in order to be included in this new cyber international society. The pursuit of status in the case of European Digital Sovereignty shows in how actors seek to be associated with European values, from supporting freedom and democracy to protecting human rights in cyberspace. This establishes gate-kept membership of a digitally civilized club in the European cyber international society, membership that readily elevates an actor's status. This understanding is reinforced by proactive efforts by EU stakeholders and civil society to demonize actors deemed digitally uncivilized or out of line with European Digital Sovereignty values. This can be seen in EU stakeholders' and civil society's condemnation of Russia for its alleged involvement in manufacturing hoaxes and disinformation across Europe in the past few years (Carnegie, 2020). The EU has also sanctioned China for allegedly launching cyber attacks on the West (Politico, 2021b). This indicates the clash of (cyber) civilizations predicted by Gong (1984), who believes that the expansion of international society in the form of the standard of civilization leads to conflictual clashes between civilizations or international societies. As the project of European Digital Sovereignty is pushed further, potential clashes of civilization in cyberspace become foreseeable.

Apart from conflictual relations, European Digital Sovereignty as a standard of (digital) civilization also has constructive effects in international society. Some countries use the EU's digital strategies and regulations as references in regulating their own digital space. As previously mentioned, countries like Brazil, Thailand, Taiwan, and India emulate the EU's digital strategy and regulation because it is typically deemed to offer comprehensive provisions (Comforte, 2021). This emulation distinguishes the European version of digital sovereignty from others; little is known so far about other types of digital sovereignty, such as China's and Russia's, being modeled and implemented in other countries. This emulation also means that the cyber international society made by the EU is expanding. It repeats the narrative of the expansion of European international society in the past, only this time in the modern era and in cyberspace. It is therefore worth anticipating what differences will unfold this time: whether it will create a new type of colonialism or imperialism, cooperation, or other kinds of interaction. Another difference that needs to be addressed is, again, the variety of actors; it is intriguing to examine non-state actors' involvement with the European standard of digital civilization. The standard of civilization as a framework, I argue, will remain relevant for examining, understanding, and potentially predicting these developments, and the concept still has much to offer in understanding political phenomena in cyberspace.

5 Conclusion

The foregoing discussion has answered the research question of to what extent the English School's conceptual toolbox can examine the case of European Digital Sovereignty. Employing an interpretive approach and the ideal-types method, this research found that the ES framework of 'international society' is helpful in understanding the general nature of European Digital Sovereignty. The role of international law, or rules of conduct, in the foundational construction of international society can be identified in European Digital Sovereignty, as it creates an unprecedentedly extensive legal framework for cyberspace that determines the actorness of cyberspace users. This paves the way for European Digital Sovereignty to be interpreted as a new cyber international society, with a significant degree of institutionalization of rules bound by common interests, identities, and values. The project of European Digital Sovereignty also drives new intersubjectivities that categorize actors according to their identity, behavior, and compliance with norms. This in turn leads EU stakeholders to practice digital sovereignty by enforcing laws and assigning connotative attributions. At the same time, the practice of European digital sovereignty underscores the importance of technological ability, pursued through investment in key technologies and digital skills. Together, these variables constitute the ideal types by which European Digital Sovereignty can be understood as a cyber international society.

This research also found that the concept of the 'standard of civilization' is useful for engaging critically with the project of European Digital Sovereignty, particularly for understanding the expansion of international society. By interrogating the moral hierarchy and the pragmatic purpose, I found the constitutive elements inherent in European Digital Sovereignty to resemble an expansionist standard of civilization. The economic and strategic concerns loaded into the project of European digital sovereignty intertwine with a moral categorization that gives the EU ethical entitlements. Consequently, this leads to a clash of civilizations with other political entities, but it also leads other political entities to practices of emulation. This emulation becomes a gate-keeping mechanism for membership of European Digital Sovereignty: by emulating European norms on cyberspace, other political entities benefit from status ascendancy in international society, along with access to the European market.

However, there are several limits to the English School conceptual toolbox in understanding European Digital Sovereignty. While it is significantly useful for investigating the systemic level and interrogating the raison de système, the concept of international society has to assume the state as a fixed and unitary actor from which to draw common identities, values, and interests. The problem with cyberspace is that actors are far more diversified and hardly constitute unitary, fixed units. The basic ES assumption that states are primary actors potentially needs to be revisited, since the dynamics of cyberspace show the prominent roles of Big Tech firms in managing their platforms and potentially setting up their own digital sovereignty. The problem of actor diversity is also present in this research's mobilization of the framework of the standard of civilization, which remains limited in its treatment of non-state actors. Nonetheless, this research helps fill the absence of research on cyberspace from the English School perspective. It also enriches the literature on digital sovereignty, particularly by providing a perspective beyond the cyber-exceptionalist, state-centred, and multistakeholder approaches. Lastly, this research should not be an end in itself. Given the limited word count and scope, there remain avenues for future research, especially examining digital sovereignty not only in Europe but also in the Global South. Other English School frameworks can also be mobilized to push the discussion of cyberspace in International Relations further.

Disclaimer and Acknowledgement  This paper is an improved version of the author's master's dissertation at the London School of Economics and Political Science in 2020/2021, with some updates and revisions. I would like to thank Federica Bicchi for her supervision and guidance during the research process.

References

Adonis, A. A. (2019). Critical engagement on digital sovereignty in international relations: Actor transformation and global hierarchy. Global: Jurnal Politik Internasional, 21(2), 262–282.
Adonis, A. A. (2020). European digital sovereignty: EU's projection of normative power. https://globalmedia.mit.edu/2020/09/09/european-digital-sovereignty-eus-projection-of-normative-power/
Albrecht, J. P. (2016). How the GDPR will change the world. European Data Protection Law Review, 2, 287.
Barker, T. (2020). Europe technology sovereignty's Von der Leyen. https://foreignpolicy.com/2020/01/16/europe-technology-sovereignty-von-der-leyen/
Barrinha, A., & Renard, T. (2017). Cyber-diplomacy: The making of an international society in the digital age. Global Affairs, 3(4–5), 353–364.
Beaumier, G., et al. (2020). Global regulations for a digital economy: Between new and old challenges. Global Policy, 11(4), 515–522.
Budnitsky, S., & Jia, L. (2018). Branding internet sovereignty: Digital media and the Chinese–Russian cyberalliance. European Journal of Cultural Studies, 21(5), 594–613.
Bull, H. (1977). The anarchical society: A study of order in world politics. Macmillan.
Buzan, B. (2004). From international to world society? English school theory and the social structure of globalisation (Vol. 95). Cambridge University Press.
Buzan, B. (2014). An introduction to the English School of International Relations. Polity.
Buzan, B. (2014b). The 'standard of civilisation' as an English school concept. Millennium, 42(3), 576–594.
Buzan, B. (2017). Universal sovereignty. In The globalization of international society (pp. 227–247).
Cacciatori, M. (2020). The English school, cryptocurrencies, and the technological foundations of world society. Cambridge Review of International Affairs, 33(4), 477–479.
Carlin, J. P., & Graff, G. M. (2018). Dawn of the code war: America's battle against Russia, China, and the rising global cyber threat. Hachette UK.
Carnegie. (2020). EU's role in fighting disinformation: Taking back initiative. https://carnegieendowment.org/2020/07/15/eu-s-role-in-fighting-disinformation-taking-back-initiative-pub-82286
Celeste, E. (2021). Digital sovereignty in the EU: Challenges and future perspectives. In Data protection beyond borders: Transatlantic perspectives on extraterritoriality and sovereignty (pp. 211–228).
Clarke, R., & Knake, R. (2012). Cyber war: The next threat to national security and what to do about it. Ecco.
Comforte. (2021). 13 countries with GDPR-like data privacy laws. https://insights.comforte.com/13-countries-with-gdpr-like-data-privacy-laws
Couture, S., & Toupin, S. (2019). What does the notion of "sovereignty" mean when referring to the digital? New Media & Society, 21(10), 2305–2322.
Cronin, B. (2003). Institutions for the common good: International protection regimes in international society. Cambridge University Press.
Deibert, R. J., & Crete-Nishihata, M. (2012). Global governance and the spread of cyberspace controls. Global Governance, 18(3), 339–361.
Diesen, G. (2021). Great power politics in the fourth industrial revolution: The geoeconomics of technological sovereignty. Bloomsbury Publishing.
Dunne, T. (1995). The social construction of international society. European Journal of International Relations, 1(3), 367–389. https://doi.org/10.1177/1354066195001003003
Dunne, T., & Reus-Smit, C. (2017). The globalization of international society. Oxford University Press.
EC (European Commission). (2015). Official document: European digital single market. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52015DC0192
EC (European Commission). (2020a). Official speech: State of the union 2020. https://ec.europa.eu/commission/presscorner/detail/en/SPEECH_20_1655
EC (European Commission). (2020b). Shaping Europe's digital future. https://ec.europa.eu/commission/presscorner/detail/en/AC_20_260
EC (European Commission). (2021). Digital compass. https://digital-strategy.ec.europa.eu/en/policies/digital-compass
Eriksson, J., & Giacomello, G. (2014). International relations, cybersecurity, and content analysis: A constructivist approach. In The global politics of science and technology (Vol. 2, pp. 205–219). Springer.
EU (European Union). (2005). Official document: Treaty establishing a European constitution. https://europa.eu/european-union/sites/default/files/docs/body/treaty_establishing_a_constitution_for_europe_en.pdf
Finnemore, M. (2001). Exporting the English school. Review of International Studies, 27(3), 509–513.
Floridi, L. (2020). The fight for digital sovereignty: What it is, and why it matters, especially for the EU. Philosophy & Technology, 33, 369–378. https://doi.org/10.1007/s13347-020-00423-6
Goddard, M. (2017). The EU General Data Protection Regulation (GDPR): European regulation that has a global impact. International Journal of Market Research, 59(6), 703–705.
Gong, G. W. (1984). The standard of 'civilization' in international society. Clarendon Press.
Gueham, F. (2017). Digital sovereignty: Steps towards a new system of internet governance. Fondapol.
Hamonangan, I., & Assegaff, Z. (2020). Cyber diplomacy: Menuju masyarakat internasional yang damai di era digital [Cyber diplomacy: Towards a peaceful international society in the digital era]. Padjadjaran Journal of International Relations, 1(3), 311–333.
Jackson, R. H. (1990). Quasi-states: Sovereignty, international relations and the third world. Cambridge University Press.
Keal, P. (2003). European conquest and the rights of indigenous peoples: The moral backwardness of international society. Cambridge University Press.
Keene, E. (2009). International society as an ideal type. In Theorising international society (pp. 104–124). Palgrave Macmillan.
Lamont, C. (2015). Research methods in international relations. Sage.
Little, R. (1995). Neorealism and the English school: A methodological, ontological and theoretical reassessment. European Journal of International Relations, 1(1), 9–34.
Manners, I. (2002). Normative power Europe: A contradiction in terms? JCMS: Journal of Common Market Studies, 40(2), 235–258.
Manning, C. A. W. (1962). The nature of international society. Macmillan.
Mattern, J. B., & Zarakol, A. (2016). Hierarchies in world politics. International Organization, 70(3), 623–654.
Mearsheimer, J. (2005). E. H. Carr vs. idealism: The battle rages on. International Relations, 19(2), 139–152.
Moore, M. (2018). Democracy hacked: How technology is destabilising global politics. Simon and Schuster.
Navari, C. (2009). Theorising international society: English school methods. Palgrave.
Navari, C. (2010). English school methodology and methods. In Oxford research encyclopedia of international studies.
Naylor, T. (2018). Social closure and international society: Status groups from the family of civilised nations to the G20. Routledge.
Neumann, I. B. (2001). The English school and the practices of world society. Review of International Studies, 27(3), 503–507.
Petropoulos, G. (2021). A European Union approach to regulating Big Tech. Communications of the ACM, 64(8), 24–26.
Pohle, J., & Thiel, T. (2020). Digital sovereignty. Internet Policy Review, 9(4), 1–19.
Politico. (2019). EU eyes privacy clampdown on China's surveillance. https://www.politico.eu/article/european-union-eyes-privacy-clampdown-on-china-surveillance-huawei/
Politico. (2021). EU was a Big Tech enforcer, not anymore. https://www.politico.eu/article/eu-big-tech-enforcer-us-china-gdpr-privacy-competition-apple-google-facebook-amazon/
Politico. (2021b). EU, US condemnation on China cyberattacks. https://www.politico.eu/article/europe-us-condemnation-china-state-sponsored-cyberattacks/
Reus-Smit, C. (2009). Constructivism and the English school. In C. Navari (Ed.), Theorising international society: English school methods (pp. 58–77). Palgrave.
Seth, S. (2011). Postcolonial theory and the critique of international relations. Millennium, 40(1), 167–183.
Siebert, Z. (2021). Digital sovereignty: EU contest for influence and leadership. https://www.boell.de/en/2021/02/10/digital-sovereignty-eu-contest-influence-and-leadership
Sørensen, G. (1999). Sovereignty: Change and continuity in a fundamental institution. Political Studies, 47(3), 590–604.
Spegele, R. (2005). Traditional political realism and the writing of history. In A. J. Bellamy (Ed.), International society and its critics (pp. 97–114). Oxford University Press.
Suganami, H. (1984). Japan's entry into international society. In H. Bull & A. Watson (Eds.), The expansion of international society (pp. 185–199). Oxford University Press.
Suganami, H. (2007). Understanding sovereignty through Kelsen/Schmitt. Review of International Studies, 33(3), 511–530.
Susskind, J. (2020). Future politics. Oxford University Press.
Tessian. (2021). Biggest GDPR fines in 2020. https://www.tessian.com/blog/biggest-gdpr-fines-2020/
Watson, A. (1984). European international society and its expansion. In H. Bull & A. Watson (Eds.), The expansion of international society (pp. 13–32). Oxford University Press.
Wheeler, N. J. (2000). Saving strangers: Humanitarian intervention in international society. Oxford University Press.
Wight, M. (1977). Systems of states (H. Bull, Ed.). Leicester University Press.
Wight, M. (1991). International theory: The three traditions (B. Porter & G. Wight, Eds.). Leicester University Press/Royal Institute of International Affairs.
Wu, T. (1997). Cyberspace sovereignty? The Internet and the international system. Harvard Journal of Law & Technology, 10(3), 647–666.
Zarakol, A. (2014). What made the modern world hang together: Socialisation or stigmatisation? International Theory, 6(2), 311–332.
Zuboff, S. (2019). The age of surveillance capitalism. Profile Books.

Chapter 8

Strategic Autonomy for Europe: Strength at Home and Strong in the World, Illusion or Realism

Paul Timmers

Abstract  The EU is challenged to defend its sovereignty and strengthen its strategic autonomy in the international system of states, given the threats of geopolitical conflicts and war, global challenges, and the disruptive nature of digital technologies. However, what is the meaning of EU sovereignty, especially in the digital age? Which forms of strategic autonomy are conceivable and desirable yet feasible? One approach is for the EU to show leadership in global collaboration in the common interest. A second is to settle for dependence on partnerships with 'like-minded' countries. A third would be to 'just' accept risks to European sovereignty. On the one hand, the EU has strengths, internationally and internally, in its democracy, its citizen-state relationship, its legislative acquis, and its experience in dealing with a diversity of interests. On the other hand, realistically, the EU will be forced to strike difficult compromises between these three approaches to strategic autonomy in the digital age, generally given the limited understanding of the co-construction of technology and society, and specifically given the EU's weaknesses in digital capabilities, capacities and decision-making powers.

Keywords  Strategic autonomy · Sovereignty · War · Global challenges · Values · European Union · EU · Brussels effect · International relations · IR · Digital age · Digital technologies · Platforms

This is a research-oriented adaptation of an essay for Èthos, Free University Netherlands (Timmers, 2022b), published in Buijs and Bosman (2022).

P. Timmers (*)
Research Associate, Oxford Internet Institute, University of Oxford, Oxford, UK
KU Leuven, Leuven, Belgium
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
F. Mazzi (ed.), The 2022 Yearbook of the Digital Governance Research Group, Digital Ethics Lab Yearbook, https://doi.org/10.1007/978-3-031-28678-0_8


1 Introduction

In 2018, Anu Bradford published 'The Brussels Effect' (Bradford, 2020). She argued that the EU has remarkable influence in the world through its legislation and standards, since companies and other countries worldwide conform to them. This influence comes about thanks to the size of the European market, at least when the EU is the first to introduce rules; she dubbed it the 'Brussels effect'. Two recent examples from the digital world are the GDPR¹ personal data protection legislation and the EU's Digital COVID Pass, with which 64 countries around the globe, with almost 1 billion inhabitants, comply (European Commission, 2021). Brussels policymakers are quite proud of this global influence. It puts the EU on the map, it shows that the EU has a form of strategic autonomy, and through this the EU can affirm and express its sovereignty (Floridi, 2020; Timmers, 2019). But other voices scorn this, saying that Europe may be good at regulating the world but is not good at creating companies with global impact. They say 'referees don't win' (Wolff, 2020). Both positions hold a certain truth.

The focus of this article is the question: to what extent can Europe achieve strategic autonomy, i.e., the means to safeguard and strengthen sovereignty? What are its strengths, and what place in the world can the EU aspire to? More specifically, strategic autonomy consists of capabilities, such as knowledge and skills; capacities, such as financial, manufacturing and defence resources; and control, all of which are necessary to decide and act upon one's own future in the economy, society and democracy.² This article does not seek to provide a complete answer to the question, but rather identifies elements of an answer and, in doing so, a number of research questions. The answer depends, firstly, on the kind of sovereignty that is aspired to and, secondly, on the feasibility of achieving the EU strategic autonomy necessary to provide for such sovereignty. To make the reflection more concrete, the digital world will regularly be taken as an example. The digital world changes perceptions of sovereignty and, given its pervasiveness and disruptive nature, exposes the need for and feasibility of strategic autonomy.

¹ General Data Protection Regulation.
² The 3C of strategic autonomy, namely capabilities, capacities, control (Timmers, 2021). Strategic autonomy in digital matters is often, confusingly, called digital sovereignty (Moerel & Timmers, 2021).


2 Strategic Autonomy and Sovereignty

On the one hand, Europe³ is proud of what it has achieved with its system of rules. These have made it possible to create a large internal market; free movement of people, goods, capital and services; the euro; and so on. On the other hand, Europe is anxious and feels threatened. Its rules help little against several external threats. President Putin is threatening the use of nuclear weapons less than 1500 km from Brussels. Former US President Donald Trump and his supporters, no friends of the EU, continue to make themselves heard. President Xi Jinping seems keen to use China's money, diplomacy and military power to impose a different world order, a historic overturning (Sheikh, 2016). Europe feels all the more powerless in the face of global threats that do not respect its borders and laws, such as cyber criminals, pandemics, and climate change. Europeans also feel weak vis-à-vis the digital giants, who ruthlessly overpower local decision-makers while rolling out their massive data-centres and, moreover, seem to be taking control of both data and all-powerful artificial intelligence (AI) for data analytics. Techno-state and techno-economic dystopia loom on the horizon. European and national sovereignty is being undermined. No wonder European Heads of State have strategic autonomy high on their agenda.

There is no unique and widely accepted definition of state sovereignty. It is like the proverbial elephant that must be described by a group of blindfolded people; indeed, perhaps it is an 'essentially contested concept' (Gallie, 1956). State sovereignty has aspects of identity, territoriality and power. It is about 'who we are', 'our' values and culture, and 'what belongs to us', such as territory and borders and the riches in our soil, as well as digital assets such as health and industrial data. Being a sovereign state presupposes recognition and respect by other countries; that is, an attribute of sovereignty is external legitimacy. State sovereignty is also about the relationship between state and citizen, where there must be mutual recognition, rights and obligations, and control; that is, another attribute of sovereignty is internal legitimacy (Biersteker, 2012). Clearly, sovereignty comes in degrees.

Sovereignty of the state and sovereignty of the individual are not to be confused, but they do relate to each other in internal legitimacy. Control over what belongs to us as individuals (our body and life, our thoughts, our preferences, our choices in social relations) can lead to tensions in the relationship between state and individual. Can personal freedom be curtailed in a pandemic? Does your identity as a citizen belong to you or to the state? Can it also be owned by Facebook or Google?

Our sociological-political understanding of what sovereignty means in the digital age continues to evolve.⁴ On the one hand, digital technology does not respect geographic borders and the system of states. Issues that were once under the control of the state, such as information and identification, are increasingly in the hands of major digital platforms. Digital technology such as blockchain enables radical decentralization and distribution of power. Rogue actors use digital technology to rupture the international system of norms and values of state behaviour. On the other hand, attempts are increasingly being made to bring the digital world under sovereign control. This is happening in a democratic way, amongst others through regulation in Europe with the US as a hesitant follower, and in a dictatorial way in China and Russia. An important field of research therefore continues to be the meaning of sovereignty in the digital age.⁵ As in past research on sovereignty, several perspectives need to be joined up: political science and international relations, law, economics, sociology, and ethics. Most importantly, these now need to be complemented by perspectives from digital technology, business economics and industrial policy.

³ Europe and EU will be used interchangeably unless otherwise indicated.
⁴ Glasze et al. (2022), Kello (2017).
⁵ Sovereignty in the digital age is not 'digital sovereignty', the latter being a misnomer as it usually rather means digital strategic autonomy, i.e. capabilities, capacities and control in the digital world in order to realise sovereignty.

3 Achieving Strategic Autonomy

Europe no longer controls the seas and has allowed much of the control of cyberspace to slip away. Some challenges, such as the climate, also go far beyond what an individual country or a regional bloc can deal with. How then can Europe achieve strategic autonomy? In most cases, Europe will work with others, because it must and also because it wants to. A self-sufficient, autarkic Europe is neither realistic nor desirable.⁶

One way forward for Europe is to continue with compromises, with give-and-take, working from the assumption that this would be both best for an open global economy and an overriding interest of all major geopolitical powers. In Europe, this risk management approach has been popular for the past 30 years, since the fall of the Wall and the opening of China. But if this had worked, EU strategic autonomy would not be Chefsache, a matter for the top leadership, today. It is clear that other approaches are needed.

More realistic, then, is to build exclusive arrangements with other countries, in strategic partnerships with like-minded countries. Within the EU, certain joint EU-level regulation and European (industrial) initiatives are clearly seeking to contribute to strategic autonomy, and some have even succeeded in creating global standards and global industrial players (such as Airbus or the Galileo satellite system). Reinforcement of strategic autonomy is certainly also possible through collaboration with like-minded non-EU countries. As a prominent and recent example, Europe is painfully vulnerable in the area of semiconductors. Car factories have already had to close because their chips were not delivered from abroad. The EU proposed in 2022 a strategic autonomy initiative consisting of legislation and investment, the EU Chips Act. This semiconductors initiative is wide open to the US and other like-minded countries such as Japan and Korea. The American chip manufacturer Intel has already pledged to invest €80 billion in the EU.

However, would cooperation with only friendly countries not be a too limited and possibly even risky strategy, reinforcing a counterproductive 'us against them' perspective? Or is it perhaps the winning strategy after all? President Reagan held firm against the Soviet Union with his 'evil empire' rhetoric. Some believe that his uncompromising black-and-white position contributed significantly to the collapse of the Soviet bloc. The liberal market democracy led by the US won, didn't it? The end of history, as coined by Fukuyama (2006). Nothing could be further from the truth. Most of the world's population is letting President Putin have his way in destroying democracy and freedom (see Fig. 8.1).

Fig. 8.1  The UN resolution on Russia's aggression against Ukraine had only 42% in favor

A third approach to strategic autonomy is to pursue globally shared interests and build globally shared assets. A good example is the hole in the ozone layer. This was addressed in 1987 by the Montreal Protocol, which banned the ozone-destroying CFC gases. It came about through cooperation between scientists and politicians. Now, over 30 years later, we see that the hole is starting to close. The ozone layer is an example of a globally shared resource, a global commons. In the digital world we may learn from this and apply, based on further research, insights from Elinor Ostrom, among others, albeit with the necessary care and nuance (Ostrom, 2015).⁷ In 2009, Ostrom was awarded the Nobel Prize in Economics for her research into common pools of natural resources (commons), which, subject to conditions, allow successful management through self-organization of stakeholders.

There are also opportunities in the digital world for global stakeholder cooperation for the common good and interest, while strengthening sovereignty. This is already done for the management of Internet domain names by ICANN and for Internet protocols by the IETF.⁸ Other digital examples include building and maintaining data on public health (e.g., COVID data), combating online child pornography, and security against cyber criminals. Is pursuing globally shared interests and building globally shared assets possible? Yes. Is it hard? Yes.

Tackling a problem at the global level is smart and sensible for several reasons. Paradoxically, each country's sovereignty can be strengthened by broad cooperation among countries. This is the case when threats or challenges exceed the capacity of a single state and when countries can share the burden of investments and thereby spend scarce resources on other sovereignty matters (such as shoring up defence or protecting core government information). An example is where the national security of each country is enhanced by jointly tackling cyber criminals. No country can do that on its own. The pursuit of the commons is then a case of self-interest properly understood, as the great political philosopher Alexis de Tocqueville put it (Tocqueville, 1864). Even if joint global management does not come for free.

⁶ There are exceptions. Generally, countries must keep exclusive control of the encryption of their most sensitive information, such as in defence and diplomatic relations.
⁷ One nuance is that Ostrom's empirical research is about common pool resources of a smaller scale than global, and that some global commons (a term used among others by the UN) are rather common goods or public goods than commons.
⁸ Internet Corporation for Assigned Names and Numbers and Internet Engineering Task Force, respectively.

4 Strengths and Weaknesses

One strength on which Europe can build, if it wants to play its part in the world, is its large internal market. The EU represents 20% of the world economy, with 450 million consumers and 20 million businesses. Further strengths are the EU Treaties and their basis in values,⁹ extensive specific legislation, and the European judiciary. As regards the latter, appeals to the Court in Luxembourg are not a futile exercise. On more than one occasion, European judges have shown remarkable power to guarantee a 'Europe of values'. This became clear in the repeated rejection of the exchange of personal data with the US on the grounds of inadequate guarantees, the Schrems I and II cases. No doubt the court's verdicts have led to some teeth grinding in data-analytics companies. Europe is also showing its legal power in the future digital world. When Elon Musk announced that he intended to take over and privatize Twitter on the grounds of 'freedom of speech', the European Commission said that he too would have to comply with the recent laws on access to and content on digital platforms (Espinoza, 2022).¹⁰

As said, the EU is built on European values. They play a major role in the citizen-state relationship and in the organization of the distribution of power. But it would be naive to say that values in Europe are uniform across the continent, across centuries, and across generations (Biolcati & Ladini, 2022). The historical development, at least in Western Europe, is that the state exists by the grace of the citizens and through democratic approval in elections, with political parties as a way of channelling social conflicts into democratic decision-making (Bickerton et al., 2022). Consequently, the lack of a strong party democracy at the European (i.e., EU) level is a worrying weakness that should be researched. The state must guarantee civil and fundamental rights. But citizenship also exists by the grace of the state. It is the state that has the right to determine 'who belongs' and who does not. It is the state that is allowed to restrict freedoms, as is regularly the case in economic affairs (cf. the extensive ruleset of the EU internal market) and in exceptional ways during the COVID pandemic and in times of war. And it is the extensive system of independent powers, trias politica, media and civil society that maintains a common understanding of power in society, i.e., foundational sovereignty (Bickerton et al., 2022), and safeguards rights, so that the relationship between individual sovereignty and state sovereignty, that is, internal legitimacy, can continue and evolve as 'we' want it.

Even if its values are not uniform across Europe, mutual legitimacy in the relation of state and citizen is different in Europe than in the US or China. In the US, grossly simplified, the symbol of legitimacy is unity around the flag, the star-spangled banner. In China it is the Party. Neither of these is what Europeans generally aspire to, but it is easier to say what this balance between the state and the citizen should not be than what it should be for Europe. That is one reason why internal legitimacy cannot easily be called a strength. A positive wording would help to make this foundation of European sovereignty, the democratic relationship between the state and the citizen, explicitly a strength, internally and thereby also externally. An Ansatz, a first approach, will be given at the end of this article.

What, then, are the weaknesses that can undermine strategic autonomy in Europe? It has often been said that Europe is internally divided and that, as a result, it organizes its institutional sovereignty in far too complex ways,¹¹ and that it is too often half-hearted, too slow and risk-averse. As to the latter, and as an illustration relevant to strategic autonomy, venture capital, which is indispensable to finance strategic companies, is far less available in the EU than in the US. The EU is strong in research and innovation but weak in scaling up smaller companies. European start-ups are often bought up by foreign parties with deep pockets. When this happens, Europe's strategic autonomy leaves by the front door.

Has anything changed recently, now that sovereignty is threatened? It appears so. First of all, those internal divisions: Europe has handled the problem of 5G security (whether or not to allow Huawei equipment into our telecommunications networks), COVID recovery funding and, most recently, its support to Ukraine in remarkable unity. Moreover, in all these cases, the European mandate and therefore joint EU action is not self-evident. 5G security is about national security, which is, by the EU Treaties, the prerogative of each Member State individually.¹² The COVID Recovery Fund broke the taboo over European-level loans with shared repayment. Providing weapons to Ukraine with EU money is a case of unprecedentedly strong common defence policy action. There seems to be a trend, then, towards more cooperation at the EU level, even, or notably, on issues where the EU mandate is de jure rather limited. In addition, more (implementing) power is being transferred to the European Commission, such as for the regulations on digital platforms, the DMA and DSA, and the common digital infrastructure for an EU COVID pass without borders. There appears to be a growing EU-level de facto implementing competence. Also striking is that decision-making seems to be accelerating. The COVID Recovery Fund was set up within a few weeks (Van Middelaar, 2021). Decisions on Ukraine were a matter of a week. Even difficult and ground-breaking legislation like the DMA and DSA was decided faster than Europe is used to, namely in just over one year, while negotiations often take at least two to three years. Finally, where the EU mandate (or rather EU law as primus) is being undermined, the sinners, Hungary and Poland, must answer to the European courts. These days we therefore see rather more Europe than less. Recently, the former President of the European Council, Herman Van Rompuy, stated that European sovereignty is the way forward in this time of successive and almost existential crises. 'European sovereignty' is a term that can be used nowadays. Only four years ago, this was quite different: in 2018, when Jean-Claude Juncker, then President of the European Commission, gave his State of the Union speech the title "The hour of European sovereignty" (Juncker, 2018), many commentators in Europe lambasted him.

The EU's digital agenda also reflects this trend to strengthen European sovereignty. In the last two years, initiatives have been launched on semiconductors (the so-called EU Chips Act) and digital identity (the European Digital Wallet) that are explicitly driven by the desire for more European strategic autonomy. Other EU legislation, for digital platforms (the Digital Markets Act and Digital Services Act) and cybersecurity (the Network and Information Security Directive), is more implicitly or partly motivated by strategic autonomy. European investment in quantum technology, supercomputers and data spaces is increasingly moving towards strategic autonomy.

Nevertheless, there is reason to be concerned about Europe's weaknesses. The fact that EU countries are working together now does not mean that they will be working together tomorrow. Speeding up now could quickly end in more divisions, for example about the war against Ukraine. Creating more independence for Europe today does not mean that there will not be new dependencies tomorrow. It is quite possible that Europe will soon become more dependent on the US because of the latter's military, digital-economic and now also energy might. In the digital domain, the EU should not harbour illusions. Even though Europe is strong in some areas (ASML is often cited as an example of Dutch and European strength in semiconductors), it is and remains highly dependent on the US and China for digital products and services. A recent study by the Konrad Adenauer Stiftung shows that over the past decade the Digital Dependency Index for Europe, despite all initiatives, has remained consistently high (Mayer & Liu, 2022).

⁹ Article 2 of the Treaty on the European Union: 'The Union is founded on the values of respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities.'
¹⁰ Referring to compliance with the Digital Markets Act (DMA) and Digital Services Act (DSA).
¹¹ This leads to compromise concepts such as differentiated integration and two-speed Europe (EU IDEA Consortium, n.d.).
¹² Art. 4(2), Treaty on European Union.

5 Conclusion: Putting Europe on the Map, Internally and Externally What then is a positive but nuanced vision of strategic autonomy for Europe, a vision that brings strength at home and strength in the world? From the preceding analysis, two possible contributions emerge: Europe’s Democracy  Europe undeniably has many weaknesses, amongst them those mentioned earlier: lack of scale-up venture capital, a gap in bringing innovation to the market, digital dependencies, and above all in its democratic and decision-­ making functioning, not to mention its well-known and persistent fragmentation. Strengths that give Europe a place in the world were also mentioned, including its internal market, legal system, values, and research capabilities. One more strength can be the European way of dealing with the citizen-state relationship. In Europe, it is accepted and even invited to question the balance of citizen and government, in any aspect of economics, society and democracy. The European way to democracy includes to ultimately finding a solution, even if it may take a long time and much compromising. Increasingly, Europeans take into account that they are not alone in the world. This characteristic approach to democracy contributes to external and internal legitimacy. Education could strengthen this underemphasised aspect of European identity take upon itself to explain to young people how the permanent democratic discourse fits with European sovereignty. Further research is needed to understand the actual and potential interplay of education and sovereignty in the digital age. This should address amongst others the risks of political capture and dangers of populism, the role of ‘digital in education’ in terms of socialization, and ‘educationality’,13 i.c., the influence of digital platforms companies on power and norms in education. Leading in General Common Interest  Europe is probably more credible than the US or China to put global interest on the agenda. Europe has learned to live with cooperation in situations of a large diversity of interests and cultures, and has developed a wide range of forms of cooperation. Working together for a global interest may more naturally be associated with Europe than any other geopolitical bloc. A leadership role for Europe could manifest itself in building or maintaining selected global commons and pursuing specific global interests. As argued, doing so may at 13


This holds not only in certain digital matters, but applies also to climate, health, and safety. In the digital world, moreover, it helps that there is still at least as much to develop as there is to inherit from the short digital past. Consider developments in AI, quantum, robotics, and so on, all emerging areas in which global interests can be formulated and advocated.

Despite the optimistic tone of the preceding paragraphs, several reality checks are in order. For instance, Europe's approach to strategic autonomy would certainly be based on a combination of strategic partnerships with like-minded partners, working together on global agendas, and otherwise in-depth risk management. However, what a balanced combination of these approaches looks like is not evident at all. Still lacking is a systematic analysis framework for strategic autonomy approaches that takes into account both desirable and undesirable strategic dependencies and allows for comprehensive joined-up policy development. This is an important topic of research, much needed by policymakers. Another set of reality checks follows from the final paragraph below.

Limits to Social and Technological Construction  Developing sovereignty in the digital age is both social construction and technological construction (Timmers, 2022a, b). Around the turn of the millennium, Lawrence Lessig coined the phrase "code is law" (Lessig, 1999). At that time, the 'code' was the Internet's software, which de facto functioned as law and imposed conditions on potential Internet regulation. Today we would add: 'law is code', because social constructs such as law and values increasingly condition technology. Positioned as both 'law' and 'code', as combining social norms and values with technology, sovereignty in the digital age is a complex and evolving construct. This construct and the related construction processes call for more research. As a case in point, Julie Cohen drew our attention to the mutual and sometimes perfidious influence of technology/business and the legislator (Cohen, 2019). While such research may provide more insight for policy-making in European strategic autonomy, it may also painfully expose the limits of such policy-making, and show that sovereignty remains vulnerable, in line with Nick Bostrom's warning that we are developing technology that we can no longer control (Bostrom, 2019). Europe will have to understand and recognize the limits of its power and ability to deliver strategic autonomy.

References

Bickerton, C., Brack, N., Coman, R., & Crespy, A. (2022). Conflicts of sovereignty in contemporary Europe: A framework of analysis. Comparative European Politics, 20(3), 257–274. https://doi.org/10.1057/s41295-022-00269-6
Biersteker, T. (2012). State, sovereignty and territory. In W. Carlsnaes, T. Risse, & B. A. Simmons (Eds.), Handbook of international relations. SAGE Publications Ltd.
Biolcati, F., & Ladini, R. (2022). On values as they evolve: A presentation of the World Values Survey and the European Values Study. Intercultura, 105(II Trimester), 11–18.


Bostrom, N. (2019). The vulnerable world hypothesis. Global Policy. https://doi.org/10.1111/1758-5899.12718
Bradford, A. (2020). The Brussels effect: How the European Union rules the world. Oxford University Press.
Buijs, G., & Bosman, P. (Eds.). (2022). Ontwaken uit de geopolitieke sluimer (Vol. 2). Eburon. https://eburon.nl/product/ontwaken-uit-de-geopolitieke-sluimer/
Cohen, J. E. (2019). Between truth and power: The legal constructions of informational capitalism. Oxford University Press.
de Tocqueville, A. (1864). Comment les Américains combattent l'individualisme par la doctrine de l'intérêt bien entendu. In De la démocratie en Amérique, tome II (pp. 198–203). Michel Lévy.
Espinoza, J. (2022, April 26). EU warns Elon Musk over Twitter moderation plans. Financial Times. https://www.ft.com/content/22f66209-f5b2-4476-8cdb-de4befffebe5
EU IDEA Consortium. (n.d.). EU IDEA project. EUIDEA. Retrieved July 4, 2022, from https://euidea.eu/
European Commission. (2021, October 18). The EU Digital COVID Certificate: EU has set a standard. Press Release. https://ec.europa.eu/commission/presscorner/detail/en/ip_21_5267
Floridi, L. (2020). The fight for digital sovereignty: What it is, and why it matters, especially for the EU. Philosophy & Technology, 33(3), 369–378. https://doi.org/10.1007/s13347-020-00423-6
Fukuyama, F. (2006). The end of history and the last man (reissue edition). Free Press.
Gallie, W. B. (1956). Essentially contested concepts. Proceedings of the Aristotelian Society, 56, 167–198.
Glasze, G., Cattaruzza, A., Douzet, F., Dammann, F., Bertran, M.-G., Bômont, C., Braun, M., Danet, D., Desforges, A., Géry, A., Grumbach, S., Hummel, P., Limonier, K., Münßinger, M., Nicolai, F., Pétiniaud, L., Winkler, J., & Zanin, C. (2022). Contested spatialities of digital sovereignty. Geopolitics, 1, 1–40. https://doi.org/10.1080/14650045.2022.2050070
Juncker, J.-C. (2018, September 12). State of the Union 2018: The hour of European sovereignty. https://ec.europa.eu/info/priorities/state-union-speeches/state-union-2018_en
Kello, L. (2017). The virtual weapon and international order. Yale University Press.
Lessig, L. (1999). Code: And other laws of cyberspace. Basic Books.
Mayer, M., & Liu, Y.-C. (2022). Digital autonomy? Measuring the global digital dependence structure (p. 29). Konrad Adenauer Stiftung.
Moerel, L., & Timmers, P. (2021). Reflections on digital sovereignty. EU Cyber Direct, Research in Focus. https://eucyberdirect.eu/research/reflections-on-digital-sovereignty
Ostrom, E. (2015). Governing the commons: The evolution of institutions for collective action (Canto classics ed.). Cambridge University Press.
Sheikh, H. (2016). De opkomst van het Oosten. Boom uitgevers Amsterdam. https://www.boomfilosofie.nl/product/100-400_De-opkomst-van-het-Oosten
Timmers, P. (2019). Ethics of AI and cybersecurity when sovereignty is at stake. Minds and Machines, 29(4), 635–645. https://doi.org/10.1007/s11023-019-09508-4
Timmers, P. (2021, October 1). Opinie | Hoe Europa naar zelfstandigheid streeft. NRC Handelsblad. https://www.nrc.nl/nieuws/2021/10/01/hoe-europa-naar-zelfstandigheid-streeft-a4060379
Timmers, P. (2022a). The technological construction of sovereignty. In Perspectives on digital humanism (pp. 213–218). Springer. https://doi.org/10.1007/978-3-030-86144-5_28
Timmers, P. (2022b). Strategische autonomie voor Europa: Sterkte in eigen huis en een eigen plaats in de wereld – illusie of realisme. In Geopolitiek en de EU. Centrum Ethos/Thijmgenootschap.
Van Middelaar, L. (2021). Een Europees pandemonium: Kwetsbaarheid en politieke kracht. Historische Uitgeverij.
Wolff, G. (2020, February 17). Europe may be the world's AI referee, but referees don't win. POLITICO. https://www.politico.eu/article/europe-may-be-the-worlds-ai-referee-but-referees-dont-win-margrethe-vestager/

Chapter 9

Saving Human Lives and Rights: Recommendations for Protecting Human Rights When Adopting COVID-19 Vaccine Passports

Emmie Hine, Jessica Morley, Mariarosaria Taddeo, and Luciano Floridi

Abstract  The SARS-CoV-2 (COVID-19) pandemic has caused social and economic devastation. As the milestone of two years of 'living with the virus' approaches, governments and businesses are attempting to develop means of reopening society whilst still protecting public health. However, developing interventions – particularly technological interventions – that find a safe, socially acceptable, and ethically justifiable balance between these two seemingly opposing demands is extremely challenging. There is no one right solution, but the current most popular 'solution' is the so-called 'COVID-19 Vaccine Passport' (also known as COVID-19 passes or certificates), the use of which may be supported by both human rights and international public health law if they are designed and implemented appropriately. (We use the term 'Vaccine Passport' because it has been adopted by the popular press. Though it has been used in many ways, here we use it generically to refer to a document that certifies that an individual has been vaccinated against COVID-19 and on that basis grants the bearer more liberties than those who have not been vaccinated. Later, we will discuss why it is necessary to move beyond considering only vaccinations, which informs our preferred term of 'COVID-19 Status Pass'.) We set out to answer the following questions: how should governments and businesses assess the risks in light of human rights, public health ethics, and digital ethics concerns which emerge from developing and deploying COVID-19 Vaccine Passports? What design decisions should businesses make when developing COVID-19 Vaccine Passports to help ensure they respect human rights, and both public health and digital ethics? Do the implications for human rights, public health, and digital ethics vary depending on where and when COVID-19 Vaccine Passports are used? What are the rights and powers of the individual to object to or seek remedy for the use of COVID-19 Vaccine Passports? How can the risks of inequalities and social division derived from the deployment of COVID-19 Vaccine Passports be avoided or mitigated? We conducted a literature review and documentary analysis, supplementing our findings with news articles where appropriate. The following pages report our results, concluding with a series of actionable recommendations for businesses, national governments, and supranational organisations.

Keywords  COVID-19 · Digital ethics · Human rights · Public health · Vaccine passport

Authors Emmie Hine and Jessica Morley have contributed equally to this chapter.

E. Hine · L. Floridi
Oxford Internet Institute, University of Oxford, Oxford, UK

J. Morley (*)
Oxford Internet Institute, University of Oxford, Oxford, UK
Nuffield Department of Primary Care Health Sciences, University of Oxford, Radcliffe Observatory Quarter, Oxford, UK
e-mail: [email protected]

M. Taddeo
Oxford Internet Institute, University of Oxford, Oxford, UK
Alan Turing Institute, British Library, London, UK

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
F. Mazzi (ed.), The 2022 Yearbook of the Digital Governance Research Group, Digital Ethics Lab Yearbook, https://doi.org/10.1007/978-3-031-28678-0_9

1 Introduction: Saving Human Lives and Rights

More than 5.4 million people have died since the first case of SARS-CoV-2 (COVID-19) was reported to the World Health Organisation in December 2019. Many of the 300 million infected now live with additional morbidity from 'Long Covid'. Worldwide, healthcare systems are struggling to deal with the knock-on impacts on 'routine care' and an increase in demand for mental health services. And many businesses, particularly hospitality, entertainment, and travel businesses, have been dramatically affected by lockdown and social distancing measures. As the milestone of two years of 'living with the virus' passes, governments and businesses are keen to develop technological interventions that may enable the reopening of society whilst still protecting public health. Proposed interventions include thermal scanning, contact tracing apps, and – most recently – COVID-19 'Vaccine Passports' (also known as COVID passes, COVID certificates, or other similar terms), which demonstrate an individual's COVID-19 vaccination status to allow for access to certain spaces. Governments hope that COVID-19 Vaccine Passports may be used to facilitate international travel and allow for more civil liberties, for example, access to public venues, large gatherings, or return to work, without compromising personal safety and public health (Osama et al., 2021). The public health principle of least infringement supports the use of such technologies and suggests that their use is consistent with human rights: if an individual is deemed to be a low infection risk and thus a low threat to public health, continuing to impose restrictions on their civil liberties would be an infringement of their human rights and unethical (Persad & Emanuel, 2020).


However, there is a legitimate concern that, if COVID-19 Vaccine Passports are introduced and become necessary for most parts of everyday life, they could be interpreted as making COVID-19 vaccination compulsory, or perceived as being overly coercive (Schlagenhauf et al., 2021), which could undermine various rights, including autonomy and the right to a private family life (Navin & Attwell, 2019), or be manipulative of an individual's right to make a free and informed choice to consent to vaccination (Nilsson, 2021). Yet, if designed and used correctly, such interventions have the potential to save many lives and much suffering, as demonstrated by precedent-setting examples such as the International Certificate of Vaccination or Prophylaxis (ICVP), often called the 'Yellow Fever Certificate', which has been in place for decades to help prevent the spread of Yellow Fever across international borders.

Designing and deploying interventions in a socially acceptable and ethically justifiable way is challenging. Luckily, human rights, public health ethics, and digital ethics1 provide some assistance through frameworks that can inform policymakers' decisions about complex health issues (Nixon & Forman, 2008). In this chapter, we focus on Vaccine Passports as a case study to illustrate the types of decisions that policymakers in both government and business contexts may have to make, and set out to answer the following questions:

1. How should governments and businesses address the ethical and social risks posed by the development and deployment of COVID-19 Vaccine Passports?
2. Do the implications for human rights, public health, and digital ethics vary depending on where and when COVID-19 Vaccine Passports are used?
3. What design decisions should businesses make when developing COVID-19 Vaccine Passports to help ensure they respect human rights, and both public health and digital ethics?
4. How can the risks of inequalities and social division derived from the deployment of COVID-19 Vaccine Passports be avoided or mitigated?

1. Human rights are meant to guide Government action and apply broadly to the protection of human interests; public health ethics guide the actions and work of bodies and individuals engaged in the protection of public and individual health (Gruskin & Dickens, 2006); digital ethics is the study of moral problems relating to digital technologies and inputs (Floridi, 2018). The three are complementary and all crucial to consider, as Vaccine Passports are digital technologies in the public health sphere that may impact human rights.


2 The Importance and Influence of Human Rights and International Health Regulations

International human rights law, represented for this discussion by the Council of Europe Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR), is designed to protect people through the safeguarding of a range of civil, political, economic, social, and cultural rights, including: the right to freedom of movement; the right to liberty and security; the right to freedom of thought, conscience, and religion; the right to freedom of expression; the prohibition of discrimination; and the right to privacy and a private life. In ordinary circumstances, rights are considered to be both equal and inalienable (Nixon & Forman, 2008). In other words, no one right is more important than any other, and none can be taken away. Nevertheless, in exceptional circumstances (e.g., in time of war), it is also recognised and accepted that some trade-offs might be unavoidable and need to be made. The ECHR acknowledges this necessity in Article 15, which permits Member States to derogate from the ECHR in "time of war or other public emergency threatening the life of the nation" for the purpose of protecting the right to life, which itself cannot be derogated (European Court of Human Rights, 2010). Notably, this applies at a level of abstraction that considers groups and the common good, not just individuals and all their individual interests. For example, since at least 1905 and the landmark US Supreme Court decision in Jacobson v. Massachusetts, it has been clear that, in the United States, the Government has the right to interfere with individual liberty if there is a valid public (i.e., group-wide) health reason to do so (Lantos & Jackson, 2013). Indeed, the Court ruling states:

The liberty secured by the Constitution of the US to every person within its jurisdiction does not impart an absolute right in each person to be, at all times and in all circumstances, wholly freed from restraint. There are manifold restraints to which every person is necessarily subject for the common good (Buchanan, 2008).

This ruling has been globally influential. It is widely considered the cornerstone of public health ethics as a field (Buchanan, 2008). Following its logic, the ECHR explicitly permits the curtailment of the rights to liberty and security for the prevention of spreading disease. It states that the freedoms of movement, association, expression, thought, conscience, religion, and private life can be temporarily restricted for the protection of health or the rights of others (European Court of Human Rights, 2010). Consequently, restrictions on these rights in the name of public health are justifiable, provided they meet the Siracusa Principles, introduced to human rights law in the 1980s to indicate when rights can be limited in service of public health:

• The restriction is provided for and carried out in accordance with the law;
• The restriction is in pursuit of a legitimate objective of general interest;
• The restriction is strictly necessary in a democratic society to achieve the objective;


• There are no less intrusive and restrictive means available to reach the same objective;
• The restriction is based on scientific evidence and not drafted or imposed arbitrarily, i.e., in an unreasonable or otherwise discriminatory manner.

Within this framework, measures like mandatory quarantine are permissible under the ECHR, as are some uses of COVID technologies that apply without discrimination,2 like thermal scanning or automatic mask detection (provided accommodations are made for those unable to wear masks). Technologies that may otherwise be considered privacy violations are also permissible, like the aggregation of cell phone location or Bluetooth data for contact tracing3 or to enforce quarantines (Doffman, 2020). Indeed, one may argue that, precisely because preserving citizens' lives is implicitly the top priority of States, governments have an obligation to use COVID technologies that can protect the health and thus the lives of citizens. Not doing so would constitute an infringement of the human right to life and a reprehensible ethical omission.

In addition to this support from the ECHR, International Health Law – most notably the International Health Regulations 2005 (IHR) – clearly provides support for mandatory vaccination and the use of Vaccine Passports in specific instances. The IHR, which is grounded in human rights law and public health ethics, requires "full respect for the dignity, human rights and fundamental freedoms of persons" but states that its goal is the "protection of all people of the world from the international spread of disease", which foreshadows the possibility of imposing temporary infringements on some human rights (Fidler & Gostin, 2006). The IHR provides for the curtailment of international travel during public health crises and for proof of vaccination for international travellers, provided the measures respond to a pressing public or social need and accord with the Siracusa Principles mentioned above (Fidler & Gostin, 2006; World Health Organization, 2005). Note that the principle of least intrusion implies a time limit to the exceptional measures, which cannot be in place for longer than strictly necessary. The overall approach permits measures like the International Certificate of Vaccination or Prophylaxis (ICVP), often called the 'Yellow Fever Certificate', to be used for international travel for the purpose of protecting people travelling to areas endemic for Yellow Fever and reducing the risk of its introduction into non-endemic areas (Pavli & Maltezou, 2021). The IHR also empowers the Director-General of the WHO to issue temporary recommendations informed by the advice of an emergency committee, which can include requirements for vaccination. This happened, for example, in 2014, when the declaration of polio as a public health emergency of international concern resulted in a requirement for proof of polio vaccination for people travelling from areas with high numbers of cases (Wilson et al., 2016).

2. Confinement may disproportionately impact women, with several countries reporting a 30% rise in domestic violence during the pandemic. To attempt to curtail this, France created pop-up counselling centres in supermarkets, but these measures are likely insufficient (Lebret, 2020).
3. For a fuller discussion on the nuances of the ethics of contact tracing technology, see Morley et al. (2020).


Thus, according to the ECHR and the IHR, the use of COVID-19 Vaccine Passports appears legally permissible and ethically justifiable, despite the fact that their use may be interpreted as a strong incentive to vaccinate (even if they do not make COVID-19 vaccines mandatory) because of how they gatekeep access to spaces. However, it is important to note that both the ECHR and the IHR require that public health measures be implemented in a non-discriminatory fashion (World Health Organization, 2005) in order for this legal, social, and ethical justifiability to be maintained.

3 The Risk of Discrimination

If COVID-19 Vaccine Passports cannot be designed, developed, and used in a way that minimises the risk of discrimination, then both the human rights arguments and the public health arguments no longer apply. This should be a major consideration for governments and businesses responsible for developing COVID-19 Vaccine Passports. However, finding practical solutions that do not discriminate against the poor, ethnic minorities, the less technically literate, specific religious groups, children and young people, and people from low- and middle-income countries is not straightforward (Osama et al., 2021). This is because inequalities can arise at the macro- (international), meso- (domestic), and micro- (individual) levels:

• At the macro-level, the primary risk of discrimination stems from the fact that access to vaccines, and to vaccines that are equally effective, is uneven. This means that, if COVID-19 Vaccine Passports become mandatory for international travel, citizens of countries with less access to COVID-19 vaccines will be effectively confined to their home countries until vaccine access becomes more equitable. Additionally, if different vaccines are treated differently for the purposes of a COVID-19 Vaccine Passport, this may entrench inequality among those who have no choice in what vaccine they receive. This imposes significantly stricter restrictions on the freedom of movement of these populations – a restriction that may not always be proportionate, especially when other options like quarantining and testing upon arrival remain available. Furthermore, as new virus variants emerge and debate grows over whether 'booster' doses are necessary to be considered 'fully vaccinated', the disparity in vaccine access could potentially grow larger. Some workplaces and universities in the United States are taking definitions into their own hands and requiring employees and students to get boosters (Anthes & Weiland, 2021). Meanwhile, concerns are growing that vaccine supply to low- and middle-income countries will be further disrupted as high-income countries consume more doses than previously expected (Banco et al., 2021).


• At the meso-level, different implementations by various social actors within a State may cause inequality in access to spaces. If COVID-19 Vaccine Passports are used as the 'key' to access too many public and quasi-public spaces, individuals who are unable to be vaccinated for medical reasons (e.g., those who are allergic), or who have not yet been offered a vaccine, may find themselves effectively 'locked out' of large parts of society. Because vaccine roll-out programmes are often age-stratified, younger members of society, including children, are especially vulnerable to the latter risk. Furthermore, many countries have not yet approved vaccines for children, creating international inconsistency (Reuters, 2021). At its most extreme, this could result in society, domestically and internationally, being stratified into the 'immunoprivileged' and the 'immunodeprived' (Liew & Flaherty, 2021). The risks of this divide emerging along existing fault lines related to ethnicity and religion are particularly high. Vaccine hesitancy is typically more pronounced amongst ethnic minorities, yet ethnic minorities are also at greater risk of severe COVID-19 (Nilsson, 2021). There is considerable risk that, without regulation, unethical or discriminatory businesses could implement COVID-19 Vaccine Passports with the explicit aim of excluding patrons from within these groups with the excuse of concerns for public health.

• At the micro-level, if the passports are made available only via digital technologies, access to specific technologies may be a significant barrier to using digital COVID-19 Vaccine Passports. This may be due to the cost of smartphones or other factors affecting device and wireless penetration, lower levels of digital literacy, or potentially discriminatory differences in the affordances offered by digital devices to different members of the population (for example, facial recognition has been shown to work less effectively for those with darker skin). Apps that only work on specific devices, especially those that only work on the latest models of specific devices and/or those that rely on mobile internet connectivity (e.g., 3G, 4G, or 5G), are problematic, as are those that rely on biometrics for identity verification. Any businesses or governments relying solely on digital technologies are thus unlikely to meet the requirement of 'equal treatment' necessary to justify their use under the ECHR and public health law.

Overall, the merits of COVID-19 Vaccine Passports are undeniable, but ethical concerns remain about their potential to result in problematic social divides. Fortunately, as Tanner and Flood (2021) argue, good policy and use of COVID-19 Vaccine Passports need not be flawless; those responsible for passport design and implementation strategy can take initiatives to ensure that their use is as fair and unoppressive as possible. Those responsible for making design decisions should always aim to make decisions that result in the least infringement, to ensure that the restriction is proportionate. They should also aim to achieve a balance between maximising public benefit and minimising individual harm and, wherever possible, to minimise the extent to which the design is discriminatory. For instance, COVID-19 Vaccine Passport apps should always seek to abide by the data minimisation principle in the GDPR by capturing only the data needed; they should not track individuals' movements, nor try to sell users other products either through endorsements or advertisements. In contrast, they should be device-agnostic; interoperable; make use of privacy-preserving technologies; prioritise cybersecurity; and log all data access requests in real time, alerting users when their vaccination record has been checked and by whom, and providing them with a mechanism for querying why the request was made.


They should also have non-digital counterparts that those without access to smartphones can use, and 'accept' all WHO-approved vaccines to minimise opportunities for proxy-discrimination based on the origin of specific vaccines.

However, for these principles to be observed, the design must go beyond the conventional understanding of 'Vaccine Passports'. So that those who are unable to obtain a vaccination are not excluded from accessing spaces, the design should allow users to 'prove' their COVID-19 status through other means (e.g., lateral flow test, PCR test, proof of quarantine, or antibody test).4 This informs our preferred term of 'COVID-19 Status Pass', as these documents attempt to indicate the user's current COVID-19 infection status rather than vaccination status.5

These design decisions can help ensure COVID-19 Vaccine Passports meet the criteria set out in the ECHR and IHR. Crucially, they can help ensure businesses are building the right 'infraethics' (Floridi, 2017) for the use of COVID-19 Vaccine Passports, i.e., the pro-ethical infrastructure that helps embed the right values in their design and use (values that maximise benefit and treat people equally (Persad & Emanuel, 2020)), whilst avoiding the negative consequences of autonomy-violating levels of coercion or 'nudging' that could result from the improper design of COVID-19 Vaccine Passports. For example, features such as whether apps track users, pass data on to third parties, include advertisements, include rewards for downloading (e.g., free cinema tickets), include information about why vaccination is important, and more, all influence the extent to which the Passport in question is perceived to be coercive and, importantly for this discussion, the extent to which its mandatory use infringes on people's liberties, for instance, their privacy. In January 2022, French President Emmanuel Macron triggered a backlash after saying that he wanted to remove the testing option from the French COVID-19 Vaccine Passport to "piss [the unvaccinated] off", showing the risk of creating COVID-19 Vaccine Passports that are perceived to be overly coercive ('Covid', 2022).

All this does not mean that COVID-19 Vaccine Passports cannot be used to encourage the uptake of vaccination. We have already argued that the use of COVID-19 Vaccine Passports (and indeed mandatory vaccination) can be in keeping with human rights and public health law. In contexts where COVID-19 Vaccine Passports are the least restrictive mechanism for protecting public health and saving lives, the decision not to use them might be unjustified, harmful, and unethical.

4. It is worth noting that none of these methods of 'proving' COVID-19 vaccine status is faultless. All have issues of reliability; therefore, it is impossible to completely eradicate the risk of virus transmission.
5. For instance, from October 15th, all workers, school staff, and those using train stations, cinemas, restaurants, gyms or swimming pools in Italy will need to show a COVID-19 'Green Pass' certificate to demonstrate proof of vaccination, a negative test, or recovery from the virus. The pass is available both digitally and on paper (Covid: Italy to Require All Workers to Show 'Green Pass' Certificate, 2021).


It is, however, how COVID-19 Vaccine Passports are used to encourage vaccination that matters. This is because there is a clear distinction between intolerable paternalism and tolerant paternalism. The former operates at the structural level of a choice architecture, leaving no room for flexibility and pushing people in one direction in an overly coercive manner. The latter operates at the informational level of a choice architecture, giving individuals a degree of flexibility depending on the context, and is thus more supportive of their autonomy and less infringing on their other civil liberties. Think of the difference between a speed bump and a speed camera. Both aim to slow cars down on the road, but one provides no flexibility – a speed bump will slow even an ambulance – whilst the other can be 'ignored' when necessary; an ambulance can speed past a camera when required (Floridi, 2016). COVID-19 Vaccine Passports that abide by this principle still paternalistically alter an individual's behaviour but do so transparently (for example, by only being used in locations where the justification is clear); where individuals have a free choice not to go if they wish not to be vaccinated (for example, not insisting on their use in essential locations like supermarkets); and by still offering individuals a choice about how to obtain the passport (for example, testing negative or testing for antibodies).

Value flexibility is an essential tenet of public health and digital ethics (Kass, 2001), and so businesses and governments designing and using COVID-19 Vaccine Passports should design them to encourage vaccination, wherever possible, through informational channels – those that encourage uptake of vaccination (e.g., because it is easier than getting tested repeatedly) – rather than structural channels that overly limit an individual's choices, and should deploy them alongside other options, e.g., quarantining, social distancing, and testing.

Below, we set out a series of recommendations specifically designed to operationalise the concept of tolerantly paternal COVID-19 Vaccine Passports. These take the following form: '[this specific type of organisation] should do [this specific thing] so that [this specific outcome] can be achieved'. In instances where there is no supranational body to engage in the development process, recommendations to supranational bodies can be seen as recommendations to the government engaged in developing a COVID-19 Vaccine Passport or, as we discuss, Status Pass.

4 Recommendations for Designing, Developing, and Deploying COVID-19 Vaccine Passports

1. Actors, including supranational bodies, governments, and businesses, designing or developing COVID-19 Vaccine Passports should ensure that they comply with privacy and data protection legislation, and make appropriate use of privacy-enhancing technologies.


2. Actors designing or deploying COVID-19 Vaccine Passports should take appropriate cybersecurity precautions to ensure that individuals' health data are not compromised or misused.

3. Actors designing or developing COVID-19 Vaccine Passports should ensure that they can be issued to individuals with proof of full vaccination, or of a recent COVID-19 test, or of quarantine, or of other necessary mitigation measures, so that what should more accurately be called COVID-19 Status Passes6 can be issued to those who have not yet been offered a vaccine and to those who are unable to get a vaccine. Passes issued on the basis of tests should be time-limited according to the accuracy of the test.7 Ideally, governments should work in parallel to increase and improve testing capacity. Supranational bodies should help coordinate the availability of testing internationally.

6. We will continue to use the term Vaccine Passport, but it should be understood as encompassing the more inclusive elements of a Status Pass.
7. These durations should be decided upon in concert with public health officials. As an example, New York's Excelsior Pass is good for 3 days following a PCR test, and 6 h following an antigen test (Excelsior Pass | COVID-19, 2021).

4. Actors developing COVID-19 Vaccine Passports should ensure that digital and paper versions are available to mitigate the risks of a 'digital divide'. Paper versions could include security-enhanced stickers that can be placed on existing IDs, or certificates issued following identity verification for those without a form of physical ID.

5. National governments should, as suggested by the ECHR, ensure that there are mechanisms in place for individuals harmed by the use of COVID-19 Vaccine Passports (or other COVID-19 technology) to seek redress.

6. Supranational bodies should engage governments to agree about how public health ethics should be embedded in a COVID-19 Vaccine Passport system so that there is room for value pluralism, but as little room as possible (ideally none) for manipulating the system in ways that result in the direct surveillance or control of populations or individuals. This process should involve the engagement of stakeholders from citizens to businesses and governments in a form of international deliberative democracy.

7. Actors designing or developing COVID-19 Vaccine Passports should design them to be 'trustworthy' by avoiding inappropriate structural nudging and fostering trust in vaccines, for example, by embedding facts about the vaccine and the passport in the user interface.

8. Actors designing, developing, or deploying COVID-19 Vaccine Passports, and businesses providing them, should be prevented from designing them to act as advertising, tracking, or targeting platforms, as this would increase the extent to which they infringe on people's human rights.

9. Actors designing or developing COVID-19 Vaccine Passports should ensure that they 'accept' all vaccines approved by the WHO – and should keep this list updated – to minimise opportunities for misuse of COVID-19 Vaccine Passports for reasons of international politics. They should also take into account the public health situation and vaccine dose accessibility when considering whether to require boosters and which boosters are accepted for Vaccine Passport eligibility.


10. Actors deploying COVID-19 Vaccine Passports should ensure that the only motivation behind their creation is the protection of public health, not insurance or liability protection. In particular, COVID-19 Vaccine Passports should not be used to recoup economic costs incurred during the pandemic. This would damage public trust and create an incentive for prolonging their use on a basis other than public health.

11. Businesses providing COVID-19 Vaccine Passports for use by other entities should consider where their 'apps' are advertised online and seek to minimise the risk that they are weaponised or politicised by 'anti-vax' campaigners.

12. Actors deploying COVID-19 Vaccine Passports voluntarily (that is, if it is not a legal requirement) should publish (for example, clearly on their website) an outline of why they have decided to introduce the use of the passport. They should keep this under review and update it when appropriate.

13. Supranational bodies (e.g., the EU, WHO, and UN) should establish a standard set of principles to evaluate COVID-19 Vaccine Passports in order to ensure that individual rights are respected across borders. Since paternalistic behaviour that limits individual freedoms must be justified, the Siracusa Principles – which clarify when derogations of human rights are acceptable in the name of public health – are a likely candidate for this evaluation.

14. Supranational bodies should agree on a standardised set of use cases for COVID-19 Vaccine Passports based on a comprehensive evaluation of human rights, public health ethics, and digital ethics, and involving all key stakeholders – including the public – through deliberative democracy processes, to support equality.

15. Supranational bodies should establish clear standards for the continued use of COVID-19 Vaccine Passports to avoid variations across countries. The standards should include what metrics are being used to justify requirements for keeping COVID-19 Vaccine Passports in use and should consider the different roles of businesses that design, provide, and/or deploy COVID-19 Vaccine Passports. The standards should be based on a clear set of criteria balancing risks and benefits and be designed to ensure that COVID-19 Vaccine Passports are used only when this is proportionate. This may mean that COVID-19 Vaccine Passports are not in use in all countries at all times, so care should be taken to ensure that their use accords with the principle of least restriction to avoid a disproportionate impact on affected countries. Lessons can be learned from the way the WHO assesses the need for and use of Yellow Fever Certificates.

16. National governments should issue regulations to ensure the standardisation of usage across a country. Specifically, contexts in which the use of COVID-19 Vaccine Passports may be advisable include:


• International travel;
• Public and semi-public spaces where there may be vulnerable individuals, for example, hospitals and nursing homes;
• Discretionary spaces where there may be no vulnerable individuals, but other less restrictive measures (especially social distancing, masking, and remote access) may be difficult to implement or insufficiently efficacious; for example, gyms, nightclubs, pubs, and restaurants;
• Essential spaces where there may be no vulnerable individuals, but other less restrictive measures (especially social distancing, masking, and remote access) may be difficult to implement or insufficiently efficacious; for example, some public transport at peak times, such as underground trains, or certain places of work, like hairdressers.

17. National governments should establish mechanisms through which members of the public can report the improper use of COVID-19 Vaccine Passports, for example, if they are being used as a means of barring specific groups from accessing venues.

18. National governments, especially public health bodies, should appoint a referent responsible for monitoring the long-term impact of COVID-19 Vaccine Passports (and other COVID-19 technologies). When the balance between benefit and harm shifts, and there is no longer a justifiable need to continue using Vaccine Passports, their use should be unilaterally and immediately discontinued. Similarly, if it is identified that Vaccine Passports are having significant unforeseen and unjustifiable negative impacts on specific groups of the population, there should be mechanisms in place to rectify this situation as quickly as possible.

5 Conclusion

In the context of a global public health emergency, COVID-19 Vaccine Passports are ethically and legally permissible under relevant human rights and international health regulations, provided policymakers ensure they are designed, developed, deployed, and used in accordance with the least infringement principle and the following values: "the maximisation of benefit; priority to the least advantaged; and treating people equally" (Persad & Emanuel, 2020). This requires them to be designed to act as what we call a 'Status Pass', and we recommend the discussion move beyond the misleading term 'Vaccine Passport'. If this can be achieved, then their use is ethically sound and advisable, as it will help speed the return to normality whilst minimally infringing on human rights.

It should be borne in mind that the complexity of COVID-19 vaccinations and the ever-fluctuating nature of the pandemic situation mean that some inequitable outcomes from the use of COVID Status Passes may be inevitable, given that resolving the underlying social, economic, and health inequalities between individuals and nations is not a short-term project (Tanner & Flood, 2021).


This puts significant pressure on supranational bodies, national governments, businesses, and individuals to weigh up a complex set of interacting factors and make difficult decisions about appropriate trade-offs. To help those responsible for making such decisions, we have set out 18 recommendations designed to help ensure that the use of COVID-19 Status Passes is 'pro-ethical', or at least to make the decision-making process more consistent and transparent. Although the recommendations should help, they are far from perfect, and cannot capture all potential harms that may result from the adoption of COVID-19 Status Passes. Thus, technical development must occur in tandem with regular legal and ethical review to ensure that COVID-19 Status Passes are the least restrictive way to reopen society, do not adversely impact already-marginalised populations, and do not create newly marginalised groups (Wilson & Flood, 2021). They should be part of the solution, not part of the problem.

Acknowledgments  Mariarosaria Taddeo serves as non-executive president of the board of directors of Noovle Spa.

Funding  JM's research on health data is funded by a Wellcome Trust doctoral fellowship. JM and EH received partial funding for this project from the Vodafone Institute. MT and LF received no specific funding for this project.

References

Banco, E., Cancryn, A., & Owermohle, S. (2021). Biden officials now fear booster programs will limit global vaccine supply. POLITICO. https://www.politico.com/news/2021/12/31/biden-novavax-production-covid-omicron-526283
Buchanan, D. R. (2008). Autonomy, paternalism, and justice: Ethical priorities in public health. American Journal of Public Health, 98(1), 15–21. https://doi.org/10.2105/AJPH.2007.110361
Doffman, Z. (2020, March 27). COVID-19 phone location tracking: Yes, it's happening now—Here's what you should know. Forbes. https://www.forbes.com/sites/zakdoffman/2020/03/27/covid-19-phone-location-tracking-its-moving-fast-this-is-whats-happening-now/
European Court of Human Rights. (2010). Convention for the protection of human rights and fundamental freedoms.
Excelsior Pass | COVID-19 Vaccine. (2021). Retrieved 1 July 2021, from https://covid19vaccine.health.ny.gov/excelsior-pass
Fidler, D. P., & Gostin, L. O. (2006). The new international health regulations: An historic development for international law and public health. The Journal of Law, Medicine & Ethics, 34(1), 85–94. https://doi.org/10.1111/j.1748-720X.2006.00011.x
Floridi, L. (2016). Tolerant paternalism: Pro-ethical design as a resolution of the dilemma of toleration. Science and Engineering Ethics, 22(6), 1669–1688. https://doi.org/10.1007/s11948-015-9733-2
Floridi, L. (2017). Infraethics – on the conditions of possibility of morality. Philosophy & Technology, 30(4), 391–394. https://doi.org/10.1007/s13347-017-0291-1
Floridi, L. (2018). Soft ethics and the governance of the digital. Philosophy & Technology, 31, 1–8. https://doi.org/10.1007/s13347-018-0303-9
Gruskin, S., & Dickens, B. (2006). Human rights and ethics in public health. American Journal of Public Health, 96(11), 1903–1905. https://doi.org/10.2105/AJPH.2006.099606


Kass, N. E. (2001). An ethics framework for public health. American Journal of Public Health, 91(11), 1776–1782.
Lantos, J. D., & Jackson, M. A. (2013). Vaccine mandates are justifiable because we are all in this together. American Journal of Bioethics, 13(9), 1–2. https://doi.org/10.1080/15265161.2013.815021
Lebret, A. (2020). COVID-19 pandemic and derogation to human rights. Journal of Law and the Biosciences, 7(1). https://doi.org/10.1093/jlb/lsaa015
Liew, C. H., & Flaherty, G. T. (2021). Immunity passports to travel during the COVID-19 pandemic: Controversies and public health risks. Journal of Public Health, 43(1), e135–e136. https://doi.org/10.1093/pubmed/fdaa125
Morley, J., Cowls, J., Taddeo, M., & Floridi, L. (2020). Ethical guidelines for SARS-CoV-2 digital tracking and tracing systems. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3582550
Navin, M. C., & Attwell, K. (2019). Vaccine mandates, value pluralism, and policy diversity. Bioethics, 33(9), 1042–1049. https://doi.org/10.1111/bioe.12645
Nilsson, A. (2021, April). Is mandatory vaccination against COVID-19 justifiable under the European Convention on Human Rights? https://portal.research.lu.se/portal/en/publications/ismandatory-vaccination-againstcovid19-justifiable-under-the-european-convention-on-humanrights(d9d71c57-e712-449b-8a63-b67114efbf8c).html
Nixon, S., & Forman, L. (2008). Exploring synergies between human rights and public health ethics: A whole greater than the sum of its parts. BMC International Health and Human Rights, 8(1), 2. https://doi.org/10.1186/1472-698X-8-2
Osama, T., Razai, M. S., & Majeed, A. (2021). Covid-19 vaccine passports: Access, equity, and ethics. BMJ, n861. https://doi.org/10.1136/bmj.n861
Pavli, A., & Maltezou, H. C. (2021). COVID-19 vaccine passport for a safe resumption of travel. Journal of Travel Medicine. https://doi.org/10.1093/jtm/taab079
Persad, G., & Emanuel, E. J. (2020). The ethics of COVID-19 immunity-based licenses ("immunity passports"). JAMA, 323(22), 2241–2242. https://doi.org/10.1001/jama.2020.8102
Reuters. (2021, December 2). Factbox: Countries vaccinating children against COVID-19. Reuters, Healthcare & Pharmaceuticals. https://www.reuters.com/business/healthcare-pharmaceuticals/countries-vaccinating-childrenagainst-covid-19-2021-06-29/
Schlagenhauf, P., Patel, D., Rodriguez-Morales, A. J., Gautret, P., Grobusch, M. P., & Leder, K. (2021). Variants, vaccines and vaccination passports: Challenges and chances for travel medicine in 2021. Travel Medicine and Infectious Disease, 40, 101996. https://doi.org/10.1016/j.tmaid.2021.101996
Tanner, R., & Flood, C. M. (2021). Vaccine passports done equitably. JAMA Health Forum, 2(4), e210972. https://doi.org/10.1001/jamahealthforum.2021.0972
Wilson, K., Atkinson, K. M., & Bell, C. P. (2016). Travel vaccines enter the digital age: Creating a virtual immunization record. The American Journal of Tropical Medicine and Hygiene, 94(3), 485–488. https://doi.org/10.4269/ajtmh.15-0510
Wilson, K., & Flood, C. M. (2021). Implementing digital passports for SARS-CoV-2 immunization in Canada. Canadian Medical Association Journal, 193(14), E486–E488. https://doi.org/10.1503/cmaj.210244
World Health Organization. (2005). International health regulations (2005) (3rd ed.). https://www.who.int/publications-detail-redirect/9789241580496

Chapter 10

In Defense of Sociotechnical Pragmatism

David Watson and Jakob Mökander

Abstract  The current discourse on fairness, accountability, and transparency in machine learning is driven by two competing narratives: sociotechnical dogmatism, which holds that society is full of inefficiencies and imperfections that can only be solved by better algorithms; and sociotechnical skepticism, which opposes many instances of automation on principle. Both perspectives, we argue, are reductive and unhelpful. In this chapter, we review a large, diverse body of literature in an attempt to move beyond this restrictive duality, toward a pragmatic synthesis that emphasizes the central role of context and agency in evaluating new and emerging technologies. We show how epistemological and ethical considerations are inextricably intertwined in contemporary debates on algorithmic bias and explainability. We trace the dialectical interplay between dogmatic and skeptical narratives across disciplines, merging insights from social theory and philosophy. We review a number of theories of explanation, ultimately endorsing a sociotechnical pragmatism that combines elements of Floridi’s levelism and Mayo’s reliabilism to place a special emphasis on notions of agency and trust. We conclude that this hybrid does more to promote fairness, accountability, and transparency in machine learning than dogmatic or skeptical alternatives. Keywords  Algorithms · Bias · Epistemology · Explainability · Fairness · Pragmatism · Social theory · Transparency

D. Watson (*)
Department of Informatics, King's College London, London, UK
e-mail: [email protected]

J. Mökander
Oxford Internet Institute, University of Oxford, Oxford, UK
e-mail: [email protected]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
F. Mazzi (ed.), The 2022 Yearbook of the Digital Governance Research Group, Digital Ethics Lab Yearbook, https://doi.org/10.1007/978-3-031-28678-0_10


1 Introduction

In March 1811, a group of workers in Nottingham, England began destroying textile machinery in protest against high unemployment and low wages (Frey, 2019). The practice quickly spread across the industrial regions of the north. By 1812, government officials were sufficiently concerned about the growing unrest that "machine breaking" had become a capital offense. The Luddite Rebellion would last until 1816, when 12,000 British Army troops – a force not much smaller than the 15,000 Wellington led into battle against Napoleon in the Peninsular War just a few years earlier – suppressed the movement in a series of violent skirmishes.

Today, the word "Luddite" has become a more or less derogatory term – despite efforts by some authors to reclaim the label (Jones, 2006; Sale, 1996) – suggesting a backward and reactionary stance toward technological innovation. The negative connotations are not entirely fair. As several commentators have noted, the Luddites took no issue with technology as such, but were instead focused on ensuring better conditions for workers in a rapidly industrializing economy with no semblance of an organized labor movement. In an influential essay on the topic, Hobsbawm observes that "collective bargaining by riot was at least as effective as any other means of bringing trade union pressure, and probably more effective than any other means available before the era of national trade unions to such groups as weavers, seamen and coal-miners" (1952, p. 66). On this view, the Luddite Rebellion is perhaps best understood as an early instance of a long-running tension between forces intent on promoting greater automation (primarily if not exclusively for financial gain) and those who resist this impulse (typically out of concern for the potential injustices that will result). We shall refer to these two camps as the sociotechnical dogmatists and skeptics, respectively. Neither group is entirely homogeneous, and the particular arguments advanced by their proponents inevitably vary from case to case. However, the struggle between these two ideal types is a persistent and instructive feature in the history of technological development.

In modern times, the focus of this dialectic has shifted from industrial to digital technologies – especially to paradigm-shifting advances in machine learning (ML) algorithms, which are increasingly pervasive in both the public and private spheres. ML systems not only mediate our experience of news (Newman et al., 2019), entertainment (Morris, 2015), and one another (Turkle, 2017); they also guide decisions in healthcare (Grote & Berens, 2020), recruitment (Sánchez-Monedero et al., 2020), criminal justice (Završnik, 2019), cybersecurity (Taddeo et al., 2019), and personal finance (Lee et al., 2021), to cite just a few prominent examples. Over and above their sector-specific applications, ML algorithms have also powered the rise of so-called "tech giants". As of 2020, eight of the ten most valuable public companies in the world, with a cumulative market capitalization of some $11 trillion, are technology firms that actively fund further research and development into ML, thereby solidifying their market advantage. This state of affairs provides ample ammunition for both the sociotechnical dogmatists, who marvel at the value creation facilitated by these algorithms, and the skeptics, who watch aghast as state and corporate interests use this technology to consolidate power and automate historical injustices.


In this chapter, we will both build up and tear down this purported dichotomy, which we argue is ill-equipped to conceptualize the opportunities and challenges posed by fairness, accountability, and transparency in ML. In Sect. 2, we survey a range of critical data studies (CDS) scholarship, tracing the interplay between sociotechnical dogmatists and skeptics from Marx up to the present day. In Sect. 3, we turn to philosophical theories of explanation, so central to contemporary debates on interpretable ML. In doing so, we draw an extended analogy between these competing epistemologies and the dialectics found in the CDS discourse. We conclude in Sect. 4, ultimately embracing a hybrid brand of sociotechnical pragmatism that incorporates elements of Floridi's levelism and Mayo's reliabilism. Because sociotechnical pragmatism – as outlined in this chapter – emphasizes the essential roles agency and trust play in not only understanding but also driving social change, it provides a more constructive basis for governing ML algorithms than either dogmatism or skepticism.

2 The Politics of Algorithms

CDS is a broad, inherently interdisciplinary undertaking. In a special issue of Big Data & Society devoted to the topic, Iliadis and Russo (2016) define CDS as a "nascent field…a formal attempt at naming the types of research that interrogate all forms of potentially depoliticized data science and to track the ways in which data are generated, curated, and how they permeate and exert power on all manner of forms of life" (p. 2). This is a wide remit. In this Section, we shall focus specifically on issues of algorithmic fairness, accountability, and transparency – collectively acronymized as FAccT, which doubles as the name of an annual conference on the subject organized by the Association for Computing Machinery (ACM) that began meeting in 2018.1 We review the sociolegal landscape, survey a range of relevant literature, and describe a growing rift between sociotechnical dogmatists and skeptics in FAccT ML.

2.1 The Sociolegal Landscape

The regulatory debates around ML algorithms inevitably start by acknowledging their large and growing social impact. Controversial applications of this technology include but are not limited to recidivism risk assessments (Angwin et al., 2016),

1  In 2018, the conference was called FAT; in 2019, it was rebranded FAT* (pronounced "FAT star"). As of 2020, it goes by the current name, FAccT, which we will use henceforth.

predictive policing (Browning & Arrigo, 2021), job hiring (Upadhyay & Khandelwal, 2018), credit scoring (Mendes & Mattiuzzo, 2022), student admissions (Hao, 2020), clinical medicine (Topol, 2019), military threat evaluation (Nasrabadi, 2014), and cybersecurity (Taddeo, 2019). In each case, adoption of ML threatens to automate injustices already present in society. The issue often stems from a failure to properly screen training datasets for biased inputs. For instance, studies have indicated that algorithmic profiling consistently shows online advertisements for higher paying jobs to men over women (Datta et al., 2015); that prominent recommender systems have erroneously suppressed content with homosexual themes as "adult" (Gillespie, 2014); that natural language processing algorithms learn offensive, misogynistic stereotypes (Bolukbasi et al., 2016); and that facial recognition software is often trained on predominantly white subjects, making it an inaccurate classifier for black and brown faces (Buolamwini & Gebru, 2018). A recent study in Science found evidence of significant racial bias in a widely used healthcare screening algorithm that affects millions of Americans (Obermeyer et al., 2019). Simulations suggest that rectifying the disparity would nearly triple the number of black patients receiving medical attention.

Concern over the potential harms posed by opaque ML models is evident in both existing and forthcoming regulations. Take the European Union (EU) as an example. In 2018, the General Data Protection Regulation (GDPR) came into force, with the stated purpose of protecting individuals' fundamental rights and freedoms with respect to their personal data. Unfortunately, there is little consensus as to what the law in fact says regarding some of its most celebrated provisions. Consider the so-called "right to explanation". Some commentators find the relevant protection in Article 22, which affords data subjects the right to contest algorithmic decisions (Goodman & Flaxman, 2017); or else in Articles 13–15, which guarantee the right to "meaningful information about the logic involved" in algorithmic decisions (Selbst & Powles, 2017). Others have challenged both readings, arguing that the text of the GDPR is too restrictive and unclear (Edwards & Veale, 2017), not to mention full of loopholes that firms can easily exploit (Wachter et al., 2017). No matter who is correct – the issue will likely remain undecided until a relevant case is brought before the European Court of Justice – there is no question that EU policymakers are beginning to seriously consider the social impact of ML, and even take preliminary steps towards regulating the industries that rely on such technologies (HLEGAI, 2019; OECD, 2019).

While the GDPR focuses on regulating the storage, access, and use of data, the Artificial Intelligence Act (AIA) proposed by the European Commission in 2021 aims to regulate the design and use of algorithms. The AIA is still only a proposal and will likely be subject to change. However, for our purposes, three defining characteristics are worth highlighting. First, the material scope of the AIA is broad by any standard, covering not only ML systems but also symbolic expert systems that have been in use for many decades. This indicates that at least some of the technical risks and popular discontent the proposed legislation seeks to address are not unique to ML systems but apply to automated decision-making systems more broadly.
Second, the AIA, which attempts to protect EU citizens from the risks associated with algorithms, was preceded by – and formulated to
be consistent with – a second document, namely the Policy and investment recommendations for Trustworthy AI (European Commission). Collectively, these two policy documents suggest that the EU takes a pragmatic stance, seeking both to foster the development and employment of ML systems for socially beneficial purposes and to take concrete measures to identify and mitigate technology-related risks. For example, while the use of ML systems in "high-risk" use cases (such as medical diagnostics or recruitment) is generally encouraged, the AIA mandates not only that such systems undergo "conformity assessments" before they are put on the market but also that their outputs are logged and monitored over time (Mökander et al., 2022).

Lawmakers in the US, meanwhile, have been less eager to update their regulatory frameworks. Admittedly, the US Congress is currently debating a bill titled The Algorithmic Accountability Act of 2022. However, while the bill seeks to address growing public concerns about the widespread use of ML systems, it is unlikely to pass into law in its current form (Mökander et al., 2022). As of today, despite a plethora of initiatives to harmonize the fragmented landscape of American tech policy, textual guidance on matters of algorithmic bias is found primarily in Title VII of the 1964 US Civil Rights Act, the Fair Credit Reporting Act of 1970, the Equal Credit Opportunity Act of 1974, and the Equal Employment Opportunity Commission's 1978 Uniform Guidelines. Barocas & Selbst (2016) have argued that these laws may permit algorithmic discrimination if the statistical associations a model learns to exploit are sufficiently informative with respect to a target variable. In a subsequent follow-up article (2018), the authors shift their focus from fairness to explainability, which they acknowledge is a prerequisite for judging an algorithm's reliance on protected attributes. Selbst & Barocas criticize current disclosure laws in the US and the EU for failing to distinguish between predictions that are inscrutable (i.e., incomprehensibly complex) and those that are nonintuitive (i.e., surprising and non-obvious). Since "intuition serves as the unacknowledged bridge between a descriptive account and a normative evaluation" (p. 1086), regulations that focus exclusively on algorithmic inscrutability cannot in principle determine whether a particular algorithm's predictions are ethically defensible. For instance, explanations may fail to provide recourse, as required by laws on credit scoring in the US, if they do not recommend actionable steps data subjects can take to change their algorithmic predictions. The dangers are no less acute under the GDPR, which does not regard inferences drawn from statistical models as personal data. This significantly curtails the rights of individuals to exercise control over such inferences, no matter how unreasonable or even discriminatory they may be (Wachter & Mittelstadt, 2019). The issue of unreasonable or counterintuitive associations in ML models is difficult if not impossible to parse without post-hoc explanation tools, highlighting the unexpected intersection between explainability and data privacy, understood as both an individual and group right (Floridi, 2014; Mittelstadt, 2017).
A white paper by an interdisciplinary team of legal scholars, software engineers, and cognitive scientists based at Harvard's Berkman Klein Center argues that new tools for explanation are required to meet the regulatory challenges ahead, and expresses optimism that such solutions are technically feasible (Doshi-Velez & Kortz, 2017).

2.2 Framing the Debate

The proliferation of ML in both the public and private sectors has led to a surge in literature on the topic, not just in academic journals but also in the popular press and trade publishing. A number of book-length works highlighting the potential of ML systems to solve complex problems, unlock economic growth, and contribute to human flourishing have garnered particular attention. For example, in The Creativity Code (2019), Oxford mathematician Marcus Du Sautoy argues that algorithms can do anything humans can do – only better. Through concrete examples drawn from domains as diverse as music and finance, Du Sautoy attempts to disprove the notion that machines cannot be creative. After all, even creative works follow patterns, and that is just what ML systems do as well. Note, however, that Du Sautoy is talking about doing – not experiencing. As sociologist Elena Esposito argues in Artificial Communication (2022), if algorithms appear "intelligent", this is not because they have learned how to think like us but because we have learned how to communicate with them in ways that advance our purposes. With such powerful problem-solving engines at hand, it is tempting to fall into sociotechnical dogmatism. Mayer-Schönberger and Ramge argue that ML and big data are Reinventing Capitalism (2018), creating increasingly efficient and equitable markets that will result in more stable, productive societies. This optimistic yet naïve strand found its clearest articulation in Diamandis and Kotler's book Abundance: The Future Is Better Than You Think (2013). Simplified, the argument goes like this. Thanks to technological innovation in general, and the emergence of highly potent computational techniques like ML systems more specifically, it will be possible to address complex optimization problems like food production, disease prevention, and reductions in carbon emissions. This, the story goes, means that human societies are better positioned than ever to lift all of the world's population to a first-world standard of living whilst managing environmental problems.

Yet there have also been strong reactions against this narrative. One of the earliest instances was Morozov's To Save Everything, Click Here! (2013), in which the author argues against a growing ideology of "technological solutionism" (dogmatism by another name), whereby all manner of human affairs are deemed ripe for optimization. In The Black Box Society (2015), Pasquale highlights the dangers of unregulated algorithms in tech, finance, and government surveillance. He argues that intellectual property (IP) laws have been cynically deployed by powerful actors to promote their own interests and avoid external oversight. In Weapons of Math Destruction, O'Neil (2016) extends the analysis to education, advertising, and criminal justice, demonstrating how algorithms implement pernicious feedback loops that disproportionately impact vulnerable communities. Eubanks examines the effects of such technologies on poor Americans in Automating Inequality (2018), while Noble provides an intersectional critique of Google search results in Algorithms of Oppression (2018). Broussard (2018) coins the term "technochauvinism" (dogmatism by yet another name) to describe our irrational reliance on digital solutions for human problems. Zuboff (2019) argues that tech giants have
inaugurated The Age of Surveillance Capitalism, in which human experience is systematically processed into behavioral data and used to develop prediction products that undermine autonomy and democracy. In a similar vein, in Privacy Is Power (2020), Carissa Véliz exposes how individual (data) privacy is being eroded by big tech and governments before outlining how to design and adopt privacy-friendly alternatives to Google, Facebook, and other online platforms. Most recently, in Atlas of AI (2021), Kate Crawford of Microsoft Research traces the "oppressive logic" that underpins ML systems beyond the digital realm, to the exploitation of data workers in developing countries and to the environmentally damaging extraction of rare earth minerals needed to produce the hardware on which algorithms run.

These monographs track a similar trend in academic publications over the last decade, where the focus gradually shifted away from "big data" (the buzzword of the 2000s) and toward "artificial intelligence" (the buzzword of the 2010s). For instance, a 2016 special issue of Philosophical Transactions of the Royal Society A was devoted to the ethical impact of data science. In the introductory article, Floridi & Taddeo (2016) highlight how algorithms pose unique moral challenges independent of their informational inputs or practical implementations. An influential review on the ethics of algorithms by Mittelstadt et al. (2016) identifies six types of ethical concerns raised by algorithms: (1) inconclusive evidence; (2) inscrutable evidence; (3) misguided evidence; (4) unfair outcomes; (5) transformative effects; and (6) traceability. The first three are rooted in epistemic, data-centric issues, while items (4) and (5) are more obviously normative and algorithmic. The sixth is an "overarching concern" (p. 4), which demands that we follow the entire inferential pathway of a given prediction, from preliminary data gathering to model training and deployment. The authors argue that only through this cautious and painstaking procedure can moral responsibility be properly apportioned in complex sociotechnical systems. Of course, the goal of algorithmic traceability, which unifies ethical concerns (1)–(5), can only be realized if we have the proper tools to explain predictions and models with sufficient clarity to interrogate their epistemic and normative consequences. A recent follow-up article from the same research group updates the original review within the same sixfold framework (Tsamados et al., 2021), extending the analysis to new opportunities (e.g., AI for social good) and challenges (e.g., environmental costs of deep learning models).

A 2017 special issue of Information, Communication & Society was devoted to the social power of algorithms. In the introductory article, Beer similarly highlights the inescapable role of explainability. "[W]e need to understand what algorithms are and what they do in order to fully grasp their influence and consequences" (2017, p. 3), he concludes. This goal is complicated by several obstacles. Burrell (2016) famously identifies three sources of algorithmic opacity: (1) intentional corporate or state secrecy; (2) technical illiteracy; and (3) inherent complexity. The first of these echoes Pasquale's concerns. The second amounts to a call for more widespread education in computer science, which will be essential both to foster informed debate about appropriate regulations and to ensure that a diverse community of
programmers is engaged in designing these powerful tools. In what follows, our focus will be primarily on item (3) in Burrell's list.

Inherent complexity has long been used as an excuse to dismiss any critical discussion of algorithmic explainability. This theme emerges time and again in popular and scholarly analyses, which almost unanimously insist upon a natural, inevitable tension between predictive accuracy and human intelligibility in ML. The typical framing runs something like this. The most impressive results from the recent surge in AI research and funding have come from so-called "black box" algorithms like deep neural networks (Goodfellow et al., 2016) and gradient boosted forests (Schapire & Freund, 2012). These sophisticated, high-performing regression and classification techniques are notoriously difficult to comprehend, often resulting in models with copious parameters and hyperparameters that do not admit of any easy interpretation. While we may feel at home with our neat linear models and simple rule lists, the expressive power of such algorithms is severely limited in comparison with more exotic architectures and learning ensembles. Complex problems require complex solutions, and we should therefore abandon all hope of ever understanding our best ML models.

Not all commentators are resigned to this brand of statistical defeatism. Rudin (2019) argues forcefully against this narrative, which she suggests is grounded in some mix of anecdotal evidence and corporate opportunism. She observes that science has long shown a preference for parsimony, elevating Occam's Razor from a rule of thumb to an organizing principle. She rejects the purported accuracy-interpretability trade-off, summarizing her message in the article's blunt, memorable title: "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead." The plea is founded on an important if somewhat subtle lesson from statistical learning theory. In a seminal article, Breiman (2001) introduced the notion of a "Rashomon set", named after the 1950 Kurosawa film in which four characters give very different accounts of a samurai's untimely death in eighth century Kyoto. Just as multiple witnesses to the same crime may provide inconsistent testimony, Breiman shows that ML models can approximate the same functional relationship through divergent learning strategies. The resulting models form a so-called "Rashomon set" of predictors, which perform similarly on test data but trace different paths from input to output, e.g., relying on different feature subsets to compute conditional expectations. Rudin draws two lessons from this parable. First, we should not trust post-hoc explanation methods, which may appear to mimic the target function but in fact converge on similar predictions via unrelated reasoning. Second, in any Rashomon set of sufficient size and complexity, we should expect to find at least one (globally) interpretable model. If we must use ML in high-stakes applications like healthcare and criminal justice, she concludes, then there is no reason to rely on opaque algorithms like deep neural networks. She backs up her claim with numerous instances of models optimized for transparency, such as SLIM (Ustun & Rudin, 2019), which computes risk scores as a linear sum of just a few integer weights, and CORELS (Angelino et al., 2018), which makes provably optimal predictions under certain conditions via short sequences of if-then statements.
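To see Breiman's point in miniature, consider the following sketch (our illustration, not Rudin's or Breiman's; it assumes Python with NumPy and scikit-learn, and all names are invented). Two logistic regressions trained on disjoint halves of a redundant feature set reach comparable test accuracy while relying on entirely different inputs:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
latent = rng.normal(size=n)  # one underlying signal
# Four noisy copies of the same signal: a deliberately redundant feature set.
X = np.column_stack([latent + rng.normal(scale=0.3, size=n) for _ in range(4)])
y = (latent + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model_a = LogisticRegression().fit(X_tr[:, :2], y_tr)  # features 0-1 only
model_b = LogisticRegression().fit(X_tr[:, 2:], y_tr)  # features 2-3 only

# Comparable accuracy, disjoint feature reliance: both models sit in the
# same Rashomon set yet trace different paths from input to output.
print(model_a.score(X_te[:, :2], y_te))
print(model_b.score(X_te[:, 2:], y_te))

Post-hoc explanations of these two models would tell very different "stories" about the same prediction problem, which is precisely why Rudin cautions against treating such explanations as faithful accounts of the underlying data-generating process.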

Rudin makes some compelling points, but there are two key problems with her analysis. First, there is no general guarantee that some interpretable algorithm will outperform black box competitors or even be in the Rashomon set of high-performing models for any given prediction problem. Of course, it may well be worthwhile to search for one, but algorithms like SLIM and CORELS earn their final form at the cost of substantial assumptions and considerable processing time. They do not scale well to large problems or continuous spaces, and can require expert tuning to achieve good results in practice. Perhaps this price is worth paying when stakes are sufficiently high. But this argument cuts both ways – if, say, lives hang in the balance, then we may rationally choose to value predictive performance above all. Such a choice would tend to weigh against an a priori commitment to models drawn from just a small handful of "interpretable" function classes. Second, as Pasquale and Burrell have argued at length, algorithmic opacity is not just a byproduct of mathematical complexity, but of institutional incentives. The modern digital economy places an enormous premium on data and algorithms, which may well be amongst the most valuable assets on the books for many firms, and not just those explicitly devoted to information technology. History has shown that these institutions are willing to go to extraordinary lengths to protect their IP, not just from potential competitors but from regulators and indeed any form of external scrutiny. Even if a firm were using an interpretable model to make its predictions, the model architecture and parameters would likely be subject to strict copyright protections. Some have called for the creation of a third-party agency tasked with the responsibility of auditing data and code under non-disclosure agreements (Floridi et al., 2018; Wachter et al., 2017), a sort of digital ombudsman to advocate for data subjects and protect consumer rights. Whatever the merits of this proposal, no such body currently exists under EU, UK, or US law. Until such legislation is enacted – perhaps in the final versions of the EU's AIA or the US's AAA – analysts and regulators will probably have no choice but to treat the underlying technology as a black box.

2.3 Fairness and Its Discontents

Given the potential harms posed by algorithmic bias in an increasingly automated information society, it should come as no surprise that fairness is a primary concern for CDS scholars. But how do we define fairness? A substantial subgenre of the FAccT ML literature is devoted to formalizing criteria in an explicit effort to answer this question. Prominent examples include the following (a minimal sketch of how the last three can be computed appears after the list):

• Fairness through unawareness. A model is fair if sensitive attributes A are not included in the training data.
• Demographic parity. A model is fair if predictions Ŷ are independent of sensitive attributes, i.e. Ŷ ⊥ A.
• Equality of opportunity. A model is fair if predictions are independent of sensitive attributes after conditioning on the true outcome Y, i.e. Ŷ ⊥ A ∣ Y.
• Calibration. A model is fair if outcomes are independent of sensitive attributes after conditioning on predictions, i.e. Y ⊥ A ∣ Ŷ.
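To make these criteria concrete, the following minimal sketch (our illustration, assuming Python with NumPy; the function and variable names are ours) estimates the gaps associated with the last three definitions from binary predictions and a binary sensitive attribute. A gap of zero means the corresponding criterion is satisfied exactly on the sample:

import numpy as np

def demographic_parity_gap(y_hat, a):
    # |P(Ŷ=1 | A=1) - P(Ŷ=1 | A=0)|
    return abs(y_hat[a == 1].mean() - y_hat[a == 0].mean())

def equal_opportunity_gap(y, y_hat, a):
    # |P(Ŷ=1 | Y=1, A=1) - P(Ŷ=1 | Y=1, A=0)|
    return abs(y_hat[(y == 1) & (a == 1)].mean() -
               y_hat[(y == 1) & (a == 0)].mean())

def calibration_gap(y, y_hat, a):
    # |P(Y=1 | Ŷ=1, A=1) - P(Y=1 | Ŷ=1, A=0)|
    return abs(y[(y_hat == 1) & (a == 1)].mean() -
               y[(y_hat == 1) & (a == 0)].mean())

# Toy data: true outcomes, model predictions, and group membership.
rng = np.random.default_rng(0)
y, y_hat, a = (rng.integers(0, 2, size=1000) for _ in range(3))
print(demographic_parity_gap(y_hat, a),
      equal_opportunity_gap(y, y_hat, a),
      calibration_gap(y, y_hat, a))

Fairness through unawareness, by contrast, is a property of the training data rather than of the predictions, and so cannot be audited from outcomes, predictions, and group membership alone.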

These definitions have been widely studied (albeit with frustratingly inconsistent nomenclature), and many common learning algorithms satisfy at least one of them. A thorough analysis of the advantages and disadvantages of these and other fairness criteria is beyond the scope of this chapter. For a comprehensive and multifaceted discussion, see (Barocas et al., 2019). For our purposes, it suffices to observe that while each arguably captures some intuitive notion of fairness, the sheer multitude of proposals is somewhat disconcerting. A tutorial held at the 2018 ACM FAT conference surveyed no fewer than 21 competing definitions of algorithmic fairness (Narayanan, 2018). Others have emerged since (e.g., Kusner et al., 2017; Kim et al., 2018; Romano et al., 2019; Sharifi-Malvajerdi et al., 2019). Impossibility theorems have shown that many of the most popular formal criteria are mutually incompatible except in trivial cases (Chouldechova, 2017; Friedler et al., 2016; Kleinberg et al., 2017b). These results suggest that while mathematical formulae may help to clarify the trade-offs inherent in any socially sensitive decision-making context, they cannot in principle "solve" the problems posed by algorithmic fairness.

The impulse to machine learn our way out of fundamental social problems like systematic discrimination and structural inequality is a clear example of sociotechnical dogmatism in action. Yet this response is hardly unique to a twenty-first century information society. Weber (2002) describes a similar impulse, which he dubs rationalization, and famously argues that it characterizes a distinctly modern set of cultural and institutional practices, from bureaucratic administrative states to capitalist modes of production. The idea also has roots in technological determinism, which holds that human relations and social structures are driven primarily or even exclusively by material development. Naturally, technological determinism comes in different shapes and sizes. For example, Allan Dafoe (2015) stresses that one does not need to accept a teleological understanding of determinism to concede that specific micro-sociological contexts (e.g., military or economic competition) can constrain sociotechnical evolution to deterministic paths. A related yet distinct version of technological determinism is defended by Oxford sociologist Ralph Schroeder. In Rethinking Science, Technology, and Social Change (2007), Schroeder demonstrates through comparative historical analysis how the introduction of specific technologies (like the car) led to similar social developments even in different, culturally distinct societies. Taking a step back, it is worth noting that determinism is often associated with Marx (1990, 1992),2 who views bourgeois factory owners as early proponents of sociotechnical dogmatism, eager to promote a narrative of inevitable industrialization while wringing every last drop of surplus value out of their workers and production lines.

2  As an exegetical aside, we observe that there is some dispute over the true extent to which Marx was in fact a technological determinist, at least in the uncompromising sense that the label is occasionally employed by modern authors. See (Bimber, 1990).

These critiques can be traced from their origins in Marx and Weber through the writings of Frankfurt School philosophers – notably, Horkheimer and Adorno (1947) and Habermas (1981) – who forcefully challenge so-called Enlightenment ideals. They argue that these principles, which inaugurated an era of scientific rationalism (not to mention revolutionary politics) in Europe, ultimately engender a liberal mythology every bit as dangerous as the sociopolitical order it displaced. This backlash against the received dogma of science – particularly its behaviorist and positivist tendencies – arguably reached its apex with the social constructivism of Latour and Woolgar (1979) and the "strong programme" in the sociology of science championed by Bloor (1976) and his colleagues at the University of Edinburgh. These early proponents of science and technology studies question the privileged status of scientific realism and highlight the irreducibly social origins of all knowledge claims. Scholars in the social construction of technology tradition extend this skepticism to technology as well (Bijker et al., 1987), arguing that it is human relations and social structures that shape the development of new technologies – not the other way around.

It may not be immediately obvious what this intellectual history has to do with FAccT ML. However, these dialectics provide an informative backdrop for ongoing CDS debates regarding the extent to which it is appropriate or desirable to deploy algorithms in high-risk settings. Some sociotechnical dogmatists explicitly endorse the so-called "end of theory" perspective on big data, which clearly echoes the behaviorism of the early twentieth century. In a notorious Wired cover story from 2008, the magazine's then-Editor in Chief Chris Anderson makes the case with aplomb:

This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behavior, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves. (Anderson, 2008)

This brand of radical instrumentalism elevates predictive accuracy over explanatory insight as the ultimate goal of inquiry. Whereas the traditional scientific method requires structural reasoning to generate hypotheses and carefully designed experiments to isolate cause and effect, the rise of big data and artificial intelligence has allegedly inaugurated a new era of automated discovery requiring little or no human input (Hey et al., 2009). Understanding why the model works is as impossible as it is irrelevant. The math behind these algorithms is too sophisticated, the datasets too vast for the mortal mind to possibly comprehend. This line of reasoning is lazy and dangerous. As the examples from Sect. 2.1 vividly demonstrate, blind faith in the objective accuracy of algorithms is misplaced. The mythology of data-driven omniscience runs deep in the modern psyche, but the truth is that numbers can never "speak for themselves". Datasets are invariably gathered and curated by humans, encoding our assumptions, biases, and oversights at every turn (Boyd & Crawford, 2012). Modern critics of sociotechnical dogmatism,
such as Hoffmann (2019), argue that the logic of computation is fundamentally unfit to address problems of social injustice, as both are founded on the same rationalist mode of hierarchical labelling and sorting. The problem formulation itself  – in terms of variables and averages, optimizing metrics for prespecified groups – fails to question or even acknowledge how the very categories that algorithmic fairness seeks to protect are themselves socially constructed and potentially reductive (Hanna et al., 2020). Data science inspires what McQuillan calls a “machinic neoplatonism” (2018) that privileges mathematical order over lived experience. Greater “transparency” does little to resolve these issues, and may even exacerbate them by normalizing neoliberal modes of agency in which end users must navigate a complex marketplace of algorithmic alternatives with limited information (Ananny & Crawford, 2016). If these critics are right, then the effort to advance technical solutions to algorithmic injustices will only add more fuel to the fire. Rather than formalize some new and improved fairness criteria or maximize social harmony with clever objective functions, these authors encourage us to rethink our relationship with technology and one another. They caution against any brand of sociotechnical dogmatism – be it rationalist, determinist, or solutionist – that prioritizes quantitative modes of psychosocial organization over the lived experience and shared reality of qualitative engagement.

2.4 The Pragmatic Turn

These objections are provocative and perspicacious – but they are not the last word on this debate. In most cases, the alternative to algorithmic decision systems is human decision systems, and it is far from obvious whether this latter option generally promotes more just outcomes. Studies have demonstrated time and again that even the most well-intentioned of people are prone to implicit biases against historically disadvantaged groups (Greenwald & Krieger, 2006). Identifying and mitigating such biases with computational methods is not just a speculative ideal. Kleinberg et al. (2017a, b) show that replacing or supplementing judicial bail decisions with a high-performance ML model results in huge welfare gains along practically any metric of interest. Simulated results find crime reductions of nearly 25% with no change in jailing rates, or jailing rate reductions of over 40% with no increase in crime rates. All gains are achieved while reducing racial disparities. In a subsequent paper, Kleinberg et al. (2018) make a strong case that increased automation can reduce discrimination by inaugurating rigorous, objective procedures for auditing and appealing ML predictions. While it is exceedingly difficult under current laws to prove that a person has engaged in discriminatory behavior, it is relatively straightforward to test and even correct for algorithmic bias given minimal access to the target model and/or a sufficient training dataset (provided we have settled on some formal definition of fairness).

The Kleinberg et al. (2017a, b) study is somewhat unusual for highlighting a case in which automation boosts both accuracy and fairness. Most literature in this area tends to start from the (not unreasonable) assumption that these two desiderata are in natural tension with each other. Even in such cases, there is nothing to be gained by ignoring the trade-off. We may describe the situation in terms of a Pareto frontier.3 Imagine a two-dimensional space with axes for accuracy and fairness. Given some formal definition of each, we may score decisions (human or otherwise) along both axes and thereby locate them within this coordinate system (see Fig. 10.1). We say that system A Pareto-dominates system B if and only if it is strictly better along at least one axis and no worse along any other. The Pareto frontier is constituted by the set of points that are not Pareto-dominated by any other point in this space – i.e., systems that cannot be made more accurate without becoming less fair, or made fairer without becoming less accurate. Note that there is no context-independent way to decide which point along the frontier we consider optimal, for this judgment

Fig. 10.1  A schematic example of the Pareto frontier. Models are scored by their error and unfairness. Pareto-efficient solutions form a boundary beyond which no model can improve in either direction without incurring some loss in the other. From (Kearns & Roth, 2019, p. 127)

3  The concept now referred to as the Pareto frontier (also Pareto efficiency) is attributed to the Italian economist and sociologist Vilfredo Pareto and his works Course in Political Economy and Manual of Political Economy. For a more contemporary introduction to the concept, we recommend (Lockwood, 2017).

depends upon our relative valuations of these two desiderata for this particular problem. What Kleinberg et al. (2017a, b) effectively demonstrate is that in the case of bail decisions, human judges in their dataset are nowhere near the Pareto frontier. In their book The Ethical Algorithm: The Science of Socially Aware Algorithm Design, theoretical computer scientists Kearns & Roth argue that delicate trade-offs like this cannot be navigated without first confronting them head-on:

once we pick a decision-making model…there are only two possibilities. Either that model is not on the Pareto frontier, in which case it's a "bad" model…or it is on the frontier, in which case it implicitly commits to a numerical weighting of the relative importance of error and unfairness. Thinking about fairness in less quantitative ways does nothing to change these realities – it only obscures them. (2019, pp. 127–28)
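The Pareto-dominance relation invoked here is straightforward to operationalize. In the sketch below (ours; the candidate names and scores are invented for illustration), each candidate model is scored by (error, unfairness) and dominated candidates are filtered out:

candidates = {
    "model_a": (0.10, 0.30),
    "model_b": (0.20, 0.10),
    "model_c": (0.15, 0.15),
    "model_d": (0.20, 0.35),  # dominated by model_a and model_c
}

def dominates(p, q):
    # p Pareto-dominates q: no worse on either axis, strictly better on one.
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

frontier = {
    name: score
    for name, score in candidates.items()
    if not any(dominates(other, score) for other in candidates.values())
}
print(frontier)  # model_d drops out; the other three form the frontier

Which of the surviving candidates to deploy remains, as the quotation above insists, a context-dependent judgment about the relative weight of error and unfairness.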

The notion that putting numbers to problems like this somehow does violence to our underlying humanity or commits us to naïve sociotechnical dogmatism is itself reductive and wrong-headed. Quantitative methods are merely one tool among many for diagnosing and combatting social injustice, and a potentially powerful one at that. Ignoring this option on philosophical grounds incurs a devastating opportunity cost that society can ill afford. A more nuanced approach to these challenging issues – one that rejects both the unjustified positivism of the dogmatists and the self-defeating puritanism of the critical theorists – is possible. This perspective would need to be principled but flexible, acknowledging the irreducible context-dependence of particular judgments on algorithmic fairness and intelligibility without compromising its commitment to either ideal. It would have to be relational, not relativist, preserving the autonomy of individual agents to determine their own tolerance for error, complexity, and unfairness. Finally, it would have to be technically grounded, unafraid to combine quantitative and qualitative modes of reasoning to arrive at novel solutions. We submit that the philosophy of pragmatism meets all these desiderata. Kleinberg et al. and Kearns & Roth provide some intuition for what a pragmatic approach might look like in the specific case of algorithmic fairness. To extend and generalize this approach is the focus of the next Section.

2.5 Sociotechnical Pragmatism

With origins in nineteenth century American thought – particularly the works of Peirce (1999), James (1975), and Dewey (1999) – pragmatism has a rich and somewhat controversial history in analytic philosophy. For a good overview, see (Legg & Hookway, 2019). Central to all varieties of pragmatism is the primacy of agents and contexts over ideas and abstractions. Conceptual advances are only valuable insomuch as they are useful. A theory with no practical implications is little more than a formal exercise. How can sociotechnical pragmatism be situated in relation to the dogmatic and skeptical perspectives outlined in this Section? To start with, pragmatism is a philosophical tradition that does not separate knowing the world from acting within it.

The epistemological consequences of this central claim in pragmatist philosophy will be discussed at greater length in the next Section. At this point, it is sufficient to highlight that the emphasis pragmatist philosophy puts on agency – as in not only representing, but also intervening in the world (see Hacking, 1983) – is fundamentally incompatible with determinism in general and with sociotechnical dogmatism in particular. As noted by Lessig (2006), the future is not determined by technological innovation alone but depends on the actions we take to shape it. At the same time, and in contrast to skepticism, pragmatism offers a robust foundation on which to build constructive and progressive programs. According to the American pragmatists, theories should be judged by their success when applied practically to real-world situations (Legg & Hookway, 2019). This "Pragmatic Maxim" has a few immediate consequences for our purposes. Most importantly, opposing the use of ML systems "on principle" is literally nonsensical in pragmatist terms. A specific ML system may have undesirable consequences or be employed for unethical purposes and, in such cases, should rightly be opposed. However, such an evaluation will necessarily be context-specific, and reflect the values and goals of the communities that either design or are subject to decisions produced by ML systems.

Now, adopting the stance of sociotechnical pragmatism does not resolve all (or even most) tensions discussed in this Section. As Isaiah Berlin demonstrates in his essay "The pursuit of an ideal" (1997), different values that are desirable in and of themselves can clash and require trade-offs. It is important to stress that this "incommensurability" – which gives rise to moral dilemmas – is not a conflict between reason and sentiment. Nor is it exclusively (or even primarily) a conflict between groups in society that hold different sets of values. Rather, it is the result of the many conflicting impulses experienced by individual human beings and thus a conflict between alternative yet incompatible modes of self-realization (Rorty, 2021). For our purposes, these lessons from the theory of value pluralism do not constitute a conclusion but a starting point. For example, accepting that ML systems can bring both social benefits and potential harms only tells us that it is reasonable to subject these technologies to proportional governance and oversight. It does not tell us what the nature or the purpose of this governance ought to be.

This is where CDS has a role to play. Sociotechnical pragmatism can be conceived as problem solving. In the words of Monica Prasad (2021), problem solving is a practice in which a community engages in examination of the empirical world in order to change a particular situation the community has decided is in need of change. By highlighting the shortcomings of ML systems, critically oriented researchers and social advocacy groups help surface and frame empirical situations that are in need of change. The point we want to stress here is that all pragmatic research has a critical component – but not all critical research has a pragmatic component. For example, to address a specific real-world problem, we first have to accept (a) that there exists such a thing as a "real world"; and (b) that causal analysis is possible, whereby an intervention at time t1 leads to an observable difference at time t2. The sociotechnical pragmatists thereby share with dogmatists the belief that progress is possible.
However, as opposed to
dogmatism, pragmatism views emancipation neither as inevitable nor final (Gross et al., 2022). While it cannot guarantee pro-social outcomes, good governance can facilitate well-intentioned actors, prevent or deter malicious actors, and provide a foundation for communities to engage in an informed dialogue around what normative goals to prioritize and at what cost.

To summarize our argument in this Section, let us return to an example we discussed earlier. At a high level of abstraction, the performance of a specific model across a plurality of normative dimensions can be visualized by a Pareto frontier. This visualization has two purposes. First, ML systems performing far from the Pareto frontier should be considered bad systems, and their use morally impermissible. Second, by identifying, quantifying, and communicating the implicit trade-offs in the design of a specific ML model, an analysis based on Pareto frontiers contributes to procedural transparency and regularity. The purpose of technically informed evaluation tools is therefore not to ensure ethical outcomes (an impossible goal) but to make visible implicit choices and tensions, give voice to different stakeholders, and arrive at resolutions that – even when imperfect – are at least publicly defensible (Whittlestone et al., 2019).

3 The Philosophy of Explanation

In the previous Section, we traced how all roads in the CDS discourse lead to explainability. There can be no algorithmic fairness, accountability, or transparency without some method for making ML models more interpretable. While this holds true across the board, the role and nature of explanations is especially central from a pragmatist point of view. This is because sociotechnical pragmatism – as outlined in the previous Section – aims to arrive at socially acceptable compromises, something that presupposes a public discourse. Explanations do not only inform but also enable dialogue between different stakeholders. Often overlooked in the FAccT literature are certain fundamental questions about algorithmic explainability, perhaps considered too abstract to bother asking. To wit:

• What constitutes a satisfactory explanation?
• What are the basic elements of explanation?
• How do explanations advance knowledge?

These questions are squarely within the ambit of philosophy – specifically, epistemology and philosophy of science. Explanation is an ancient topic of philosophical interest, debate, and confusion. In the Phaedo, Plato writes – through the voice of his mentor, Socrates, speaking from his deathbed – that all inquiry begins with a call for explanation (Plato, 1997). Aristotle devotes large portions of his Physics and Metaphysics to expounding the doctrine of the four causes – formal, material, efficient, and final – that jointly explain the nature of things (Aristotle, 1984). More examples can no doubt be found in venerable philosophical traditions the world over. In this Section, we will
focus on modern analytic epistemology and philosophy of science, which provide an insightful and heterogeneous collection of formal theories and perspectives on explanation. In doing so, we will highlight how the dialectic between dogmatism and skepticism in ethics – which we traced in detail in Sect. 2 – is mirrored by the age-old tension between naïve realists and radical constructivists in epistemology. Our purpose is not only to introduce and elaborate on the position of epistemological pragmatism, but also to demonstrate how this position complements and supports sociotechnical pragmatism as a research stance.

3.1 The Deductive-Nomological Model

Contemporary philosophical analysis of scientific explanation arguably begins with Hempel, who attempts to boil causal reasoning down to its most basic formal elements (Hempel, 1965; Hempel & Oppenheim, 1948). According to his theory, the explanation for some physical event E consists of two components:

1. a non-empty set of observation statements S = {s1, …, sn}; and
2. at least one law-like generalization L, such that

S & L ⊢ E

This account is deductive, insomuch as the explanandum follows logically from the explanans; and nomological, insomuch as it incorporates a law of nature as an essential premise of the argument. Thus Hempel terms this the deductive-nomological (DN) model, an influential fusion of logic and science that is characteristic of the positivist tradition from which it emerged. The DN model works fairly well within the framework of classical mechanics, where the observation set S may contain, say, information about the position and momentum of some object x, which, in conjunction with Newtonian laws of motion L, entails a new proposition E that states the position of x at some future time.

This model no doubt boasts a certain formal elegance, but critics have persuasively argued that it provides neither necessary nor sufficient conditions for successful explanation. Hempel himself (1965) points out that the entailment relation purported to hold between explanans and explanandum is overly strong. Consider a probabilistic explanation of the form:

s1: Patient a has infection x
s2: Patient a receives treatment
L1: 0% of untreated patients with infection x survive
L2: 99% of treated patients with infection x survive
∴ E: Patient a survives

The conjunction of observations S = {s1, s2} and statistical regularities L = {L1, L2} confers high probability on the outcome E but does not logically entail it. Yet little
or nothing about a's survival is left unexplained by S & L. If this critique is right, then the DN model does not provide necessary conditions for explanation.4

A more damning counterexample for the DN model comes from Salmon (1971, p. 34), who proposes the following DN-compliant explanation:

S: John Jones is a male who has been taking birth control pills regularly
L: All males who take birth control pills regularly fail to get pregnant
∴ E: John Jones fails to get pregnant

Intuitively, the issue with this explanation is that it violates some unstated causal relevance criterion. John Jones did not fail to get pregnant because he took birth control pills; he failed to get pregnant because S: John Jones is a male; and L: Males do not get pregnant. His pill-taking is therefore causally irrelevant to the explanandum. If Salmon's argument is sound, then the DN model does not provide sufficient conditions for explanation.

The DN model has become something of a philosophical punching bag in the analytic tradition. Detractors certainly have a lot to work with, but we should be careful to recognize that a baby lurks somewhere in that bathwater. Hempel's categorical distinction between observation statements (i.e., particular claims) and law-like regularities (i.e., universal claims) is especially astute, as is the formal requirement that both types of propositions be involved in any successful explanation. Although his theory was designed with the natural sciences in mind, the DN model can easily accommodate algorithmic explanations as well. We simply explain the behavior of some trained model f by combining input datapoints S with model parameters L, thereby rendering predicted outcome E. If the function f is deterministic – as function classes for most prominent ML algorithms are, at least at the token level – then the relationship between explanans and explanandum is even one of logical implication. For reasons to be explained below, this account of algorithmic explanation will not quite do – but it is a start. Notable competitors to (and improvements upon) Hempel's DN model include the statistical relevance model (Salmon, 1971); the causal mechanical model (Dowe, 2000; Salmon, 1984); and the unificationist model (Kitcher, 1989). A thorough explication of these theories is beyond the scope of this chapter. For a good introduction to all three, see (Woodward, 2019). The following Sections focus instead on counterfactual, interventionist, and pragmatic accounts, which we believe better capture the most important and interesting aspects of successful explanations.

4  Hempel (1965) proposes a new class of explanations called inductive-statistical (IS) to accommodate such cases, but the IS model struggles to account for low probability events. The alternatives analyzed below are better equipped to handle statistical explanations.
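The DN reading of algorithmic explanation sketched above is easy to make concrete. In the following toy example (our illustration; the feature names and weights are invented), the observation statements S and the "law" L – here, a fixed linear decision rule – jointly entail the explanandum E:

# S: observation statements about a particular loan applicant.
S = {"income": 42_000.0, "debt": 7_000.0}
# L: fitted model parameters, playing the role of a law-like generalization.
L = {"income": 1e-4, "debt": -5e-4, "bias": -1.0}

def f(s, l):
    # A deterministic model: given S and L, the prediction E follows.
    score = l["bias"] + sum(l[k] * v for k, v in s.items())
    return "approve" if score > 0 else "deny"

E = f(S, L)
print(E)  # 'deny': score = 4.2 - 3.5 - 1.0 = -0.3

As argued below, satisfying this deductive schema is not sufficient for explanation: when L comprises millions of parameters rather than three, the entailment still holds but the understanding does not.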

3.2 Counterfactuals and Interventionism

The counterfactual theory of explanation is most closely associated with Lewis (1973a, b), who develops a rich metaphysics on the basis of his possible worlds semantics. Although Lewis famously endorses realism with respect to possible worlds, his account of explanation does not depend upon this ontological commitment. All that his theory requires is a similarity relation that distinguishes between worlds that are closer to and farther from the actual world. Truth conditions for counterfactual statements can then be defined as follows:

1. 'If p were the case, then q would be the case' is true in the actual world if and only if (i) there are no possible p-worlds; or (ii) some p-world where q holds is closer to the actual world than is any p-world where q does not hold.

Ignoring the trivial case (i), we may employ this notion of counterfactuals to give truth conditions for causal dependence claims:

2. Where c and e are two distinct actual events, e causally depends on c if and only if, if c were not to occur, then e would not occur.

Lewis argues that causal dependence is sufficient but not necessary for causation, as only the latter is a transitive relation (1973b). We can define causation more generally in terms of causal chains, which string together a finite sequence of successive causal dependencies:

3. c is a cause of e if and only if there exists a causal chain leading from c to e.

This model can be generalized to handle issues of temporal asymmetry (Lewis, 1979) and probabilistic dependencies (Lewis, 1986). It has some difficulties accommodating instances of overdetermination and causal pre-emption – see (Menzies & Beebee, 2020) for an overview – but later refinements incorporate contextual information that arguably defuses these objections (Lewis, 2000).

The counterfactual theory of explanation has been explicitly endorsed by several groups working on XAI (Wachter et al., 2018). Briefly, the idea runs as follows. Say that some bank uses opaque ML model f to predict the creditworthiness of loan applicants. The model judges applicant a to be high risk, and the bank therefore denies her the loan. We would like an explanation why. One strategy would be to identify the statistical regularities that govern the model's outputs, at least in the region of a – this is (a localized version of) the DN approach – but these may be difficult to establish if f is complex and/or inaccessible. Another method would be to find the nearest neighbor on the opposite side of the decision boundary, i.e. the most proximal counterfactual applicant who receives the opposite prediction. Call this applicant a′. An explanation for why a was denied her loan can then be given by pointing out the differences between a and a′, which should, by construction, be minimal. If those differences include sensitive attributes – if, for instance, the only difference between a and a′ is race or gender – then this is evidence of algorithmic bias (Kusner et al., 2017).
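A minimal sketch of this recipe follows (ours; a simple threshold rule stands in for the bank's opaque model f, and the applicant pool is invented – in practice, the counterfactual search is posed as an optimization problem):

import numpy as np

def predict(x):
    # Stand-in for the opaque model f: 1 = creditworthy, 0 = high risk.
    return int(0.5 * x[0] + 0.5 * x[1] > 0.6)

def nearest_counterfactual(a, candidates):
    # The most proximal candidate receiving the opposite prediction.
    flipped = [c for c in candidates if predict(c) != predict(a)]
    return min(flipped,
               key=lambda c: np.linalg.norm(np.asarray(c) - np.asarray(a)))

a = (0.5, 0.6)                               # denied: score 0.55 <= 0.6
pool = [(0.5, 0.8), (0.9, 0.9), (0.6, 0.7)]  # approved applicants
a_prime = nearest_counterfactual(a, pool)
print(a_prime)  # (0.6, 0.7): the smallest change that flips the decision

The differences between a and a′ then constitute the explanation – and, where they involve protected attributes, the evidence of bias.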

The interventionist theory of causation builds upon Lewis's counterfactual account, although arguably its roots are more directly traced to foundational work in experimental design (Fisher, 1935) and econometrics (Haavelmo, 1944), as well as more recent research in computer science (Pearl, 2000). The most vocal proponent of interventionism in contemporary philosophy is Woodward (2003, 2008, 2010, 2015), who articulates what he calls a "minimal" theory of explanation as follows (2003, p. 203). Let E stand for some explanandum assigning a particular value y to an output variable Y. Let L be some generalization relating inputs X to outputs Y. Then explanans M is explanatory with respect to E if and only if the following conditions are met:

(i) M and E are true, or at least approximately so.
(ii) According to L, Y = y under an intervention that sets X to x.
(iii) There is some intervention that changes the value of X from x to x′, where x ≠ x′, with L correctly describing the value y′, where y ≠ y′, that Y would assume under this intervention.

Woodward emphasizes that, whereas Lewis aspires to reduce causality to relations of counterfactual dependence, he has no such reductionist ambitions himself. In fact, he is "skeptical that any reductive account will turn out to be adequate" (2003, p. 20). Just as Hempel sought to avoid any direct mention of causality by tying explanation to law-like regularities that could be defended on empiricist grounds, so Lewis attempts to explain events through counterfactuals that serve as primitives within his account. Both philosophers inherit a Humean skepticism with respect to causality, fearing that anything less would amount to question begging. Woodward objects that Lewis's notion of similarity metrics over worlds requires some prior concept of causality to render accurate judgments on real-world cases regardless. Meanwhile, the circularity inherent in his own interventionism, Woodward argues, is more illuminating than vicious. A final distinction of note is that, whereas Lewis is focused exclusively on causal tokens (i.e., local explanations), Woodward expands his analysis to include causal types (i.e., global explanations). Briggs (2012) and Fine (2012) examine several instances where Lewis's and Woodward's theories diverge, and in all cases favor the interventionist model over the counterfactual account.

Whatever their differences, it is important to acknowledge how both theories improve upon the DN model. Specifically, they address sufficiency objections originally raised by Salmon (1971), who observes that DN-compliant explanations cannot distinguish between relevant and irrelevant regularities. To rephrase the point in slogan form: correlation does not imply causation. The mere fact that males who take birth control pills regularly fail to get pregnant does not by itself entail some law of nature. It merely suggests a certain hypothesis that must be tested by disentangling the two universals it conjoins. This is the logic behind randomized control trials (RCTs), the gold standard of causal inference, which attempt to resolve potential confounding effects by assigning treatment conditions uniformly throughout the target population.
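Conditions (ii) and (iii) can be read off directly from a toy structural model. In the sketch below (ours; the structural equation is invented), intervening on X changes Y exactly as the generalization L describes, whereas intervening on a merely correlated variable Z leaves Y untouched – Salmon's relevance criterion in miniature:

def structural_model(x):
    # L: the generalization relating X to Y, here Y = 2X + 1.
    return 2 * x + 1

def do_x(x_value):
    # Intervene on X: set it exogenously and read off Y.
    return structural_model(x_value)

def do_z(z_value, x=3):
    # Z may be correlated with Y, but it plays no causal role:
    # setting Z leaves Y unchanged.
    return structural_model(x)

print(do_x(3), do_x(5))    # 7 11 -> changing X changes Y (condition iii)
print(do_z(0), do_z(100))  # 7 7  -> Z is causally irrelevant to Y

The same schema applies, in principle, to any trained ML model: perturb an input and observe the change in the output.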

For all interventionism’s merits, Woodward’s account fails to accommodate most cases of interest in XAI. To see why, consider that every supervised learning model is by definition a mapping of inputs X to outputs Y. It may be applied to both actual feature vectors (i.e., true observations) or counterfactual feature vectors (i.e., perturbed datapoints). But though this setup meets all of Woodward’s criteria, we do not judge some particular prediction to be explained just by pointing to the model parameters L and the output y when L is not itself intelligible. Such an explanation may be maximally accurate, but it is far too complex to be of any use. An opaque supervised learning algorithm cannot provide an explanation of itself. The upshot of this critique is that some further constraints on L are required before we regard its dicta as explanatory. Raw instrumentalism was found lacking in Sect. 2.3; it fares no better here. Without some understanding of the mechanisms that L instantiates – its operating logic – we cannot predict outcomes on future points or envision results of individual perturbations. Explanations must run deeper.

3.3 Epistemological Pragmatism

Lurking in the background of this critical exegesis has been a collection of related objections and proposals that could all be said to fly under the banner of epistemological pragmatism. Twentieth-century pragmatic accounts of scientific explanation are numerous and varied (see, e.g., Achinstein, 1983; Bromberger, 1966; Scriven, 1962), but perhaps no one crystallizes their collective thrust so neatly as van Fraassen:

The discussion of explanation went wrong at the very beginning when explanation was conceived of as a relation like description: a relation between a theory and a fact. Really, it is a three-term relation between theory, fact, and context. No wonder that no single relation between theory and fact ever managed to fit more than a few examples! Being an explanation is essentially relative for an explanation is an answer…it is evaluated vis-à-vis a question, which is a request for information. But exactly…what is requested differs from context to context. (1980, p. 156)

This approach marks a radical departure from all previously considered accounts. The DN, counterfactual, and interventionist models may differ in the particulars, but they all share the goal of enumerating some objective criteria deemed necessary and sufficient for successful explanation. The pragmatist, by contrast, rejects this undertaking altogether. He starts from the simple, indisputable observation that explanations do not occur in a vacuum. Rather, they are the product of interactions between epistemic agents with certain beliefs and interests. Ignoring these contingencies does not result in deeper, more general models; it merely vitiates our true target, producing instead some formal theory with little to say about scientific practice. The contention that good explanations should be flexible with respect to their levels of abstraction – ranging in target from models to systems, and in resolution from local to global, as required – is at least partially motivated by pragmatic considerations. What is simple for some agents may be complex for others, depending on their capabilities, experience, background knowledge, and so on. A theory of explanation must be able to accommodate such variation without artificially imposing a one-size-fits-all solution. Similarly, the objection that Woodward's interventionism is not explanatory beyond a certain threshold of complexity is essentially a pragmatic one. Perhaps there is some abstract sense in which a neural network's behavior is only fully explained by listing the values of every parameter associated with every node located in every layer of the model – but this is not much help to the defendant who just wants to know why the algorithm has predicted that he is likely to reoffend.

Some of the most important and relevant contemporary work on epistemological pragmatism is due to Floridi (2011a, 2013, 2019), who places information at the center of all philosophical inquiry. Specifically, Floridi is interested in the semantic (rather than Shannon's [1948] syntactic) notion of information, which he defines as well-formed, meaningful, and veridical data (2011a). The ability to move clearly and smoothly between different levels of abstraction is not just a theoretical desideratum for Floridi, but a guiding methodological principle (2008a). There can be no theoretical analysis without first identifying the relevant observables and typed variables, for these both delimit the conceptual space and ground the semantics of all propositions, informative or otherwise. But a level of abstraction is not enough on its own. According to Floridi's (2011b) correctness theory of truth (CTT), semantic information can always be polarized into question/answer pairs – but only once we have specified a particular context, level of abstraction, and purpose (collectively labelled "CLP parameters"). These specifications ensure the epistemic relevance of information, which is a prerequisite for successful coordination between agents (Floridi, 2008b). An answer is true if and only if it saturates the question (verifying and validating it), thereby generating an adequate model of the target system. However, Floridi cautions that "Queries cannot acquire their specific meaning in isolation or independently of CLP parameters" (2011b, p. 155).

Floridi (2012) advances a unique theory of explanation that he calls the network theory of account (NTA). NTA builds upon the question/answer format envisioned by van Fraassen above and developed in detail as part of CTT. Since information is a strictly weaker concept than knowledge – only the latter can possibly admit of positive or negative introspection (Floridi, 2006), which means only the former is free of the otherwise insoluble Gettier problem (Floridi, 2004; Gettier, 1963) – we need some method of upgrading mere information into full-blown knowledge. Floridi's solution is to embed the information in a flow network with certain graph-theoretic properties. A source s and target t are connected by a number of directed edges through which information flows. t queries s with well-formed questions about some explanandum of interest – being careful to specify the relevant CLP parameters – while s sends truthful answers back in reply. Floridi shows that under certain reasonable assumptions, such a network will tend to elevate t's information into knowledge. Thus NTA avoids (but does not quite "resolve", since this is impossible) Gettier-type objections through reliabilist pragmatics, thereby providing necessary (but insufficient) conditions for successful explanation.
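
At the risk of oversimplifying, the question/answer dynamic behind NTA can be sketched as follows (a drastic simplification of Floridi's graph-theoretic construction; the agents, CLP parameters, and questions below are invented purely for illustration):

    # A toy rendering of NTA's question/answer exchange: a target agent t
    # repeatedly queries a source s about an explanandum, each question
    # indexed by its CLP parameters, and accumulates truthful answers into
    # an increasingly adequate model of the target system.

    # Hypothetical knowledge held by the source s, keyed by
    # (context, level_of_abstraction, purpose, question) tuples.
    source_s = {
        ("medicine", "patient", "treatment", "Is the tumour malignant?"): True,
        ("medicine", "patient", "treatment", "Has it metastasised?"): False,
    }

    def query(source, clp, question):
        """t queries s; s answers truthfully (None if the query is ill-posed)."""
        return source.get((*clp, question))

    model_of_target = {}
    clp = ("medicine", "patient", "treatment")
    for q in ("Is the tumour malignant?", "Has it metastasised?"):
        model_of_target[q] = query(source_s, clp, q)

    print(model_of_target)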


In a companion paper that explicitly acknowledges an intellectual debt to the American pragmatists, Floridi (2010) considers skeptical objections to his philosophy of information. He frames the problem in a possible worlds semantics, pointing out that Lewis is deliberately vague about how to define or quantify inter-world similarity. Floridi's strategy is to characterize each world by a unique Borel number, some (potentially infinite) string of 1's and 0's representing the answers to a fixed sequence of CLP-indexed yes-or-no questions. Given this setup, the Hamming distance – which simply counts the differences between two Boolean strings of equal length – offers a simple and effective way to measure similarity between worlds. Floridi then exploits the properties of this metric to prove that radical skepticism reduces to an innocuous redundancy, while moderate skepticism is actually beneficial, inasmuch as it promotes methodological rigor. This latter argument he attributes to Peirce, who emphasizes the social dimension of scientific inquiry, especially the communal commitment to advancing knowledge by attempting to falsify one another's theories.
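
To fix ideas, here is a minimal sketch of the distance measure Floridi borrows (the worlds and questions below are invented for illustration):

    # Each world is characterised by a Boolean string answering a fixed
    # sequence of CLP-indexed yes/no questions; the Hamming distance between
    # two such strings counts the questions on which the worlds disagree.

    def hamming(w1: str, w2: str) -> int:
        """Positions at which two equal-length Boolean strings differ."""
        if len(w1) != len(w2):
            raise ValueError("worlds must answer the same sequence of questions")
        return sum(a != b for a, b in zip(w1, w2))

    actual_world = "101101"   # answers to six hypothetical yes/no questions
    nearby_world = "101100"   # disagrees on one answer
    distant_world = "010010"  # disagrees on all six

    print(hamming(actual_world, nearby_world))   # 1
    print(hamming(actual_world, distant_world))  # 6
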
The upshot of Floridi's analysis is that explanation is essentially a process – not a deductive argument or a structural causal model. We may describe the sequence of questions and answers using formal tools, but the explanation itself is no formalism. It is a messy, dynamic, social interaction in which at least two agents iteratively trade information within some particular context, at some particular level of abstraction, and for some particular purpose. This insight is beginning to gain traction within the XAI community, as evidenced by a number of recent articles and conference papers explicitly endorsing pragmatic approaches to algorithmic explanation (Watson & Floridi, 2021; Miller, 2019; Mittelstadt et al., 2019; Murdoch et al., 2019; Páez, 2019). Tools implementing such proposals remain relatively rare, although this is beginning to change (Watson, 2022a; Watson et al., 2022).

3.4 Trust and Testing

One major motivation for the XAI project is to convince users that ML models are reliable or trustworthy. But how do we determine whether any method, computational or otherwise, is reliable? The question is a familiar one in analytic philosophy. One prominent answer comes from Goldman (1979), who led the vanguard of what Williams (2016) calls "the reliabilist revolution" in anglophone epistemology. Goldman's theory is simple (and decidedly pragmatic): a process is reliable if it has a historical track record of cognitive success. Performance over time can be boiled down to what Goldman calls a "truth ratio", i.e., the rate of true judgments among all those attributable to the process in question. High truth ratios are evidence of reliable methods. In a somewhat different context, Taddeo (2010a, b) defines trustworthiness along similar lines. She describes a model in which rational agents (human or artificial) evaluate one another's historical performance on particular tasks to calculate success ratios. An agent is deemed trustworthy with respect to a given task if and only if its success ratio over time exceeds some threshold value.
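
To make the track-record idea concrete, here is a toy rendering (ours, not a formalism either author provides) in which reliability is a ratio over a history of judgments and trustworthiness is that ratio measured against a stipulated threshold:

    # Goldman-style truth ratios and Taddeo-style success ratios, reduced
    # to their simplest form: count successes over a track record.

    def success_ratio(history: list[bool]) -> float:
        """Proportion of successful judgments in the agent's track record."""
        return sum(history) / len(history) if history else 0.0

    def is_trustworthy(history: list[bool], threshold: float = 0.9) -> bool:
        return success_ratio(history) > threshold

    track_record = [True] * 95 + [False] * 5  # 95 correct judgments of 100
    print(success_ratio(track_record))        # 0.95
    print(is_trustworthy(track_record))       # True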

Both accounts are intuitive, if somewhat idealized: they ignore statistical niceties such as base rates and effect sizes, to say nothing of the variable costs associated with different kinds of errors. (A false negative may be far more dangerous than a false positive in certain medical contexts, for example.) Yet Goldman and Taddeo are right to observe that reliability must be gradually earned and steadily maintained. In the context of ML, a new algorithm is always regarded with skepticism until it proves itself on unseen data. Even then, our confidence in the model is never more than a few mistakes away from being irreparably dashed.

Peirce was perhaps the first to describe this dynamic at play in the natural sciences. Systematic skepticism is elevated to an organizing principle by both Merton (1973) and Popper (1959), two towering intellectual figures widely credited with founding the modern sociology of science and philosophy of science, respectively. Merton enumerates four norms that collectively "comprise the ethos of modern science" (p. 270): universalism, communism, disinterestedness, and organized skepticism. Last but not least among these norms, organized skepticism "is both a methodological and an institutional mandate" (p. 277), he writes. The point is developed in considerable detail by Popper (1959), who argues that the demarcation criterion of science – what distinguishes it from all other modes of inquiry – is the empirical falsifiability of the theories it produces. According to his philosophy of falsificationism, science advances knowledge through an iterative procedure of conjectures and refutations with the formal structure of a modus tollens inference. This view, which is closely related to the DN model discussed in Sect. 3.1, can be explicated as follows. If L denotes a scientific theory, then we must be able to combine it with some initial conditions S to predict some empirical consequence(s) E:

(1) (S & L) → E

Presuming that S includes all causally relevant information,5 we may test L by checking whether it is in fact the case that E. If (2a) ~E, then we can logically infer (via modus tollens) that (3a) ~L. If, on the other hand, (2b) E, then we have failed to falsify L and can infer only that (3b) L has been corroborated.

5. This is a non-trivial assumption. According to the Duhem-Quine thesis, Popper's falsificationism fails precisely because it is impossible to design a test that isolates the effects of L. We can always salvage any theory, no matter how anomalous the observations E, provided we make sufficient amendments to the conjunct S, e.g., by adding auxiliary hypotheses. See Duhem (1954) and Quine (1951).


Popper goes to great pains to stress that corroboration is not to be confused with verification, for Hume has definitively shown that this is impossible. To take a famous example,6 let L be the universal statement "All swans are white", and let S = {s1, …, sn} be a set of n swans. Either at least one swan is not white (~E), in which case we may definitively reject L; or else they are all white (E), in which case we have not yet rejected L – but neither have we ruled out the possibility that swan sn+1 might be black. Thus, Popper reasons that universal statements are asymmetrically decidable – they can be falsified but never verified.

6. Technically, this example should be formalized in first-order logic to quantify predicates over sets. We stick with propositional logic here for consistency with previous sections and ease of presentation. The example is sufficiently simple and familiar that we doubt the ambiguity will lead to any confusion.
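
The asymmetry admits of a simple sketch (our illustration): a universal hypothesis is refuted by a single counterexample via modus tollens, but no finite run of confirming instances verifies it.

    # Testing "all swans are white": one counterexample falsifies; a
    # thousand white swans merely corroborate.

    def test_universal(hypothesis, observations):
        """Return 'falsified' on the first counterexample, else 'corroborated'."""
        for s in observations:
            if not hypothesis(s):
                return f"falsified by {s!r}"      # (2a) ~E, hence (3a) ~L
        return "corroborated (but not verified)"  # (2b) E, hence only (3b)

    all_swans_are_white = lambda swan: swan == "white"

    print(test_universal(all_swans_are_white, ["white"] * 1000))
    print(test_universal(all_swans_are_white, ["white"] * 999 + ["black"]))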

Numerous authors have pointed out major difficulties with Popper's account; see Hansson (2017) for a critical discussion. Perhaps most problematic is that strict falsificationism struggles to differentiate between theories that are more or less corroborated by the evidence. In later work, Popper (1963) would go on to adopt Tarski's (1983) formulation of the correspondence theory of truth, which provides a method for ranking theories by verisimilitude. Even Popper's most ardent acolytes generally judge this work unfavorably (Thornton, 2019), and Popper himself would come to disavow the undertaking (Popper, 1972). Other attempts to address the issue include Carnap's inductive logic (1950, 1952) and various Bayesian epistemologies (for a good overview, see Talbott, 2016). However, the most convincing and sophisticated resolution, we contend, is found in the error-statistical philosophy of Mayo (1996, 2018). Her notion of severe testing brings clarity and rigor to Popper's falsificationism while avoiding many of the traps that inevitably ensnare probabilist logics.

Mayo – who, coincidentally, also cites Peirce as a major source of philosophical inspiration – offers both weak and strong versions of her severity principle. We will focus on the strong form, which states that "We have evidence for a claim C just to the extent it survives a stringent scrutiny. If C passes a test that was highly capable of finding flaws or discrepancies from C, and yet none or few are found, then the passing result, x, is evidence for C" (2018, p. 14). The basic intuition behind this principle is that not all tests are created equal. To adapt Mayo's own example, suppose Jones weighs himself on digital and analogue scales prior to an extended vacation in Argentina. He also weighs a copy of Floridi's book The Philosophy of Information, which clocks in at exactly one pound. Upon his return from Argentina, where he has consumed prodigious volumes of beef, wine, and potatoes, he is disheartened (but not entirely surprised) to discover that both scales report a weight gain of approximately ten pounds on his part, while stubbornly insisting that Floridi's monograph remains exactly one pound. In this case, we can safely reject the hypothesis that Jones has lost weight in his travels. Moreover, we should and do have greater confidence in this conclusion under the scenario above than we would if the scales had been mutually inconsistent, or if Floridi's book had somehow gained a proportional amount of weight. The hypothesis that Jones gained weight has been not just corroborated, but severely tested.

More generally, Mayo argues that our justification for believing in any non-logical proposition – be it a hypothesis, explanation, or mere observation statement – is a function not just of the proposition itself, but of how severely it has been tested. The formal details of this proposal are beyond the scope of this chapter. Briefly, we evaluate the severity of a test by observing how likely it is to detect all and only true effects, i.e., by computing its expected rate of false positives and false negatives as a function of effect size. This represents an advance over Goldman's truth ratios, which do not differentiate between easy and hard judgments, and Taddeo's success ratios, which do not distinguish between simple and challenging tasks. Mayo adopts the Neyman-Pearson hypothesis testing framework to derive optimal decision procedures for a wide range of parametric examples, thereby reframing model selection as a statistical inference problem. The implications of this error-statistical reasoning for XAI are spelled out in detail by Watson (2022b), who argues that there can be no explanation without inference. If XAI tools fail to pass severe tests, then they can do little to promote greater trust in ML models.
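
The error-statistical intuition can be illustrated with a minimal power calculation (our example, not Mayo's own formalism) for a one-sided Neyman-Pearson z-test: a test's capability of detecting a discrepancy, and hence the severity with which a passing result probes its absence, grows with effect size and sample size.

    # For a one-sided z-test of H0: mu = 0 against mu > 0, power is the
    # probability of rejecting H0 when the true standardized effect is d.
    # Low power against a discrepancy means that "passing" the test is
    # weak evidence against that discrepancy.

    from scipy.stats import norm

    def power(effect_size: float, n: int, alpha: float = 0.05) -> float:
        """P(reject H0) when the true standardized effect equals effect_size."""
        z_crit = norm.ppf(1 - alpha)  # rejection cutoff under H0
        return 1 - norm.cdf(z_crit - effect_size * n ** 0.5)

    for d in (0.1, 0.3, 0.5):
        print(f"effect {d}: power = {power(d, n=50):.2f}")
    # Small effects are poorly probed at n = 50, so surviving the test
    # there would not count as a severe probe of their absence.
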
To conclude, it is worth spelling out how Floridi's levelism and Mayo's reliabilism relate to sociotechnical pragmatism as a research stance. The former encourages researchers and organizations seeking to "explain" the properties of specific ML systems to start with the end in mind. Who is the target audience of the explanation? What is that audience expected to do with the explanation? What information would they need in order to act accordingly? The latter provides a blueprint for assessing the quality of explanations by interrogating how well grounded a specific hypothesis is and how it fares against severe tests. Few XAI tools in use today operate according to the error-statistical logic outlined here, but they will need to if the goal is to promote greater trust in ML. Together, Floridi's levelism and Mayo's reliabilism form the basis for a pragmatist approach to validating ML models that is both socially reproducible and technically robust.

4 Conclusion

In this chapter, we have defended the claim that sociotechnical pragmatism constitutes a constructive and coherent stance that allows researchers who seek to identify and mitigate the risks associated with emerging technologies to navigate between the Scylla of dogmatism and the Charybdis of skepticism. As we have demonstrated, this stance has both epistemological and ethical components. In epistemology, pragmatism is the substitution of intersubjectivity for objectivity, where objectivity in this case refers to a supposed privileged relationship to something non-human, like God or "a view from nowhere". However, it should be stressed that pragmatism does not equal relativism. As Floridi (2011a) notes, every question is asked for a purpose, and, for that specific purpose, there is an appropriate level of abstraction at which a phenomenon can be described and understood.


In ethics, pragmatism is the attitude that what counts as morally acceptable is not an insight produced by something non-human, but simply a revisable cultural inheritance. Since we have no reason to believe that this inheritance should offer theoretical closure, we should not expect normative tensions to be overcome by technological innovation or solved by blanket bans on specific technological systems. Instead, the best we can do is to build an ethical infrastructure (Floridi, 2017) that emphasizes the role of agency and context when evaluating the prospective advantages and limitations of specific technologies and promotes trust through transparency and procedural regularity.

Even under ideal circumstances, however, there are limits on what good governance and ethically-aligned design can reasonably expect to achieve (Mökander, 2021). For example, Hardin's (1968) famous thought experiment regarding the tragedy of the commons illustrates how discourses at one level of abstraction (i.e., "the individual") can lead to undesirable consequences at another (i.e., "the environment"). Moreover, Hayek's (1973) classical distinction between cosmos and taxis reminds us that systems can (and often do) have emergent properties, i.e., properties that cannot be deduced from the system's parts. Because both technical and social systems display emergent properties, they are not only hard to understand, but also hard to control and govern. The major shortcoming of sociotechnical skepticism is a failure to account for this complexity. Put bluntly, some of the evils ascribed to ML systems in the CDS literature are found not only in human-centric (bureaucratic) information processing systems but in all instantiations of human organization. The lesson sociotechnical pragmatism offers is that if we want to solve specific problems, we also have to be specific in our problem formulation.

We opened this chapter with reference to the Luddite Rebellion of 1811. This historical event embodies many of the themes that are central not just to XAI but to technological advancement more generally. Though they are sometimes misrepresented as prototypical sociotechnical skeptics, the Luddites are perhaps more fairly remembered as a disadvantaged, disenfranchised group actively seeking redress. Destroying textile machinery was a (technological) means toward a (social) end – namely, greater wage security and workers' rights. Purely technological solutions – e.g., fully automating the textile industry (as sociotechnical dogmatists might have liked) or banning such automation outright (as sociotechnical skeptics might have preferred) – fundamentally miss the point. A pragmatic alternative more likely to satisfy the Luddites would be to redistribute the surplus value created by increased automation in an equitable fashion through progressive tax policy. We do not intend to suggest that such an undertaking is easy or straightforward – striking the ideal balance between free market dynamism and social welfare protections is arguably the defining struggle of modern capitalism – but there is little to gain by ignoring the trade-off altogether. Similarly, ML algorithms often force us to (re)evaluate our commitment to competing imperatives such as fairness, transparency, and accuracy. The solution, then as now, is to roll up our sleeves and do the work – no matter how unsavory the challenge. The parable of the Luddites holds lessons for the philosophy of explanation as well.
The end users of XAI are people – messy, imperfect creatures with particular preferences, beliefs, and abilities. Models of explanation that operate at logical, statistical, or physical levels under the monist assumptions of naïve realism are of no value to agents who cannot use or understand them. Efforts to assuage the Luddites through economic projections of GDP growth as a function of increased automation in nineteenth-century England would likely have been met with blank stares – or worse. Thankfully, twentieth-century philosophy and statistics provide a range of tools for rigorously specifying levels of abstraction, encoding agentive preferences, and quantifying uncertainty.

References

Achinstein, P. (1983). The nature of explanation. Oxford University Press.
Ananny, M., & Crawford, K. (2016). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645
Anderson, C. (2008). The end of theory: The data deluge makes the scientific method obsolete. Wired.
Angelino, E., Larus-Stone, N., Alabi, D., Seltzer, M., & Rudin, C. (2018). Learning certifiably optimal rule lists for categorical data. Journal of Machine Learning Research, 18(234), 1–78.
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias.
Aristotle. (1984). In J. Barnes (Ed.), The complete works of Aristotle. Princeton University Press.
Barocas, S., & Selbst, A. (2016). Big data's disparate impact. California Law Review, 104(1), 671–729.
Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and machine learning. fairmlbook.org
Beer, D. (2017). The social power of algorithms. Information Communication and Society, 20(1), 1–13. https://doi.org/10.1080/1369118X.2016.1216147
Berlin, I. (1997). The pursuit of an ideal. In H. Hardy & R. Hausheer (Eds.), The proper study of mankind: An anthology of essays. Pimlico.
Bijker, W. E., Hughes, T. P., & Pinch, T. (Eds.). (1987). The social construction of technology systems: New directions in the sociology and history of technology. The MIT Press.
Bimber, B. (1990). Karl Marx and the three faces of technological determinism. Social Studies of Science, 20(2), 333–351. https://doi.org/10.1177/030631290020002006
Bloor, D. (1976). Knowledge and social imagery. University of Chicago Press.
Bolukbasi, T., Chang, K.-W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in neural information processing systems.
Boyd, D., & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information Communication and Society, 15(5), 662–679. https://doi.org/10.1080/1369118X.2012.678878
Breiman, L. (2001). Statistical modeling: The two cultures (with comments and a rejoinder by the author). Statistical Science, 16(3), 199–231. https://doi.org/10.1214/ss/1009213726
Briggs, R. (2012). Interventionist counterfactuals. Philosophical Studies, 160(1), 139–166. https://doi.org/10.1007/s11098-012-9908-5
Bromberger, S. (1966). Why questions. In R. Colodny (Ed.), Mind and cosmos: Essays in contemporary science and philosophy. University of Pittsburgh Press.
Broussard, M. (2018). Artificial unintelligence: How computers misunderstand the world. The MIT Press.
Browning, M., & Arrigo, B. (2021). Stop and risk: Policing, data, and the digital age of discrimination. American Journal of Criminal Justice, 46(2), 298–316. https://doi.org/10.1007/s12103-020-09557-x
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In S. A. Friedler & C. Wilson (Eds.), Proceedings of the 1st conference on fairness, accountability and transparency (pp. 77–91). PMLR.
Burrell, J. (2016). How the machine 'thinks': Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 1–12. https://doi.org/10.1177/2053951715622512
Carnap, R. (1950). Logical foundations of probability. University of Chicago Press.
Carnap, R. (1952). The continuum of inductive methods. University of Chicago Press.
Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047
Crawford, K. (2021). The atlas of AI. Yale University Press.
Dafoe, A. (2015). On technological determinism: A typology, scope conditions, and a mechanism. Science, Technology, & Human Values, 40(6), 1047–1076. https://doi.org/10.1177/0162243915579283
Datta, A., Tschantz, M. C., & Datta, A. (2015). Automated experiments on ad privacy settings. Proceedings on Privacy Enhancing Technologies, 1, 92–112. https://doi.org/10.1515/popets-2015-0007
Dewey, J. (1999). In L. Hickman & T. Alexander (Eds.), The essential Dewey. Indiana University Press.
Diamandis, P., & Kotler, S. (2013). Abundance: The future is better than you think. Free Press.
Doshi-Velez, F., & Kortz, M. (2017). Accountability of AI under the law: The role of explanation. In Berkman Klein Center for Internet & Society.
Dowe, P. (2000). Physical causation. Cambridge University Press.
Du Sautoy, M. (2019). The creativity code: Art and innovation in the age of AI. Harvard University Press.
Duhem, P. (1954). In P. W. Wiener (Ed.), The aim and structure of physical theory. Princeton University Press.
Edwards, L., & Veale, M. (2017). Slave to the algorithm? Why a "right to explanation" is probably not the remedy you are looking for. Duke Law and Technology Review, 16(1), 18–84. https://doi.org/10.2139/ssrn.2972855
Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press.
Fine, K. (2012). Counterfactuals without possible worlds. The Journal of Philosophy, 109(3), 221–246.
Fisher, R. A. (1935). The design of experiments. Oliver & Boyd.
Floridi, L. (2004). On the logical unsolvability of the Gettier problem. Synthese, 142(1), 61–79. https://doi.org/10.1023/B:SYNT.0000047709.27594.c4
Floridi, L. (2006). The logic of being informed. Logique et Analyse, 49(196), 433–460.
Floridi, L. (2008a). The method of levels of abstraction. Minds and Machines, 18(3), 303–329.
Floridi, L. (2008b). Understanding epistemic relevance. Erkenntnis, 69(1), 69–92.
Floridi, L. (2010). Information, possible worlds and the cooptation of scepticism. Synthese, 175, 63–88. https://doi.org/10.1007/s11229-010-9736-0
Floridi, L. (2011a). A defence of constructionism: Philosophy as conceptual engineering. Metaphilosophy, 42(3), 282–304. https://doi.org/10.1111/j.1467-9973.2011.01693.x
Floridi, L. (2011b). Semantic information and the correctness theory of truth. Erkenntnis, 74(2), 147–175. https://doi.org/10.1007/s10670-010-9249-8
Floridi, L. (2012). Semantic information and the network theory of account. Synthese, 184(3), 431–454.
Floridi, L. (2013). The ethics of information. Oxford University Press.
Floridi, L. (2014). Open data, data protection, and group privacy. Philosophy & Technology, 27(1), 1–3. https://doi.org/10.1007/s13347-014-0157-8
Floridi, L. (2017). Infraethics – On the conditions of possibility of morality. Philosophy & Technology, 30(4), 391–394. https://doi.org/10.1007/s13347-017-0291-1
Floridi, L. (2019). The logic of information. Oxford University Press.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., et al. (2018). AI4People — An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
Floridi, L., & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160360. https://doi.org/10.1098/rsta.2016.0360
Frey, C. B. (2019). The technology trap: Capital, labor, and power in the age of automation. Princeton University Press.
Friedler, S. A., Scheidegger, C., & Venkatasubramanian, S. (2016). On the (im)possibility of fairness.
Gettier, E. L. (1963). Is justified true belief knowledge? Analysis, 23(6), 121–123. https://doi.org/10.2307/3326922
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. Boczkowski, & K. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–193). The MIT Press.
Goldman, A. (1979). What is justified belief? In G. S. Pappas (Ed.), Justification and knowledge (pp. 1–25). Reidel.
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.
Goodman, B., & Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a "right to explanation". AI Magazine, 38(3), 76–99. https://doi.org/10.1609/aimag.v38i3.2741
Greenwald, A. G., & Krieger, L. H. (2006). Implicit bias: Scientific foundations. California Law Review, 94(4), 945–967. https://doi.org/10.2307/20439056
Gross, N., Reed, I. A., & Winship, C. (Eds.). (2022). The new pragmatist sociology. Columbia University Press.
Grote, T., & Berens, P. (2020). On the ethics of algorithmic decision-making in healthcare. Journal of Medical Ethics, 46, 205–211. https://doi.org/10.1136/medethics-2019-105586
Haavelmo, T. (1944). The probability approach in econometrics. Econometrica, 12, 3–115. https://doi.org/10.2307/1906935
Habermas, J. (1981). Theory of communicative action (T. McCarthy, Trans.). Polity Press.
Hacking, I. (1983). Representing and intervening. Cambridge University Press.
Hanna, A., Denton, E., Smart, A., & Smith-Loud, J. (2020). Towards a critical race methodology in algorithmic fairness (pp. 501–512). Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. https://doi.org/10.1145/3351095.3372826
Hansson, S. O. (2017). Science and pseudo-science. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Summer 2017). Metaphysics Research Lab, Stanford University.
Hao, K. (2020, August 20). The UK exam debacle reminds us that algorithms can't fix broken systems. MIT Technology Review.
Hardin, G. (1968). The tragedy of the commons. Science, 162(3859), 1243–1248.
Hayek, F. A. (1973). Law, legislation and liberty: A new statement of the liberal principles of justice and political economy. Routledge.
Hempel, C. (1965). Aspects of scientific explanation and other essays in the philosophy of science. Free Press.
Hempel, C., & Oppenheim, P. (1948). Studies in the logic of explanation. Philosophy of Science, 15, 135–175.
Hey, T., Tansley, S., & Tolle, K. (Eds.). (2009). The fourth paradigm: Data-intensive scientific discovery. Microsoft Research.
HLEGAI. (2019). Ethics guidelines for trustworthy AI.
Hobsbawm, E. J. (1952). The machine breakers. Past & Present, 1(1), 57–70. https://doi.org/10.1093/past/1.1.57
Hoffmann, A. L. (2019). Where fairness fails: Data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society, 22(7), 900–915. https://doi.org/10.1080/1369118X.2019.1573912
Horkheimer, M., & Adorno, T. (1947). Dialectic of enlightenment (G. S. Noerr, Ed.; E. Jephcott, Trans.). Stanford University Press.
Iliadis, A., & Russo, F. (2016). Critical data studies: An introduction. Big Data & Society, 3(2), 1–16. https://doi.org/10.1177/2053951716674238
James, W. (1975). Pragmatism: A new name for some old ways of thinking. Harvard University Press.
Jones, S. E. (2006). Against technology: From the Luddites to neo-Luddism. Routledge.
Kearns, M., & Roth, A. (2019). The ethical algorithm: The science of socially aware algorithm design. Oxford University Press.
Kim, M., Reingold, O., & Rothblum, G. (2018). Fairness through computationally-bounded awareness. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, & R. Garnett (Eds.), Advances in neural information processing systems 31 (pp. 4842–4852). Curran Associates, Inc.
Kitcher, P. (1989). Explanatory unification and the causal structure of the world. In P. Kitcher & W. Salmon (Eds.), Scientific explanation (pp. 410–505). University of Minnesota Press.
Kleinberg, J., Lakkaraju, H., Leskovec, J., Ludwig, J., & Mullainathan, S. (2017a). Human decisions and machine predictions. The Quarterly Journal of Economics, 133(1), 237–293. https://doi.org/10.1093/qje/qjx032
Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017b). In C. H. Papadimitriou (Ed.), Inherent trade-offs in the fair determination of risk scores (pp. 43.1–43.23). 8th Innovations in Theoretical Computer Science Conference (ITCS 2017). https://doi.org/10.4230/LIPIcs.ITCS.2017.43
Kleinberg, J., Ludwig, J., Mullainathan, S., & Sunstein, C. R. (2018). Discrimination in the age of algorithms. Journal of Legal Analysis, 10, 113–174. https://doi.org/10.1093/jla/laz001
Kusner, M. J., Loftus, J., Russell, C., & Silva, R. (2017). Counterfactual fairness. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, & R. Garnett (Eds.), Advances in neural information processing systems (pp. 4066–4076). Curran Associates, Inc.
Latour, B., & Woolgar, S. (1979). Laboratory life: The construction of scientific facts. Princeton University Press.
Lee, M. S. A., Floridi, L., & Denev, A. (2021). Innovating with confidence: Embedding AI governance and fairness in a financial services risk management framework. In L. Floridi (Ed.), Ethics, governance, and policies in artificial intelligence (pp. 353–371). Springer. https://doi.org/10.1007/978-3-030-81907-1_20
Legg, C., & Hookway, C. (2019). Pragmatism. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2019). Metaphysics Research Lab, Stanford University.
Lessig, L. (2006). Code (2nd ed.). Basic Books.
Lewis, D. (1973a). Causation. Journal of Philosophy, 70, 556–567.
Lewis, D. (1973b). Counterfactuals. Blackwell.
Lewis, D. (1979). Counterfactual dependence and time's arrow. Noûs, 13(4), 455–476. https://doi.org/10.2307/2215339
Lewis, D. (1986). Philosophical papers, Volume II. Oxford University Press.
Lewis, D. (2000). Causation as influence. Journal of Philosophy, 97, 182–197.
Lockwood, B. (2017). Pareto efficiency. In The new Palgrave dictionary of economics (pp. 1–5). Palgrave Macmillan. https://doi.org/10.1057/978-1-349-95121-5_1823-2
Marx, K. (1990). Capital (B. Fowkes, Trans.). Penguin.
Marx, K. (1992). Capital (D. Fernbach, Trans.). Penguin.
Mayer-Schönberger, V., & Ramge, T. (2018). Reinventing capitalism in the age of big data. John Murray.
Mayo, D. G. (1996). Error and the growth of experimental knowledge. University of Chicago Press.
Mayo, D. (2018). Statistical inference as severe testing: How to get beyond the statistics wars. Cambridge University Press.
McQuillan, D. (2018). Data science as Machinic Neoplatonism. Philosophy & Technology, 31(2), 253–272. https://doi.org/10.1007/s13347-017-0273-3
Mendes, L. S., & Mattiuzzo, M. (2022). Algorithms and discrimination: The case of credit scoring in Brazil. In M. Albers & I. W. Sarlet (Eds.), Personality and data protection rights on the internet: Brazilian and German approaches (pp. 407–443). Springer. https://doi.org/10.1007/978-3-030-90331-2_17
Menzies, P., & Beebee, H. (2020). Counterfactual theories of causation. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2020). Metaphysics Research Lab, Stanford University.
Merton, R. (1973). The normative structure of science. In N. Storer (Ed.), The sociology of science: Theoretical and empirical investigations (pp. 267–278). University of Chicago Press.
Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38. https://doi.org/10.1016/j.artint.2018.07.007
Mittelstadt, B. (2017). From individual to group privacy in big data analytics. Philosophy & Technology, 30(4), 475–494. https://doi.org/10.1007/s13347-017-0253-7
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3, 205395171667967. https://doi.org/10.1177/2053951716679679
Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. Proceedings of FAT* '19: Conference on Fairness, Accountability, and Transparency. https://doi.org/10.1145/3287560.3287574
Mökander, J. (2021). On the limits of design: What are the conceptual constraints on designing artificial intelligence for social good? In J. Cowls & J. Morley (Eds.), The 2020 yearbook of the digital ethics lab (pp. 39–52). Springer. https://doi.org/10.1007/978-3-030-80083-3_5
Mökander, J., Axente, M., Casolari, F., & Floridi, L. (2022). Conformity assessments and post-market monitoring: A guide to the role of auditing in the proposed European AI regulation. Minds and Machines, 32(2), 241–268. https://doi.org/10.1007/s11023-021-09577-4
Mökander, J., Juneja, P., Watson, D. S., & Floridi, L. (2022). The US Algorithmic Accountability Act of 2022 vs. the EU Artificial Intelligence Act: What can they learn from each other? Minds and Machines, 32(4), 751–758. https://doi.org/10.1007/s11023-022-09612-y
Morris, J. W. (2015). Curation by code: Infomediaries and the data mining of taste. European Journal of Cultural Studies, 18(4–5), 446–463. https://doi.org/10.1177/1367549415577387
Murdoch, W. J., Singh, C., Kumbier, K., Abbasi-Asl, R., & Yu, B. (2019). Definitions, methods, and applications in interpretable machine learning. Proceedings of the National Academy of Sciences, 116(44), 22071–22080. https://doi.org/10.1073/pnas.1900654116
Narayanan, A. (2018). Tutorial: 21 fairness definitions and their politics. Retrieved April 8, 2020, from https://www.youtube.com/watch?v=jIXIuYdnyyk
Nasrabadi, N. (2014). Hyperspectral target detection: An overview of current and future challenges. IEEE Signal Processing Magazine, 31(1), 34–44. https://doi.org/10.1109/MSP.2013.2278992
Newman, N., Fletcher, R., Kalogeropoulos, A., & Nielsen, R. (2019). Reuters Institute digital news report 2019 (Vol. 2019). Reuters Institute for the Study of Journalism.
Noble, S. U. (2018). Algorithms of oppression. New York University Press.
O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
OECD. (2019). Recommendation of the council on artificial intelligence.
Páez, A. (2019). The pragmatic turn in explainable artificial intelligence (XAI). Minds and Machines, 29(3), 441–459. https://doi.org/10.1007/s11023-019-09502-w
Pasquale, F. (2015). The black box society. Harvard University Press. https://doi.org/10.4159/harvard.9780674736061
Pearl, J. (2000). Causality: Models, reasoning, and inference. Cambridge University Press.
Peirce, C. S. (1999). The essential Peirce (The Peirce Edition Project, Ed.). Indiana University Press.
Plato. (1997). In J. M. Cooper & D. S. Hutchison (Eds.), Plato: Complete works. Hackett.
Popper, K. (1959). The logic of scientific discovery. Routledge.
Popper, K. (1963). Conjectures and refutations: The growth of scientific knowledge. https://doi.org/10.2307/2412688
Popper, K. (1972). Objective knowledge: An evolutionary approach. Clarendon Press.
Prasad, M. (2021). Pragmatism as problem solving. Socius, 7, 2378023121993991. https://doi.org/10.1177/2378023121993991
Quine, W. V. O. (1951). Two dogmas of empiricism. The Philosophical Review, 60(1), 20–43.
Romano, Y., Barber, R. F., Sabatti, C., & Candès, E. J. (2019). With malice towards none: Assessing uncertainty via equalized coverage. Harvard Data Science Review.
Rorty, R. (2021). In E. Mendieta (Ed.), Pragmatism as anti-authoritarianism. Harvard University Press.
Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206–215. https://doi.org/10.1038/s42256-019-0048-x
Sale, K. (1996). Rebels against the future. Basic Books.
Salmon, W. (1971). Statistical explanation. In W. Salmon (Ed.), Statistical explanation and statistical relevance (pp. 29–87). University of Pittsburgh Press.
Salmon, W. (1984). Scientific explanation and the causal structure of the world. Princeton University Press.
Sánchez-Monedero, J., Dencik, L., & Edwards, L. (2020). What does it mean to "solve" the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems (pp. 458–468). Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. https://doi.org/10.1145/3351095.3372849
Schapire, R. E., & Freund, Y. (2012). Boosting: Foundations and algorithms. MIT Press.
Schroeder, R. (2007). Rethinking science, technology, and social change. Stanford University Press.
Scriven, M. (1962). Explanations, predictions, and laws. In H. Feigl & G. Maxwell (Eds.), Scientific explanation, space, and time (pp. 170–230). University of Minnesota Press.
Selbst, A., & Powles, J. (2017). Meaningful information and the right to explanation. International Data Privacy Law, 7(4), 233–242. https://doi.org/10.1007/s13347-017-0263-5
Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27(3), 379–423. https://doi.org/10.1002/j.1538-7305.1948.tb01338.x
Sharifi-Malvajerdi, S., Kearns, M., & Roth, A. (2019). Average individual fairness: Algorithms, generalization and experiments. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, & R. Garnett (Eds.), Advances in neural information processing systems 32 (pp. 8242–8251). Curran Associates, Inc.
Taddeo, M. (2010a). An information-based solution for the puzzle of testimony and trust. Social Epistemology, 24(4), 285–299. https://doi.org/10.1080/02691728.2010.521863
Taddeo, M. (2010b). Modelling trust in artificial agents, a first step toward the analysis of e-trust. Minds and Machines, 20(2), 243–257. https://doi.org/10.1007/s11023-010-9201-3
Taddeo, M. (2019). Three ethical challenges of applications of artificial intelligence in cybersecurity. Minds and Machines, 29(2), 187–191. https://doi.org/10.1007/s11023-019-09504-8
Taddeo, M., McCutcheon, T., & Floridi, L. (2019). Trusting artificial intelligence in cybersecurity is a double-edged sword. Nature Machine Intelligence, 1(12), 557–560. https://doi.org/10.1038/s42256-019-0109-1
Talbott, W. (2016). Bayesian epistemology. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Winter 2016). Metaphysics Research Lab, Stanford University.
Tarski, A. (1983). The concept of truth in formalized languages. In Logic, semantics, metamathematics (2nd ed., pp. 152–278). Hackett.
Thornton, S. (2019). Karl Popper. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Winter 2019). Metaphysics Research Lab, Stanford University.
Topol, E. J. (2019). High-performance medicine: The convergence of human and artificial intelligence. Nature Medicine, 25(1), 44–56. https://doi.org/10.1038/s41591-018-0300-7
Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., & Floridi, L. (2021). The ethics of algorithms: Key problems and solutions. AI & Society, 37, 215–230. https://doi.org/10.1007/s00146-021-01154-8
Turkle, S. (2017). Alone together: Why we expect more from technology and less from each other (2nd ed.). Basic Books.
Upadhyay, A., & Khandelwal, K. (2018). Applying artificial intelligence: Implications for recruitment. Strategic HR Review, 17(5), 255–258. https://doi.org/10.1108/SHR-07-2018-0051
Ustun, B., & Rudin, C. (2019). Learning optimized risk scores. Journal of Machine Learning Research, 20(150), 1–75.
van Fraassen, B. C. (1980). The scientific image. Oxford University Press.
Véliz, C. (2020). Privacy is power: Why and how you should take back control of your data. Penguin.
Wachter, S., & Mittelstadt, B. D. (2019). A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI. Columbia Business Law Review, 2, 443–493.
Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99.
Wachter, S., Mittelstadt, B., & Russell, C. (2018). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harvard Journal of Law and Technology, 31(2), 841–887.
Watson, D. S., & Floridi, L. (2021). The explanation game: A formal framework for interpretable machine learning. Synthese, 198(10), 9211–9242. https://doi.org/10.1007/s11229-020-02629-9
Watson, D. (2022a). Rational Shapley values (pp. 1083–1094). 2022 ACM Conference on Fairness, Accountability, and Transparency. https://doi.org/10.1145/3531146.3533170
Watson, D. S. (2022b). Conceptual challenges for interpretable machine learning. Synthese, 200(2), 65. https://doi.org/10.1007/s11229-022-03485-5
Watson, D. S., Gultchin, L., Taly, A., & Floridi, L. (2022). Local explanations via necessity and sufficiency: Unifying theory and practice. Minds and Machines, 32(1), 185–218. https://doi.org/10.1007/s11023-022-09598-7
Weber, M. (2002). The Protestant ethic and the spirit of capitalism (T. Parsons, Trans.). Routledge.
Whittlestone, J., Nyrup, R., Alexandrova, A., & Cave, S. (2019). The role and limits of principles in AI ethics: Towards a focus on tensions (pp. 195–200). Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. https://doi.org/10.1145/3306618.3314289
Williams, M. (2016). Internalism, reliabilism, and deontology. In B. McLaughlin & H. Kornblith (Eds.), Goldman and his critics (pp. 1–21). Wiley.
Woodward, J. (2003). Making things happen: A theory of causal explanation. Oxford University Press.
Woodward, J. (2008). Cause and explanation in psychiatry: An interventionist perspective. In K. Kendler & J. Parnas (Eds.), Philosophical issues in psychiatry (pp. 287–318). Johns Hopkins University Press.
Woodward, J. (2010). Causation in biology: Stability, specificity, and the choice of levels of explanation. Biology and Philosophy, 25(3), 287–318. https://doi.org/10.1007/s10539-010-9200-z
Woodward, J. (2015). Interventionism and causal exclusion. Philosophy and Phenomenological Research, 91(2), 303–347. https://doi.org/10.1111/phpr.12095
Woodward, J. (2019). Scientific explanation. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Winter 2019). Metaphysics Research Lab, Stanford University.
Završnik, A. (2019). Algorithmic justice: Algorithms and big data in criminal justice settings. European Journal of Criminology, 18, 623–642. https://doi.org/10.1177/1477370819876762
Zuboff, S. (2019). The age of surveillance capitalism. Profile Books.

Index

A
Adapting capabilities, 3
Algorithms, 18, 22, 26, 44, 132–146, 148, 151, 152, 154, 157
Artificial intelligence (AI), 2, 12, 17, 19, 22, 23, 39–41, 43–48, 94, 97, 98, 107, 114, 134, 135, 137, 138, 141
Autonomous artificial agents, 58, 72
Autonomous weapons systems (AWS), 2, 3, 58
Autonomy, 3, 20, 28, 92, 94, 119, 125, 137, 143

B
Bias, 24, 28, 44, 134, 135, 139, 142, 149
Brussels effect, 106

C
COVID-19, 3, 19, 118–129
Cyber conflict, 51–56, 82
Cyber warfare, 55, 82

D
Data privacy, 2, 13, 17, 20, 135
Defense, 108, 131–158
Definition, 2, 3, 12–17, 29, 30, 43, 87, 88, 92, 93, 107, 122, 140, 142, 143, 151
Digital age, 51, 54, 107, 108, 113, 114
Digital ethics, 1, 119, 125, 127
Digital innovation, 2
Digital sovereignty, 3, 55, 81–101, 106, 108
Digital technologies, 1, 2, 107, 108, 123, 132

E
English School of International Relations, 84
Epistemology, 133, 146, 147, 153, 155, 156
Ethics, 5, 7, 16, 29, 31, 44, 46, 85, 108, 119–121, 126, 127, 137, 147, 157
European Union (EU), 3, 15, 55, 83, 84, 91, 94–100, 106–112, 127, 134, 135, 139
Explainability, 135, 137, 138, 146

F
Fairness, 4, 26, 133, 135, 139–144, 146, 157

G
Global challenges, 43

H
Hobbes, T., 1, 7
Human control, 3
Human rights, 3, 90, 94, 98, 99, 109, 110, 118–129

I
Intellectual property (IP), 2, 39–43, 45–48, 95, 136, 139
International relations (IR), 2, 51–56, 81–88, 90, 91, 93, 101, 108
International society, 3, 81–101

L
Lethal autonomous weapons systems (LAWS), 60, 65

M
Moral evil, 1, 5–8

N
Natural evil, 1, 5–8
Nomos, 1, 5–8

P
Paideia, 1, 5–8
Platforms, 23, 25, 93, 100, 107, 110, 112, 113, 126, 137
Pragmatism, 4, 132–158
Public health, 3, 20, 27, 42, 110, 118–128

S
Smart cities, 2, 12–31
Socrates, 1, 7, 146
Sovereignty, 3, 46, 84, 87–93, 96, 97, 99, 106–114
Standard of civilization, 81–101
Strategic autonomy, 3, 106–114
Surveillance, 2, 12, 13, 17, 19–20, 28, 30, 82, 96, 126, 136, 137
Sustainable Development Goals (SDGs), 2, 39–44, 46, 48
Sustainable innovation, 2

T
Transparency, 4, 22, 44, 133, 138, 142, 146, 157

V
Vaccine passport, 3, 118–129
Values, 3, 12, 19, 26, 29, 85–88, 90, 93–100, 107, 108, 110, 111, 113, 114, 124–126, 128, 132, 139, 141, 145, 150, 152, 153, 157, 158

W
War, 1, 2, 6, 52–56, 83, 88, 90, 111, 112, 120, 132

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
F. Mazzi (ed.), The 2022 Yearbook of the Digital Governance Research Group, Digital Ethics Lab Yearbook, https://doi.org/10.1007/978-3-031-28678-0