Research Handbook on Law and Technology 1803921315, 9781803921310

This thorough and incisive Research Handbook reconstructs the scholarly discourses surrounding the field of law and technology.


English Pages 534 [535] Year 2023

Table of contents:
Contents
List of figures and tables
List of contributors
Acknowledgements
1 Introduction to the Research Handbook on Law and Technology • Olia Kanevskaia and Przemysław Pałka
PART I: FRAMEWORKS
2 Law, regulation, and technology: the bigger picture of good governance • Roger Brownsword
3 Legal responses to techlaw uncertainties • BJ Ard and Rebecca Crootof
4 What’s law got to do with IT: an analysis of techno-regulatory incoherence • Zachary Cooper and Arno R. Lodder
5 Formalising law, or the return of the Golem • Burkhard Schafer
6 How not to get bored, or some thoughts on the methodology of law and technology • Przemysław Pałka and Bartosz Brożek
7 Grounding computational ‘law’ in legal education and professional legal training • Mireille Hildebrandt
8 Hype and cultural imaginary in law and technology • Lachlan Robb and Kieran Tranter
PART II: BRANCHES
9 Technology, monopoly, and antitrust from a historical perspective • Ramsi A. Woodcock
10 When worlds collide: copyright law, technology, and legislative drama • Ewa Laskowska-Litak
11 EU consumer law and technology • Agnieszka Jabłonowska
12 Criminal law and technology • Sofie Royer and Rune Vanleeuw
13 Privacy at a crossroads • Artur Pericles Lima Monteiro
14 When computers say no: towards a legal response to algorithmic discrimination in Europe • Raphaële Xenidis
15 International human rights law in the digital age: perspectives from the UN human rights system • Claudia Victoria Ionita and Machiko Kanetake
16 Legal principles and technology at the intersection of energy, climate, and environmental law • Leonie Reins
PART III: PERSPECTIVES
17 Afro-centric law and technology discourse • Caroline B. Ncube and Thabiso R. Phiri
18 Incorporating digital development perspectives in international trade law • Binit Agarwal and Neha Mishra
19 Perspectives on digital constitutionalism • Francisco de Abreu Duarte, Giovanni De Gregorio and Angelo Jr Golia
20 The saga of copyrighted standards: a perspective on access to regulation • Olia Kanevskaia
21 The normative novelty of obligations in automated contracts • Helen Eenmaa
22 STS jurisprudence: exploring the intersection between science and technology studies and law • Kasper Hedegård Schiølin
23 An outsider’s view on law and technology • Hans-W. Micklitz
PART IV: CHALLENGES
24 Autonomous weapons • Magdalena Pacholska
25 Issues in robot law and policy • A. Michael Froomkin
26 Artificial intelligence and the law: can we and should we regulate AI systems? • Riikka Koulu, Suvi Sankari, Hanne Hirvonen and Tatjaana Heikkinen
27 Machine learning and law • Andrzej Porębski
28 Why we need to rethink procedural fairness for the digital age and how we should do it • Jed Meers, Simon Halliday and Joe Tomlinson
29 Patent law and economics: open issues in technology standards • Giuseppe Colangelo and Eleonora Pierucci
30 Blockchain and cryptocurrency • Dan Traficonte
Index

RESEARCH HANDBOOK ON LAW AND TECHNOLOGY

Research Handbook on Law and Technology

Edited by

Bartosz Brożek Full Professor, Faculty of Law and Administration, and Copernicus Center for Interdisciplinary Studies, Jagiellonian University, Krakow, Poland

Olia Kanevskaia Assistant Professor of European Public Law and Technology, Utrecht Centre for Regulation and Enforcement in Europe (RENFORCE), Department of International and EU Law, Faculty of Law, Economics and Governance, Utrecht University, the Netherlands

Przemysław Pałka Assistant Professor, Faculty of Law and Administration, Jagiellonian University, Krakow, Poland; Affiliated Fellow, Information Society Project, Yale Law School, USA

Cheltenham, UK · Northampton, MA, USA

© The Editors and Contributors Severally 2024

With the exception of any material published open access under a Creative Commons licence (see www.elgaronline.com), all rights are reserved and no part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical or photocopying, recording, or otherwise without the prior permission of the publisher.

Chapters 11, 17 and 27 are available for free as Open Access from the individual product page at www.elgaronline.com under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (https://creativecommons.org/licenses/by-nc-nd/4.0/) license.

Published by
Edward Elgar Publishing Limited
The Lypiatts
15 Lansdown Road
Cheltenham
Glos GL50 2JA
UK

Edward Elgar Publishing, Inc.
William Pratt House
9 Dewey Court
Northampton
Massachusetts 01060
USA

A catalogue record for this book is available from the British Library

Library of Congress Control Number: 2023947012

This book is available electronically in the Law subject collection
http://dx.doi.org/10.4337/9781803921327

ISBN 978 1 80392 131 0 (cased) ISBN 978 1 80392 132 7 (eBook)

Contents

List of figures and tables  viii
List of contributors  ix
Acknowledgements  xii

1 Introduction to the Research Handbook on Law and Technology • Olia Kanevskaia and Przemysław Pałka  1

PART I  FRAMEWORKS

2 Law, regulation, and technology: the bigger picture of good governance • Roger Brownsword  12
3 Legal responses to techlaw uncertainties • BJ Ard and Rebecca Crootof  28
4 What’s law got to do with IT: an analysis of techno-regulatory incoherence • Zachary Cooper and Arno R. Lodder  45
5 Formalising law, or the return of the Golem • Burkhard Schafer  59
6 How not to get bored, or some thoughts on the methodology of law and technology • Przemysław Pałka and Bartosz Brożek  82
7 Grounding computational ‘law’ in legal education and professional legal training • Mireille Hildebrandt  99
8 Hype and cultural imaginary in law and technology • Lachlan Robb and Kieran Tranter  128

PART II  BRANCHES

9 Technology, monopoly, and antitrust from a historical perspective • Ramsi A. Woodcock  142
10 When worlds collide: copyright law, technology, and legislative drama • Ewa Laskowska-Litak  160
11 EU consumer law and technology • Agnieszka Jabłonowska  174
12 Criminal law and technology • Sofie Royer and Rune Vanleeuw  190
13 Privacy at a crossroads • Artur Pericles Lima Monteiro  214
14 When computers say no: towards a legal response to algorithmic discrimination in Europe • Raphaële Xenidis  222
15 International human rights law in the digital age: perspectives from the UN human rights system • Claudia Victoria Ionita and Machiko Kanetake  235
16 Legal principles and technology at the intersection of energy, climate, and environmental law • Leonie Reins  254

PART III  PERSPECTIVES

17 Afro-centric law and technology discourse • Caroline B. Ncube and Thabiso R. Phiri  276
18 Incorporating digital development perspectives in international trade law • Binit Agarwal and Neha Mishra  296
19 Perspectives on digital constitutionalism • Francisco de Abreu Duarte, Giovanni De Gregorio and Angelo Jr Golia  315
20 The saga of copyrighted standards: a perspective on access to regulation • Olia Kanevskaia  330
21 The normative novelty of obligations in automated contracts • Helen Eenmaa  349
22 STS jurisprudence: exploring the intersection between science and technology studies and law • Kasper Hedegård Schiølin  365
23 An outsider’s view on law and technology • Hans-W. Micklitz  379

PART IV  CHALLENGES

24 Autonomous weapons • Magdalena Pacholska  392
25 Issues in robot law and policy • A. Michael Froomkin  408
26 Artificial intelligence and the law: can we and should we regulate AI systems? • Riikka Koulu, Suvi Sankari, Hanne Hirvonen and Tatjaana Heikkinen  427
27 Machine learning and law • Andrzej Porębski  450
28 Why we need to rethink procedural fairness for the digital age and how we should do it • Jed Meers, Simon Halliday and Joe Tomlinson  468
29 Patent law and economics: open issues in technology standards • Giuseppe Colangelo and Eleonora Pierucci  483
30 Blockchain and cryptocurrency • Dan Traficonte  497

Index  512

Figures and tables

FIGURES
19.1  Three perspectives on digital constitutionalism  317
24.1  SEQ  396

TABLES
16.1  Inexhaustive taxonomy of legal principles in environmental, climate, and energy law  264
16.2  Direct references to (the use of) technology, technological capacity, and technical assistance in the SDGs  268
28.1  Question on applying to a public body  474
28.2  Question on the use of digital technology  475
28.3  Questions on the fairness of the use of digital technology  475
28.4  Perception that digital technology made the handling of their case fairer  477
28.5  Chi-Square test of association between the use of digital technology and whether the decision was biased  479
28.6  Chi-Square test of association between the use of digital technology and being treated with dignity and respect  479
28.7  Chi-Square test of association between the use of digital technology and the opportunity to present the full facts of their case  480

Contributors

Francisco de Abreu Duarte, PhD Researcher, European University Institute
Binit Agarwal, LLM candidate, European Masters in Law and Economics, University of Vienna
BJ Ard, Associate Professor of Law, University of Wisconsin Law School
Roger Brownsword, Professor of Law, King’s College London and Bournemouth University
Bartosz Brożek, Full Professor, Faculty of Law and Administration, and Copernicus Center for Interdisciplinary Studies, Jagiellonian University, Krakow
Giuseppe Colangelo, Jean Monnet Professor of European Innovation Policy and an Associate Professor of Law and Economics, University of Basilicata
Zachary Cooper, Research Scholar in Emergent Technologies, Vrije Universiteit Amsterdam
Rebecca Crootof, Associate Professor of Law, University of Richmond School of Law
Helen Eenmaa, Researcher in Information Technology Law, University of Tartu School of Law, Estonia
A. Michael Froomkin, Laurie Silvers and Mitchell Rubenstein Distinguished Professor of Law at the University of Miami; Fellow, Yale Information Society Project; Member, Miami Center for Computational Science
Angelo Jr Golia, Assistant Professor of Constitutional Law and Comparative Public Law, University of Trento School of Law
Giovanni De Gregorio, PLMJ Chair in Law and Technology at Católica Global School of Law and Católica Lisbon School of Law, Universidade Católica Portuguesa
Simon Halliday, Professor of Socio-Legal Studies, University of Strathclyde
Kasper Hedegård Schiølin, Assistant Professor, Digital Design and Information Studies, Aarhus University
Tatjaana Heikkinen, LLM, University of Helsinki
Mireille Hildebrandt, Full Professor of Smart Environments, Data Protection and the Rule of Law, Radboud University Nijmegen, and Research Professor of ‘Interfacing Law and Technology’, Vrije Universiteit Brussel
Hanne Hirvonen, Doctoral Researcher, University of Helsinki
Claudia Victoria Ionita, Legal Research Master student, Utrecht University
Agnieszka Jabłonowska, Postdoctoral Researcher, Institute of Private Law, Leiden University
Machiko Kanetake, Associate Professor of Public International Law, Utrecht University
Olia Kanevskaia, Assistant Professor of European Public Law and Technology, Utrecht Centre for Regulation and Enforcement in Europe (RENFORCE), Department of International and EU Law, Faculty of Law, Economics and Governance, Utrecht University
Riikka Koulu, Associate Professor (Social and Legal Implications of AI), Faculties of Law and Social Sciences, Director of the University of Helsinki Legal Tech Lab
Ewa Laskowska-Litak, Assistant Professor (Adjunct), Jagiellonian University; Future Law Lab
Arno R. Lodder, Professor of Internet Governance and Regulation, Vrije Universiteit Amsterdam
Jed Meers, Senior Lecturer in Law, University of York
Hans-W. Micklitz, Professor of Economic Law, Robert Schuman Center for Advanced Studies, European University Institute
Neha Mishra, Assistant Professor in International Law, Graduate Institute, Geneva
Artur Pericles Lima Monteiro, Lecturer in Global Affairs and Schmidt Visiting Scholar, Yale Jackson School of Global Affairs; Resident Fellow, Information Society Project, Yale Law School; Affiliated Researcher, Constitution, Law & Politics Research Group, University of São Paulo
Caroline B. Ncube, Professor of Law, DSI-NRF SARChI Research Chair: Intellectual Property, Innovation and Development, University of Cape Town
Magdalena Pacholska, Marie Sklodowska-Curie Postdoctoral Fellow, T.M.C. Asser Instituut, University of Amsterdam
Przemysław Pałka, Assistant Professor, Faculty of Law and Administration, Jagiellonian University, Krakow; Affiliated Fellow at Yale’s Information Society Project
Thabiso R. Phiri, Research Assistant, DSI-NRF SARChI Research Chair: IP, Innovation and Development, Faculty of Law, University of Cape Town
Eleonora Pierucci, Associate Professor of Applied Economics, Roma Tre University
Andrzej Porębski, Researcher, Jagiellonian University
Leonie Reins, Professor of Public Law and Sustainability, Erasmus University Rotterdam
Lachlan Robb, Lecturer, Queensland University of Technology School of Law
Sofie Royer, Research Expert, KU Leuven Center for IT and IP Law and guest professor, UAntwerpen and ULiège
Suvi Sankari, Deputy Director and Research Coordinator, University of Helsinki Legal Tech Lab
Burkhard Schafer, Professor of Computational Legal Theory at Edinburgh Law School
Joe Tomlinson, Professor of Public Law at the University of York
Dan Traficonte, Associate Professor of Law at Syracuse University College of Law
Kieran Tranter, Professor of Law, Queensland University of Technology School of Law
Rune Vanleeuw, Researcher, KU Leuven Center for IT and IP Law
Ramsi A. Woodcock, Wyatt, Tarrant and Combs Associate Professor of Law, University of Kentucky Rosenberg College of Law; Secondary Appointment, Department of Management, University of Kentucky Gatton College of Business and Economics
Raphaële Xenidis, Assistant Professor in European Law at Sciences Po Law School

Acknowledgements

The publication of this Research Handbook would not have been possible without the help of many people and institutions. As editors, we want to hereby acknowledge and thank them.

We thank the dozens of reviewers and commentators who read the early drafts of the Handbook’s chapters, ensuring their quality and helping the authors make their arguments even sharper. We also thank all the contributors to the Research Handbook for their hard work, patience with our suggestions and requests, cooperation, and good spirits.

We thank all the colleagues and friends who advised us throughout the preparation of the Handbook, in particular Filipe Brito Bastos, Nikolas Guggenberger, William Janssen, Mira Scholten, and Thomas Streinz. Your support, suggestions, and encouragement were priceless.

We thank the entire editorial team of Edward Elgar Ltd., in particular Laura Mann, Amber Watts, and Emily White. You made the publishing process smooth and pleasant, and helped transform the Word files into this beautiful book.

The preparation of the chapters for submission, as it pertains to ensuring the consistency of the references’ style, has been supported by a grant from the Faculty of Law and Administration under the “Strategic Programme: Excellence Initiative” at the Jagiellonian University. The work of Bartosz Brożek was conducted in the framework of the project “The Legal Imagination,” financed by the Polish National Science Center (Grant no. 2021/43/B/HS5/01509). The work of Przemysław Pałka has been generously funded by Norway Grants, for whose support he is very grateful. The formal acknowledgement reads: The research leading to these results has received funding from the Norwegian Financial Mechanism 2014-2021, project no. 2020/37/K/HS5/02769, titled “Private Law of Data: Concepts, Practices, Principles and Politics.”

1. Introduction to the Research Handbook on Law and Technology Olia Kanevskaia and Przemysław Pałka

1. LAW AND TECHNOLOGY: A BIRD’S-EYE VIEW

Law and technology1 is in flux. Over the last decade, the “field” has gone from being a niche endeavor to establishing itself as one of the central discourses in the legal scholarly mainstream. From a growing body of literature interrogating the relationship between law and technologies, through new (and now well-established) research centers2 and journals3 bringing the communities together, to LLM programs aiming to educate future tech law professionals, law and technology is expanding, maturing, and evolving. The “field” no longer seems temporary or emerging (if it ever was). And, as it develops, it simultaneously becomes richer and more challenging to navigate.

The interest in law and technology is hardly surprising. Technology mediates ever more daily activities, from shopping and entertainment to communication, dating, or exercising. At the same time, socio-technological reality continues to transform and provide themes for reflection. Artificial intelligence, cryptocurrency bubbles, and concerns associated with privacy and data protection capture the imagination of policymakers, civil society, and (legal) scholars. Digitalization and automation of the market and the government challenge the law’s assumptions about what is possible, shifting the power dynamics between different actors. Such changes, real or perceived, trigger lawyers’ intuitions about opportunities and threats, expressed in scholarship, popular media, and policy briefs. In some jurisdictions, like the European Union, these changes have already translated into intense legislative efforts aimed at “mitigating the risks without impeding the benefits” of technology.4

Legal reflection on technology is not a new phenomenon. As demonstrated by several contributions to this Research Handbook, some branches of law, including traffic, antitrust, intellectual property, consumer law, or privacy, emerged largely due to socio-technological changes, and have continued to transform in relation to them (see, e.g., the chapters by Jabłonowska, Laskowska-Litak, Micklitz, Pericles, Robb, and Woodcock in this Research Handbook). Legal education and legal practice have always been predicated upon a given state of technology (see the chapter by Hildebrandt in this Research Handbook). And even regarding the “emerging” digital technologies like software, the internet, robots, and artificial intelligence, serious legal research and reflection have been ongoing at least since the 1990s (see the influential works of Jack Balkin (1995), Julie Cohen (1995, 1999), Lawrence Lessig (1999a, 1999b), Michael Froomkin (1996, 1999), Giovanni Sartor (1993), and many others).

Some things, however, are changing. Gradually, the predominant feeling has shifted from excitement to caution or even fear. For example, social media – nowadays blamed for polarization, manipulation, and intrusions upon privacy (Hacker, 2021; Susser et al., 2019; Zuboff, 2019) – were hailed by some as facilitators of civic engagement and democracy only a decade ago (Gil de Zúñiga et al., 2012; Tudoroiu, 2014; Warren et al., 2014). Related to that, the regulation of digital technologies, once seen as impossible (due to the internet’s transnational character) or undesirable (as potentially stifling innovation), nowadays presents itself as necessary or unavoidable. Moreover, the omnipresence of digitalization and automation makes clear that even the traditional, “black letter” fields of legal scholarship – e.g., contract, criminal, administrative, and constitutional law – need to account for these socio-technological changes. The fast pace of technological advancement contributes to the feeling of uncertainty and urgency, both driving and shaping the contributions to law and technology.

Yet even an account like this – comforting in the order it attempts to bring – is a gross simplification. Unlike many other fields of legal research, mapping the history, or the general contours, of law and technology as a separate or coherent academic “discipline” is close to impossible. Law and technology is not one community of discourse; neither is it one “field,” nor one “approach.” Furthermore, and unlike many other “fields,” law and technology does not have an established methodology or a central normativity. Moreover, law and technology can hardly be seen as a separate branch of law (Cockfield, 2003), akin to health law or environmental law. Technology is entrenched in numerous legal aspects, and legal scholars have been tapping into the most salient issues related to the development and application of technologies from different angles. Rather, law and technology, in its totality, is multifaceted.

Next to the divergence in methods and normativities, one should be mindful of the evident geographical divide in the development of law and technology scholarship. While the discourse seems to have been largely pioneered by scholars already in a position of privilege, i.e., those writing in and on the Global North (and, in particular, the Anglo-Saxon traditions), literature on law and technology in Asian and African countries has recently been growing in prominence and impact.

Consequently, it is not surprising that academic research falling under the umbrella term “law and technology” is so diverse. Among many themes and approaches, one may attempt to rethink fundamental questions about the relationship between various technologies and the law; engage in discussing how and when the emerging technologies should be regulated, if at all; explore how certain legal tasks, or even legal reasoning at large, could be automated; interrogate what structural social problems technology exacerbates and what it makes salient; examine how technologies challenge the regulatory landscape of such established fields as constitutional, administrative, or criminal law; or ask how technology takes on the role of the regulator and what consequences this brings for the legitimacy of rule-making. The list of types of legal questions pertaining to technological advancement is non-exhaustive and constantly evolving. What is generally termed “law and technology” has thus become an amalgam of different projects, schools of thought, and communities of discourse.

All this makes law and technology a thrilling endeavor: the objects of study, be it socio-technical practices or the laws governing them, transform incredibly fast. The methods of research are often themselves an opportunity for experimentation. This excitement, however, comes with many traps. Given the fast pace of technological development, some studies might lose relevance, or even become obsolete, in less than a couple of years. Certain projects risk being termed “hypes” by the scholarly community, raising the question of whether engaging with the questions they pose, and the answers they seek to offer, is truly worth an academic effort (though, as Robb and Tranter argue in this Research Handbook, hype might not necessarily be a bad thing, and is sometimes even necessary to foster lawyers’ thinking on technological advancement). Navigating the ever-changing scholarly discourses, continuously producing papers about laws and technologies, themselves in constant flux, is simultaneously gratifying and difficult, stimulating and stressful, liberating and constraining.

Doing research in law and technology is hard, but it is also rewarding. If there is one thing that brings law and technology together, it is the authors’ urge to look beyond the legal text, onto the social world, and focus on what the current, or future, state of technical development renders (im)possible. What the next step could or should be, however, remains part of the challenge. This Research Handbook does not aim to give a definite answer on how research on law and technology should be conducted or what areas are worth studying, let alone what law and technology means. Neither does it aim to provide any definite categorization of concepts relevant to the study of law and technology. Instead, by including diverse contributions from leading and emerging scholars from around the world, this volume offers a meta-reflection on various approaches for studying law and technology, inviting the readers to critically engage with the current state of the art and aiming to serve as a reference point for those embarking on an academic journey in this fascinating area.

1  We use the term “law and technology” broadly, when referring to a variety of scholarly discourses concerning the intersections of law and technology, so as to distinguish the object of analysis from specific laws and specific technologies themselves. Different authors use different concepts, sometimes similar and sometimes different in meaning. For the purpose of this chapter, “law and technology” also includes such concepts as technology law, technology and law, legal tech, and techlaw, which are addressed and unpacked in some of the contributions.
2  A non-exhaustive list includes the International Legal Technology Association, founded in 1980 (https://www.iltanet.org/home); the Centre for IT & IP Law at KU Leuven, founded in 1988/1990 (https://www.law.kuleuven.be/citip/); the Berkman Klein Center for Internet & Society, founded in 1996 (https://cyber.harvard.edu/); the Information Society Project of the Yale Law School, founded in 1997 (https://law.yale.edu/isp); CodeX: the Stanford Center for Legal Informatics, founded in 2006 (https://law.stanford.edu/codex-the-stanford-center-for-legal-informatics/); the Transatlantic Technology Law Forum, established in 2004 by the Stanford Law School Program in Law, Science and Technology and the University of Vienna School of Law (https://law.stanford.edu/transatlantic-technology-law-forum/); and the Centre for Technology, Robotics, Artificial Intelligence & the Law of the National University of Singapore, launched in 2019 (https://law.nus.edu.sg/trail/about-us/).
3  E.g., the Harvard Journal of Law and Technology (https://jolt.law.harvard.edu/), the Berkeley Technology Law Journal (https://btlj.org/), the Stanford Technology Law Review, the International Journal of Law and Information Technology, the European Journal of Law and Technology (https://ejlt.org/index.php/ejlt), Technology and Regulation (https://techreg.org/), and Artificial Intelligence and Law (https://www.springer.com/journal/10506).
4  Examples mostly discussed in the literature at the moment of writing include the proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021) 206 final; Regulation (EU) 2022/1925 of the European Parliament and of the Council of September 14, 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) [2022] OJ L265/1; and Regulation (EU) 2022/2065 of the European Parliament and of the Council of October 19, 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act) [2022] OJ L277/1.

2. THE FOCUS OF THE RESEARCH HANDBOOK

While introducing different conceptual and methodological treatments of the broader law and technology discourses, the contributions of this Research Handbook are guided by an overarching question: What does one need to know when embarking on research in law and technology? In a broader sense, the contributions analyze, through different lenses, how law and technology influence each other in the current digitalized society, how this influence has been, or could be, studied, and how one could understand both the relationships between law and technology and the scholarly attempts to comprehend and influence them. In doing so, the Research Handbook intends to serve as a reference point for scholars researching various aspects of law and technology, and to be used as a tool assisting in the review of literature and relevant legislation while also offering a selection of methodological choices. As the literature is often abundant, and various discussions have been going on for years, this volume offers a way to “catch up” with the existing discourses, questions asked, answers proposed, and intuitions tested.

Furthermore, the Research Handbook invites the readers to remain critical when engaging with law and technology questions and approaches to answering them, by bringing together the champions and the new voices in the law and technology discourses, coming from different scholarly traditions and backgrounds. As in any area of scholarly reflection, law and technology may sometimes become path dependent in its subjects, methods, and assumptions. There is a lot to learn from the existing scholarship, but there is also a lot to question. In this sense, the Research Handbook is an invitation to challenge the existing paradigms.

Finally, the Research Handbook offers a “snapshot” of the state of the art in 2023, something akin to “reports from the research frontiers” in these tumultuous times. Which of its insights will turn out to be timeless or prescient, and which will seem naïve or misguided in a decade? How will the scholarly community look back at this time in five, ten, or twenty years? Although it is impossible to know the future, the Research Handbook attempts to situate law and technology research in history, looking back at the lessons of the past and forward to the challenges awaiting. Current laws and technologies are the result of ideas promoted, ideologies crafted, and normative choices made over the last several decades. It is prudent to bear that in mind, as only then will one come close to fully understanding the current reality, while also appreciating the gravity of the law and technology discourses presently taking place.

3. THE STRUCTURE OF THE RESEARCH HANDBOOK

The Research Handbook consists of 29 contributions that reflect different frameworks, views on branches of law, perspectives, and challenges from within law and technology research. Some contributions are theoretical and engage with specific concepts or legal and methodological problems. Others discuss concrete cases of the uses of particular technologies and how these fit into the current regulatory framework(s). Yet others scrutinize these problems from a normative perspective, pondering how these regulatory and legal frameworks should address the specific concerns arising from the use of technology.

3.1 Frameworks

Part I of the Research Handbook discusses various frameworks and fundamental conceptions illuminating the broader horizontal questions about the complex relationships between law(s) and technology(ies). This part inquires: What is the relationship between law and technology, and how could or should this relationship be studied? Next to discussing methodologies and central discourses in law and technology research, this part proposes some categorizations and invites the reader to critically rethink some of the concepts, approaches, and normative stances in law and technology.

This part commences with a critical re-assessment of the possible approaches to the regulation of emerging socio-technological challenges. Roger Brownsword distinguishes three different kinds of tech governance conversations, Law 1.0, 2.0, and 3.0, offering a compelling way to order various research projects in law and technology. Viewing the law as one option of governance, Brownsword looks at the question of technology regulation through the lens of good governance, emphasizing that considerations of legitimacy, global commons, and respect for fundamental community values should prevail over those of effectiveness. In turn, Rebecca Crootof and BJ Ard analyze “techlaw” as the attempt of various legal actors to resolve the uncertainties brought about by technological change. They review two general approaches, namely applying the existing laws to new situations by analogy and creating new laws or shifting institutional powers to address the arising challenges. Zachary Cooper and Arno Lodder propose yet another angle, and focus on the co-dependence of law and technologies, arguing that as the legal system entangled itself with the internet, it might paradoxically be less fit to regulate the emerging technologies based on fundamentally different architectures. They analyze how this process might unfold, and what consequences it might have, proposing to study the “regulatory multiverse” of not necessarily coherent architectures.

Turning to the methodological challenges arising while studying, teaching, and researching the different paths of law and technology, Burkhard Schafer challenges the computational vision of law. He argues that when studying the formalization or automatization of legal reasoning, one should focus on how specific formalization approaches affect conceptions of justice and the vision of a good legal system, and what ethical and legal implications are brought by different choices taken during the formalization processes. Przemysław Pałka and Bartosz Brożek proceed by highlighting the variety of methodological tools and approaches available to those embarking on research in law and technology. Noting that human minds, in general, gravitate toward cognitive conservatism, and are subject to peer pressure and various biases, they discuss how variation, caution, and intuition can assist researchers in formulating good research questions and offering innovative answers. Mireille Hildebrandt continues by discussing methods and approaches for education and research on computational technologies, proposing a hermeneutic approach, continuous adaptation, and a new understanding of the meaning of legal norms in changing settings. She cautions against naïve ways of looking at the relationship between technology and the law, inviting a critical approach to what technologies claim to be doing and actually do, as well as calling for a stronger commitment to the law’s fundamental values in research and teaching.

Part I concludes by addressing the notion of “hypes” in law and technology research. Even though “hype” might strike many as a notion with negative connotations, Lachlan Robb and Kieran Tranter offer a different take. They argue that law and technology is an inherently “hyped” discourse, posit that “hypes” as such might actually be needed for making more humane technological futures, and invite scholars to engage more with the cultural imaginary.

3.2 Branches

Part II of the Research Handbook maps how technology influences, and is or could be influenced by, selected branches of law. Looking at technologies at large, the chapters in this part focus on various elements of the legal system. While being helpful to researchers in the established legal discourses, this part, as a whole, seeks to reveal whether there are any convergences in the ways different branches of law deal with technological challenges.

The first three chapters engage with issues on the border of public and private law, situating the current developments in their historical perspectives. Ramsi Woodcock takes the readers on a journey through the development of antitrust in the United States, demonstrating what one can learn from the interactions between law and technologies in the past and how these lessons can be relevant in current litigation. He submits that technology creates dependence and is thus, first and foremost, about power. Taking the European perspective, Ewa Laskowska-Litak takes the reader through the history of EU copyright law, examining how the legal framework reacted to technological changes. She underscores that it is not the possession of a copy, but access to its content, that forms the central issue of the European copyright debate. She also questions the effectiveness of the regional approaches and the economic justifications for the legal solutions that have been dominating European copyright law. Staying in Europe, Agnieszka Jabłonowska examines the impact of digital technologies on EU consumer law, focusing in particular on the problem of exploitation through personalization and the division of responsibility in multi-party settings. She reveals how the core themes of EU consumer law – transparency and fairness – remain relevant in the platform economy.

Sofie Royer and Rune Vanleeuw open the discussion on human and fundamental rights by examining new types and iterations of crimes that arise from technologies’ developments, and scrutinizing whether the substantive and procedural criminal law regulating these offenses should be technology-neutral rather than technology-specific. They conclude that the established criminal law is not yet ready to deal with the new types of harm stemming from technologies, due to its inherent limits and conceptual shortcomings. Artur Pericles Lima Monteiro proceeds by discussing the right to informational privacy, revealing the theoretical disputes and conceptual shifts embodied by the “pragmatic turn,” as well as the potential limitations of the transition from privacy law to data governance. In the chapter that follows, Raphaële Xenidis revisits the application of EU anti-discrimination law to algorithmic discrimination. While examining the legal and practical shortcomings, she explains that to guarantee the protection of fundamental rights in the algorithmic society, EU non-discrimination law should be applied in a teleological and instrumental manner, bearing in mind the value-laden framings of socio-technical artifacts and articulating the normative equilibria underpinning existing legal constructs. Kanetake and Ionita take the discussion on fundamental rights to the global level. They analyze the role that the UN plays, and could play, in safeguarding human rights protection in the digital age, focusing on freedom of expression, non-discrimination, and the right to privacy. Continuing the global perspective, Leonie Reins analyzes the fields of environmental, climate, and energy law (“ECEL”), their principles and interactions, and implications for technology regulation. She notes that, paradoxically, technology is both the main contributor to, and mitigator of, climate change and pollution, and that technology can thus both threaten and strengthen sustainable development.

3.3 Perspectives

Part III introduces the different and specific perspectives in law and technology, and aims to emphasize voices, emerging views, and takes that are sometimes overlooked yet rapidly gaining prominence in the scholarship. It inquires how law and technology debates are shaped by perspectives that differ methodologically, conceptually, and geographically. The aim of the part is to reveal the inner diversity of the discourse and invite the readers to look at seemingly familiar problems through new lenses.

This part starts with perspectives on law and technology that go beyond the discourses shaped by the Global North. Introducing the Afro-centric perspective on law and technology, Caroline B. Ncube and Thabiso R. Phiri discuss the rich array of African contributions and argue that technology regulation needs to be carefully fine-tuned to the African context in order to ensure that the legal framework does not further entrench inequality and the digital divide. On top of masterfully reconstructing the arguments of African anglophone scholars, they bring the readers’ attention to the question of whose voices shape the discourse, who is traditionally heard, and who is too often ignored. In the chapter that follows, Binit Agarwal and Neha Mishra examine the digital divide from the perspective of global trade, highlighting the shortcomings of international trade law when dealing with the issues of digital commerce. Their chapter not only illuminates the problem of bias in the context of trade law itself, but also serves as a template for broadening focus when analyzing other areas of law.

The next three chapters address various changes brought by technologies that challenge the established legal orders, requiring new normative perspectives. Francisco de Abreu Duarte, Giovanni De Gregorio, and Angelo Golia illustrate how constitutional values and the public-private regulatory divide are reshaped through the use of emerging technologies, and how this regulatory shift can be understood and interpreted through the liberal, societal, and global perspectives on “digital constitutionalism.” Olia Kanevskaia proceeds by discussing how technical and technology standards are increasingly gaining legal force in the European Union and the United States, and how the legitimacy of these standards is challenged by their private ownership. In turn, Helen Eenmaa reviews how new technological constraints may normatively alter the nature of obligations of, and relations between, contract parties and, in a broader sense, how emerging technologies challenge the traditional doctrinal categories of private law.

The two following chapters aim to look at law and technology from outside the mainstream legal discourses. Restating the co-dependence between law and technology, Kasper Hedegård Schiølin sheds light on the issue of technology regulation from the Science, Technology, and Society (STS) perspective, challenging the established distinctions between positive and normative questions. He surveys the rich contributions of STS, and invites legal scholars to critically examine the assumptions they often tacitly make. In turn, Hans-Wolfgang Micklitz, drawing on his “latecomer’s” perspective, offers an “outsider’s” view on law and technology. He ponders what it means to be an outsider or an insider, and how the inner dynamics of different communities of discourse shape the content and the context of their contributions.

3.4 Challenges

Part IV of the Research Handbook focuses on newly arising challenges from particular technologies that need further concretization and potential conceptual or normative responses. The question this part, as a whole, seeks to ponder is: what kinds of challenges do the development and social adoption of technology bring to the law, and is there, or should there be, a common approach to addressing such emerging challenges?

The first four chapters of this part address technologies that heavily rely on automated decision-making. Magdalena Pacholska analyzes the use of autonomous weapons under the regime of international law, introducing the reader to this complex technical field and the heated scholarly debates. Looking at the problem of attribution of responsibility, she argues that it is the application of international law, rather than the law itself, that may produce flaws in addressing the problems stemming from the use of autonomous weapons. Michael Froomkin provides the US perspective on the legal questions pertaining to the broader field of robot technologies, discussing how the complex emerging law and policy issues change fundamental legal arrangements. He offers a tour de force of the debates ongoing for decades now, critically examining the intuitions tested over the years, as well as the challenges to come. Riikka Koulu, Suvi Sankari, Hanne Hirvonen, and Tatjaana Heikkinen address the needs, challenges, and possibilities of regulating artificial intelligence, taking as an example the recent AI proposal in the European Union. They situate the AI Act in the long-term perspective on technology regulation, and critically examine the promises and perils of the European Union’s current approach. Finally, Andrzej Porębski develops this further by looking at machine learning systems developed and used in the context of legal practice. He discusses two general problems that arise at the intersection of machine learning and law – incomprehensibility and the lack of transparency – and invites lawyers to partake in the debate on the creation of machine-learning-powered legal tech.

In the following chapter, Jed Meers, Simon Halliday, and Joe Tomlinson examine how we can ensure that the use of technologies in administrative decision-making is fair, emphasizing the importance of citizens’ perspectives and experiences. Drawing on their own empirical research, they demonstrate the need to supplant armchair legal scholarship with continuous reality checks. Giuseppe Colangelo and Eleonora Pierucci then discuss specific issues arising from the application of antitrust law and patent law to technologies that are essential for technological interoperability, providing an overview of recent developments in the regulatory and policy landscapes and case law. In the final chapter, Dan Traficonte reviews the most salient legal and regulatory issues of blockchain technologies, taking a sober look at how much this technology has actually affected social practices. He focuses in particular on securities and intellectual property laws, and suggests that increased regulation can remedy arising concerns of consumer protection and market manipulation.

4. CONCLUSION

The divergent contributions of this Research Handbook demonstrate that what is often termed the “field” of law and technology represents multiple communities, discourses, and schools of thought, sometimes loosely connected by an interest in deeper legal and societal questions pertaining to broadly understood “technologies.” Research on law and technology is inherently multidisciplinary and involves considerations of different perspectives. This plurality of approaches, perspectives, and methodologies is perhaps what sets it apart from other “law and …” discourses (see the chapter in this Research Handbook by Micklitz on this matter).

Due to this conceptual and methodological plenitude, answering the main questions that each part pursues is a challenging exercise, but it is worth a try. As it appears from the contributions in Part I (“Frameworks”), the marriage of law and technology is one of co-dependence and co-influence; methodological approaches for studying these relationships should reveal the broader picture of the current dynamics in law, regulation, and governance, bearing in mind the normative concepts of justice and the aims of the legal system. The contributions of Part II (“Branches”) demonstrate that the co-dependence between law and technology is unpredictable, even though history can teach us some important lessons on the development of this relationship. In various legal systems, technologies can be used as a sword and as a shield. And while some traditional and established fields of law appear to sometimes be ill-equipped to address concerns arising from technological advancements, newer legal frameworks could potentially be more adaptive to new technologies, though this focus on “newness” comes with its own traps. Part III (“Perspectives”) adds to this that law and technology can be studied from different perspectives that challenge the current concepts and normative understandings. To address some specific legal challenges posed by the rapid pace of technological development, a shift in conceptual thinking might often be in order. That said, one should also be cautious when borrowing concepts from other disciplines to be applied to specific legal questions. Lastly, and perhaps not surprisingly, Part IV (“Challenges”) demonstrates that there is no common approach for addressing legal and policy challenges that arise from the use of specific technologies. In some cases, a stronger presence of public regulation is required, while in others, private ordering together with the proper application of the existing legal framework might appear sufficient.

Despite their diversity, the contributors seem to generally agree on two points. First, law and technology are becoming inseparable; their relationship is characterized by increasing co-dependence. Second, lawyers and legal scholars cannot, and should not, shy away from studying new technologies. Whether one thinks of oneself as a law and technology scholar or as a “traditional” black letter lawyer, technological change will, most probably, neither go away nor slow down. The ability to study and interpret the relationships of law and tech becomes both a necessary tool in a legal scholar’s apparatus and a social need, if technological development is to be steered in a direction that respects fundamental rights and legal values.

Among the many themes of the Research Handbook’s contributions, one may notice some recurring questions and issues: Are the existing legal frameworks sufficient to address the present and arising challenges brought by technologies? What are the possible regulatory approaches to address specific issues? Should such regulation take place at the national or international level? And what are the best and the worst regulatory practices? Without a doubt, these and more questions are bound to continue to arise as digitalization and the use of technologies progress, and the current answers to these questions, if there are any, will be challenged by many generations of “law and technology” scholars. This, however, is a topic for future volumes and handbooks.

BIBLIOGRAPHY

Balkin, J. M. (1995). Media Filters, the V-Chip, and the Foundations of Broadcast Regulation. Duke Law Journal, 45, 1131.
Cockfield, A. J. (2003). Towards a Law and Technology Theory. Manitoba Law Journal, 30, 383.
Cohen, J. E. (1995). A Right to Read Anonymously: A Closer Look at Copyright Management in Cyberspace. Connecticut Law Review, 28, 981.
Cohen, J. E. (1999). Examined Lives: Informational Privacy and the Subject as Object. Stanford Law Review, 52, 1373.
Froomkin, A. M. (1996). The Essential Role of Trusted Third Parties in Electronic Commerce. Oregon Law Review, 75, 49.
Froomkin, A. M. (1999). The Death of Privacy. Stanford Law Review, 52, 1461.
Gil de Zúñiga, H., Jung, N., & Valenzuela, S. (2012). Social Media Use for News and Individuals’ Social Capital, Civic Engagement and Political Participation. Journal of Computer-Mediated Communication, 17(3), 319–336. https://doi.org/10.1111/j.1083-6101.2012.01574.x
Hacker, P. (2021). Manipulation by Algorithms. Exploring the Triangle of Unfair Commercial Practice, Data Protection, and Privacy Law. European Law Journal, 1–34. https://doi.org/10.1111/eulj.12389
Lessig, L. (1999a). Code and Other Laws of Cyberspace. Basic Books.
Lessig, L. (1999b). The Law of the Horse: What Cyberlaw Might Teach. Harvard Law Review, 113(2), 501–549. https://doi.org/10.2307/1342331
Sartor, G. (1993). Artificial Intelligence and Law: Legal Philosophy and Legal Theory. Tano.
Susser, D., Roessler, B., & Nissenbaum, H. (2019). Online Manipulation: Hidden Influences in a Digital World. Georgetown Law Technology Review, 4, 1–45.
Tudoroiu, T. (2014). Social Media and Revolutionary Waves: The Case of the Arab Spring. New Political Science, 36(3), 346–365. https://doi.org/10.1080/07393148.2014.913841
Warren, A. M., Sulaiman, A., & Jaafar, N. I. (2014). Social Media Effects on Fostering Online Civic Engagement and Building Citizen Trust and Trust in Institutions. Government Information Quarterly, 31(2), 291–301. https://doi.org/10.1016/j.giq.2013.11.007
Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (First edition). PublicAffairs.

PART I FRAMEWORKS

2. Law, regulation, and technology: the bigger picture of good governance Roger Brownsword1

1. INTRODUCTION

In our economically and socially disrupted times, a raft of emerging technologies has given rise to a multitude of questions, to a great deal of hype, hope, and expectation but also to considerable concern, and to significant challenges and opportunities for law, regulation, and governance (Brownsword et al., 2017; Aral, 2020). Whilst many of the questions presuppose that these technologies are, so to speak, ‘out there’ being deployed in various domains and inviting the application of legal principles or regulatory response, other questions are generated by proposals to employ these same technologies as tools of governance. In other words, whilst some questions are about the regulation and governance of technologies, others are about regulation and governance by technologies (Brownsword & Yeung, 2008).

With so many particular legal and regulatory questions inviting discussion, there is a risk that we lose sight of the bigger picture of governance. Responding to that risk, this chapter highlights three general questions that give shape and substance to the bigger picture. First, there is the question of how we view the relationship between law and governance; second, how we understand the ‘legitimacy’ of governance and its relationship with regulatory ‘effectiveness’; and third, how we view global governance and its relationship to local governance.

With regard to the first question, the position taken in this chapter is that we should treat ‘law’ as a particular rule-based mode of governance (operating alongside other modes of governance) and that our priority should be to elaborate the aspiration for ‘good governance’ (whether the particular mode of governance is legal, regulatory, or technological). It is implicit in this position that the legitimacy of both law and regulation needs to be taken more seriously and, responding to the second question, the thrust of the chapter is that we should not allow considerations of legitimate governance to be dominated, diluted, or marginalised for the sake of effective governance.2 Responding to the third question, it is argued that the foundation of good governance is laid by respect for the ‘global commons’, that is, for the conditions that enable human communities to be viable in the first place. To be sure, good governance also makes demands on the integrity of local governance but, without respect for the generic conditions, no kind of governance—whether good, bad, or indifferent—is possible.

Bringing all this together, the seminal idea of the chapter is that our responses to the many questions raised by emerging technologies should be guided by the thought that human communities cannot get started without governance and that, if they are to flourish, they need governance that is good in the sense that it reflects the values of the community as well as being in the interests of its members and, above all, that it respects the global commons. It follows that the questions that we have about the regulation of emerging technologies and the questions that we have about the use of new technologies to perform legal and regulatory functions share a common concern with good governance; and it follows that, in order to answer these questions—whether they are questions about governance of, or governance by, new technologies—we need a developed concept of good governance.

The chapter is in four principal parts. First, there are a few remarks about the relationship between law and governance, about the basic functions of governance, and about why human communities need it. Second, three governance conversations that currently dominate the landscape of law and technology are sketched, each conversation having its own distinctive agenda of questions and its own particular way of engaging with new technologies. Third, the idea of legitimate governance (including its relationship with effectiveness, and the place of ethics, pluralism, and moral scepticism in governance) is discussed. Fourth, the criteria of good (legitimate) governance—the global and the local criteria—are elaborated.

1  I am grateful to Leonie Reins and Olia Kanevskaia for their helpful comments on an earlier version of this chapter; and, likewise, to Rebecca Crootof and B.J. Ard for their comments at a final pre-delivery workshop. Needless to say, the usual disclaimers apply.
2  In some contexts, if the legitimacy of governance outputs is neglected, or if processes are viewed as unfair, this might limit the willingness of those who are subject to governance to comply (Tyler, 2006). But, of course, this presupposes, first, that the modality of governance that is employed offers some affordances (both interpretive and practical) for contestation and non-compliance and, second, that those who are governed are able to act on those affordances. Where the modality of governance becomes less reliant on humans and rules (or standards), and/or where practical options are heavily technologically managed, the context might be significantly different.

2. LAW AND GOVERNANCE

Famously, Lon Fuller (1969) characterised law as the enterprise of subjecting human conduct to the governance of rules. This chimed in with the widely held Hartian view that law is an affair of rules and that legal concepts are to be understood in the context of systems of rules (Hart, 1961); and it enabled Fuller to develop his ideal of legality around the commitment to governance by rules. With all this attention to rules, the idea that law is about governance slipped into the background. So, when governance came back into the spotlight (alongside law and regulation), it could be viewed not so much as a human enterprise of which rule-based law was a particular kind but as a particular kind of direction or guidance to be differentiated from law—for example, by reference to its lesser formality, or lesser institutionalisation, or its less top-down nature, or its non-governmental or transnational or technological characteristics, and so on (see, e.g. Brownsword & Goodwin, 2012).

It follows that we might conceive of governance in a broad sense that includes a rule-based legal instantiation of governance or in a narrow sense that contrasts in some way with rule-based law. In the broad sense, we have a number of governance options of which law is one; in the narrow sense, law is to be contrasted with governance. For the purposes of this chapter, it is not necessary to comment on whatever competing conceptions of governance in the narrow sense might be up for debate; quite simply, for present purposes, it is the idea of governance in the broad sense that needs to be retrieved (compare Kornhauser, 2004).

Turning to the concept of law, it hardly needs to be said that this is an idea that has been much debated. The nature of law is a central jurisprudential question. On one axis, different conceptions of the relationship between law and morals, between legal and moral reason, are developed; and, on a second axis, different conceptions of formal (Westphalian) and informal law are developed.

Whichever view we take of law within this heavily contested conceptual space, the common jurisprudential ground is that we are treating the legal enterprise as being about governance by rules. However, in the present chapter, the emphasis, as I have said, is on law being an enterprise of governance. Taking this line, law is one mode of governance; but, so too, is regulation; and, crucially, so too, are strategies that systematically employ technologies for governance purposes. Taking this view, we do not need to agonise about whether Lessig's 'code' is 'law' (Lessig, 1999); the essential point is that both 'law' and 'code' are modes of governance.

Why do humans need governance? According to Karl Llewellyn (1940), all groups need governance if they are to be viable. If members of groups are to interact and transact, if they are to act in the collective interest, and if they are to manage conflicts and disputes, they need governance. The group needs a basic code of conduct to channel behaviour, to stabilise expectations, and to coordinate actions; it needs mechanisms to settle disputes; it needs to have a sense of direction or common purpose; and it needs to allocate authority to group members for drafting the code, settling disputes, and maintaining the direction of the group.

In small groups operating in a stable context, once the basic code of conduct has been agreed, governance should largely run itself. However, in larger groups, and particularly in groups where the context is more dynamic—where the group has to adjust to new technologies, new applications, and new circumstances, to new challenges and new opportunities (possibly implicating new roles and responsibilities), or where the group regularly contemplates a change of direction—governance will be more challenging. Indeed, in many contexts, finding the right regulatory balances between continuity and change will always be a work in progress—and, in our technological times, where there is a commitment to both law and order and democratic decision-making, governance conversations are likely to be heavily contested.

3. THREE GOVERNANCE CONVERSATIONS

In previous work (Brownsword, 2019a, 2020, 2022a), I have suggested that we can detect three distinct but co-existing conversations about 'law and technology'. Each conversation—that is, each governance conversation—has its own agenda, its own framing, and its own questions.

In what I call a Law 1.0 conversation, the questions are about the application of traditional legal principles, concepts, and classifications to emerging technological phenomena and their deployment. For anyone who has been trained 'to think like a lawyer', both the questions and the answers that are characteristic of this conversation are very obviously 'legal'.

In what I call a Law 2.0 conversation, the questions are about the fitness of legal rules relative to a raft of rapidly emerging technologies and relative to specified regulatory purposes. The questions and answers here are distinctly rule-based but their orientation to policy is one of the features that distinguishes them from traditional legal conversations.

The regulatory approach of Law 2.0 is taken a step further in what I call a Law 3.0 conversation. Here, the questions are not about the fitness of legal rules for given purposes but about the potential use of technologies to deliver policy and to undertake legal and regulatory functions.

On the face of it, a Law 3.0 conversation, by shifting the focus from rules (and principles and standards) to technologies and technical measures, is very different to either Law 1.0 or Law 2.0.

Nevertheless, I want to proceed on the basis that all three conversations are relevant for lawyers and that they are all concerned with 'governance'—governance by law, governance by regulation, and governance by technologies. In other words, what joins these three conversations is that they are all concerned with particular modes of governance; and the many questions of governance that we now face arise within and between these conversations.

3.1 Law 1.0

In what we can term a Law 1.0 conversation, the focal question is the one that lawyers traditionally ask when confronted by new technologies or by novel situations. In common law jurisdictions, that question is how the precedents and the historical principles of the law apply to, or fit with, the technology or situation (similarly at the Conseil d'État, see Latour, 2010, Ch. 4). For example, we might ask how the general principles of tort law apply to defamatory content that is hosted online; or how the principles of contract law might be applied to so-called 'smart contracts' or to those platforms in which the relationship between the parties, and their respective roles and responsibilities, are not clear; or we might ask how copyright law maps onto creative works generated by AI or by those who engage in remixing; or we might ask how traditional concepts of property, assignment, and novation map onto transfers of cryptoassets. The list of potential questions is not endless, but it is long and it gets longer with each new technology and its applications.

Often, this kind of question will be asked (and answered) by lawyers who are advising clients on their best reading of the legal position. However, where such questions are referred to courts, the flexibility of the law notwithstanding, there is a tendency towards conservative rulings coupled with no more than incremental development of the law. So, for example, although patent offices were able to adjust their understanding of patentability and disclosure to accommodate new products and processes in biotechnology (Pottage & Sherman, 2010), the courts were not so quick to recognise body parts, embryos, and gametes as property in order to ground tort claims (Fox, 2019). On the other hand, as Joshua Fairfield points out, in at least some US jurisdictions, judges have developed the old idea of 'trespass' to apply to various cyber-wrongs (such as spam mail) (Fairfield, 2021, pp. 54–59). Certainly, if we are guided by Fairfield's examples, we might reason that traditional legal concepts and principles are flexible enough if only common law judges are imaginative enough.

This is not to say that practitioners of Law 1.0 are uncritical of the state of the law. On the contrary, the 'coherence' of the body of legal doctrine is a matter of intense and enduring concern (Brownsword, 2019a, pp. 192–194, 2020, Ch. 8). Contradictions and inconsistencies in the body of doctrine are not to be tolerated; precedents and principles should not simply be ignored; legal doctrine should not be distorted; law should be applied in a way that respects its integrity—all of this being regarded as desirable in itself. Indeed, for many lawyers, we should make no apologies for Law 1.0. On the contrary, it is the reasoning of Law 1.0 that speaks to the essence of the Rule of Law (the rule of rules) and it is the judicial orientation to the published rules and principles that exemplifies the virtues of 'legality'.
Given this culture, there is a good deal of judicial nervousness about stretching legal principles, or about creating ad hoc exceptions in order to accommodate a hard case, or about correcting the law where it is plainly not fair, just, or reasonable. Similarly, at times of rapid economic, social and technological disruption, the concern for doctrinal coherence can inhibit major development of the law. Whilst critics will say that the law should move with the times, judges will tend to exercise restraint and be mindful of being accused of assuming an unauthorised legislative role.

Accordingly, whilst the courts will give an answer to the Law 1.0 question that is put to them, they do not have either the resources or the mandate for expansive lawmaking or for setting new policies (compare Goddard, 2022). This means that the burden of responding to questions that invite a serious overhaul of the regulatory environment moves elsewhere.

3.2 Law 2.0

The paradigmatic question in a Law 2.0 conversation, the kind of question that regulatory scholars and various kinds of regulatory agencies typically ask, is whether existing rules are fit for purpose, whether the rules are effective and appropriate in serving regulatory policies, and whether perhaps new rules are required. In short, the question is whether the regulatory environment is fit for purpose. This is an exercise in setting and serving policy, and the reasoning (with its focus on effectiveness) is predominantly one of instrumental rationality. In practice, the engagement with this question will be in the political arena.

The answers given to the headline questions in Law 2.0 are not constrained in the way that we find in the courts. It is not a matter of finding an answer from within a limited set of materials; there is no pressure for consistency with the jurisdictional history, nor for doctrinal coherence. Regulation can make a fresh start and regulators can develop bespoke responses to particular questions in a way that would offend doctrinal coherentism. So, for example, regulators can adopt any number of absolute or strict liability offences (relating to health and safety, the environment, and so on) that would offend the classical code of criminal law in which it is axiomatic that proof of mens rea is required (Sayre, 1933). Or, if the protection of the investment in databases does not fit well with standard IPRs, a bespoke regulatory regime can be put in place (as was the case with Directive 96/9/EC);3 and, if innovation policy is not well served by limiting patents to human inventors (thereby excluding AI invention), the limitation should, and could, be removed (Abbott, 2020, Chs. 4 and 5). In Law 2.0 circles, there is no need to justify a departure from a historical legal principle or classificatory scheme; in Law 2.0, regulators operate with a new brush which, if they so wish, they can use to sweep the law clean.

Although much regulatory discourse is focused on finding what works, modern scholarship in law, regulation, and technology sometimes undertakes a broader critique. In this articulation of Law 2.0, it is not simply a matter of regulation being effective in serving its purposes; those purposes and the means employed must be legitimate, and there needs to be a sustainable connection between regulatory interventions and rapidly changing technologies and their applications (Brownsword, 2008). It follows that this invites a more complex critical appraisal of the fitness of the regulatory environment. So, the law needs to make its regulatory moves at the right time (neither too early nor too late); and, even if regulation seems 'to work', there might be questions about the acceptability of the position that has been taken up in relation to a new technology.

3  Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases [1996] OJ L 77.
With regard to the acceptability of the legal position, a key question is whether the regulatory environment strikes the optimal balance between providing support for beneficial innovation and providing adequate protection against the risks of harm that might be caused by emerging technology. Accordingly, much of the regulatory theory and practice in Law 2.0 circles is focused on avoiding both over-regulation (and stifling innovation) and under-regulation (and exposing consumers and others to unacceptable risks) (Brownsword, 2019b). Getting regulation right in an age of rapid technological innovation is a considerable challenge; and, moreover, keeping it right is a case of constant regulatory work in progress.

3.3 Law 3.0

With the emergence of a Law 3.0 conversation, the questions are whether new tools and technical measures might be used in support of the rules relied on to serve particular regulatory policies, whether technologies might be used to assist those who are undertaking legal and regulatory functions, and whether the technologies and technical measures might actually supplant the rules and the humans who make, administer, and enforce them (Deakin & Markou, 2020).

Sometimes, where the technologies at issue have already been developed, the question is whether and how they might be given useful regulatory applications. For example, blockchain technology might be considered as a way of supporting the registration of various kinds of proprietary interests (Quinn & Connolly, 2021), and technologies such as facial recognition and AI might be considered as tools to support policies of crime prevention and reduction, security, or immigration control, and the like. At other times, the technologies have not yet been developed but we already have an idea about how they might be given regulatory application. For example, we can imagine how the greater connectivity of Web 4.0 and its successors might support a much more 'joined-up' approach to governance.

Beyond such technological support and assistance for governance by rules, there is a vision of governance by machines in which rules are no longer directed at citizens, humans are out of the loop, expert systems do the work, and environments are fully managed by the technology. Typically, in such environments, the intent and effect of 'technological management' is either to design in one or more acceptable actions or to design out those actions that are treated as unacceptable. That said, technological management might also be employed in a less restrictive way to remove the cause of conflict (for example, overcoming scarcity of resources by digitising materials or by using nanotechnologies).

It is not altogether clear who should respond to the questions that are on the agenda in Law 3.0, nor who should be parties to the conversation. Because technological solutions will often be developed in the private sector, there seems to be a need for a public–private partnership or some form of co-regulation where public bodies set the desired regulatory objectives but leave it to industry to develop the best technological means. However, there also needs to be urgent and intensive public engagement when proposals are made that contemplate humans being taken out of the loops of law and regulation (as with governance by machines) or rules being replaced by technological management. For humans at least, where they interface with such technologies, the displacement of humans needs to be 'socially acceptable' (Legal Services Board and Solicitors' Regulation Authority, 2022).
Scholars who take an interest in governance by technology will, of course, ask whether it works, whether it is robust and resilient, whether it has unintended negative effects, and the like. However, they should also ask whether it is legitimate (Supiot, 2017). For example, if digital rights management technologies over-reach in their protection of IP rights, this is clearly incompatible with the Rule of Law; and, similarly, where criminal justice agencies rely on AI tools to make decisions about where to police or whom to bail or remand in custody, and so on, this needs to be compatible with due process and human rights (Brownsword & Harel, 2019).

Moreover, there are also recurring deeper questions about whether technological measures (that do the regulatory work) change the complexion of the regulatory environment in ways that crowd out human autonomy, human dignity, and moral development (Brownsword, 2011).

In short, Law 3.0 offers a wide spectrum of regulatory deployment—with technologies being deployed both in support of rules and in place of rules, to assist human decision-makers and to replace human decision-makers, to interface with both regulatees and regulators, to support legal officials and to supplant them, and to supervise both regulatees and legal officials, and so on—as a result of which, in various ways, the needle shifts from governance by rules to governance by machines and technological management (Brownsword, 2022a, 2022b).

3.4 Summing Up

Summing up, we can detect three distinct but co-existing modes of legal engagement with emerging technologies. Each mode has its own particular framing and its own range of focal concerns, and each relates to a particular input into the regulatory environment as we should now conceive of it (Brownsword, 2019a). We can also say that each mode of thinking has its own strengths and weaknesses: Law 1.0 reflects the virtues of legality (predictability, consistency, and so on) but it is not geared for agile and responsive governance where technologies are disrupting the social and economic order; Law 2.0 asks the right question but, because regulatory decisions will be made in the political arena, it is subject to the usual political pathologies; and Law 3.0, by potentially taking both humans and rules out of the equation, promises greater efficiency, consistency, and effectiveness but, at the same time, might be feared as dystopian control of humans by machines.

Arguably, Code is not just Law by another name; it is a radically different regulatory modality and form of governance (compare Lessig, 1999). If governance is no longer based on rules, and if humans are to be taken out of the legal and regulatory loop, this prompts questions about the adequacy of our rule-based legal ideals; and, crucially, it prompts questions about the legitimacy of governance.

4. LEGITIMACY, ETHICS, PLURALITY, AND MORAL SCEPTICISM

In this section, our attention turns to the concept of legitimacy and, in particular, to how it is viewed from the distinctive perspectives of Law 1.0, Law 2.0, and Law 3.0. Alongside this discussion, we can also begin to place ethics, plurality, and moral scepticism within these perspectives and their conversations about law and technology.

4.1 Law 1.0

4.1.1 Legitimacy
In Law 1.0 conversations, with law having broken free from a religious base, legal doctrine and the legitimacy of law are self-contained within 'autonomous' legal systems and their particular practices of governance.

Reliance on self-justification shows up most clearly when the authority of law is challenged. According to the standard account, the authority of law is rooted in the constitution. In H.L.A. Hart's articulation of the concept of law, any question about the authority (title) of a body or person undertaking a legal function (or claiming to act in a legal capacity) hinges on there being an appropriate mandate in an authorising rule (Hart, 1961). In the case of legislative and judicial bodies, this authorisation might be explicitly declared in the founding constitutional rules; in other cases, the authorisation will be found in rules that have themselves been made by authorised rule-makers. Famously, this account gives rise to a puzzle about what authorising basis lies beyond the (apparently unauthorised) formal constitutional framework or the operative or traditional constitutional arrangements.

There are, of course, a number of standard responses to this puzzle. However, these responses tend to beg the question (by presupposing the truth of what is at issue) or, as with Hans Kelsen's 'Basic Norm', look (just) outside the system for its justification but rely on hypothetical presupposition or fiction (Kelsen, 1967). Suffice it to say that such justifications are less than convincing (compare Shapiro, 2011). In the absence of a compelling justification, for Law 1.0, the bottom line is that, relative to the particular rules of the constitution, we can have a reasoned discussion about whether a particular person or body has authority and is acting within the terms of their authority; beyond that, however, a conversation about the legitimacy of law is not worth having.

4.1.2 Ethics, plurality, and moral scepticism
In line with its internal view of the legitimacy of law, Law 1.0 treats ethics and morality, plurality, and scepticism as legally irrelevant and as purely external phenomena. Whilst philosophers might debate such matters, they are not relevant in law's autonomous conversations. In other words, the legitimacy of the law does not turn on Kantian criteria of duty, or Rawlsian criteria of right, or a Benthamite view of the good; these are external standards, formulated beyond the boundaries of legal discourse and doctrine.

That said, there are ways in which legal doctrine might reflect and internalise some elements of ethics and morality, as well as pluralism. For example, some legal principles and doctrines (such as 'good faith' and 'unconscionability' in contract law, or 'consent' in medical law, and high-level constitutional principles) act as gateways that invite ethical and moral elaboration; and some tolerance of pluralism is built into some legal systems by ideas such as the margin of appreciation in human rights law. However, such contingent and variable internal reflection of ethical or moral standards does nothing to respond to fundamental questions about the legitimacy of governance (compare Reed & Murray, 2018, on the question of authority to govern in cyberspaces).

4.2 Law 2.0

4.2.1 Legitimacy
Although regulatory thinking does not externalise ethics and morality, it tends to subordinate the legitimacy of regulatory positions to their effectiveness in achieving specified policy objectives. What regulatees might think about the legitimacy of regulatory processes and positions is relevant only to the extent that it bears on the effectiveness of the law.

To illustrate, suppose that a community opens a debate about the use of AI in delivering health care; how might the regulatory conversation go? The regulatory conversation might start with questions about what can and cannot be done. If the thing in question cannot be done, then that is the end of that part of the regulatory conversation; and the focus will now be on those deployments of AI that are possible—on what can be done.

The next question will be about the performance of AI: about how effective it is, about how well it works relative to human comparators (for example, how well AI performs in reading MRI scans compared with a human radiologist), or about how well it works when operating in conjunction with humans, or sequentially with humans, and so on. If the AI has not been tested and evaluated, this will be the next step.

Assuming that the AI's performance meets the required level of effectiveness, the next question might concern the preferences of the members of the community. Some might prefer AI; others might prefer the human touch. If the balance of preferences clearly favours use of the AI, then there is at least a prima facie regulatory case for permitting the proposed use of AI. Against the proposal, a question might be asked about whether the use of AI would be prudent for the community as a whole. Granted, there might be efficiencies and economies with AI but, in the longer run, might this diminish the quality of care and might there be a risk that essential human skills would be lost? Such a question, however, does not threaten to block the proposal. Even regulators who are persuaded that there is something in such prudential concerns would see them as no more than a case for caution and monitoring of the roll-out of AI.

If the proposal is to be turned back, it will be for moral or ethical reasons. Even if the AI can be used, even if it can be deployed effectively, even if the majority prefer health care with AI rather than without it, and even if there are no prudential reservations, it might be argued that it simply would not be right to proceed. By way of comparison, when Mary Warnock was invited to advise the European Patent Office on the ethics of patenting inventive processes and products that were associated with state-of-the-art genetics (these, too, promising significant health care benefits), she remarked that we should always ask whether 'even if the benefits of the practice seem to outweigh the dangers, it nevertheless so outrages our sense of justice or of rights or of human decency that it should be prohibited whatever the advantages' (Warnock, 1993, p. 67). So, reverting to AI, we might express a variety of moral and ethical concerns—such as concerns about the loss of jobs, or possibly about human dignity or privacy, or about justice and human decency—but, for our purposes, the details of our concern do not matter. The point is that, in the regulatory conversation, this is not only the final consideration, it is also now the last stand against the use of AI. The burden of resistance, so to speak, is entirely on the moral argument.

In this context, many would argue that legitimate governance has to rely on the integrity of the process—the process of public engagement and deliberation—coupled with settling on positions that are 'acceptable'.
However, whilst the notion of 'acceptability' might be a convenient shorthand for a number of elements of regulatory fitness that we might wish to audit, in this usage it covers and compresses everything from personal preference to moral principle. The full range of considerations (whether matters of preference, prudence, or morality) is taken into account, but those considerations are flattened and then reduced to a currency of positive and negative attitudes, to so many thumbs up and so many thumbs down, to so many likes and so many dislikes, before being subjected to 'balancing'. In this thicket of regulatory process, ethical and moral reason, and distinctively human values, lose their identity.

In this Law 2.0 approach, three features are striking: questions of possibility and effectiveness dominate questions of legitimacy; propositions that are susceptible to testing, along with evidence-based reasoning, are regarded as superior to propositions that do not have these characteristics; and moral and ethical reason loses its identity in a hotch-potch of 'acceptability'.

4.2.2 Plurality and moral scepticism
Picking up on the vulnerability of ethical and moral considerations to instrumental regulatory reasoning, two weaknesses stand out. One is that, in many cases, we find that the ethical protagonists espouse different approaches which lead to conflicting views; in other words, we find that we have a plurality of views. To the extent that views converge, that is manageable; but, often, the plurality will bring together ethical views that indicate, on the one side, prohibition and, on the other, non-prohibition (permission or requirement). The other weakness is that, even where there is consensus or convergence, and even where there is confidence in a moral judgment, sceptics will insist that, in the final analysis, no compelling argument can be given to ground the judgment—which is not to say that, for those who hold the views, morality does not matter, but that the views rest purely and simply on the person in question being committed to that particular view (or 'opinion', as sceptics might term it). In a community that has invested heavily in innovation and technology, to set mere opinions against evidence-based science and fact is to offer little resistance.

4.3 Law 3.0

The regulatory nature of Law 3.0 means that, like Law 2.0, it tends to subordinate questions of legitimacy to questions of effectiveness,4 thereby treating 'can' as more important than 'ought'. However, in Law 3.0, we also see moral and ethical thinking being diluted by its translation into the language of the technologies that are being deployed for regulatory purposes. We can see this in many quarters—for example, in relation to notions of autonomy, respect, and justice—but it will suffice to speak only to one case, that of trust and trustworthiness.

Thus, in one striking instance, the European Commission (2021) explains that the purpose of its proposed regulation on AI is to deliver an ecosystem of trust by proposing

a legal framework for trustworthy AI. The proposal is based on EU values and fundamental rights and aims to give people and other users the confidence to embrace AI-based solutions, while encouraging businesses to develop them. AI should be a tool for people and be a force for good in society with the ultimate aim of increasing human well-being.

4  But note my remarks about legitimacy as a condition of effectiveness in n 2 above.

Despite this commitment to trustworthy AI, the draft regulation does not explicitly elaborate this desideratum (although, implicitly, the conspicuous focus on managing risks suggests that AI is trustworthy when it would be reasonable to judge that the risks it presents are acceptable).

In another context, following the global financial crisis in 2008, one response to the loss of trust and confidence in banks and other financial intermediaries was to devise decentralised systems that would take out human intermediaries and their institutions and place our trust in technologies for the transfer of digital assets (instead of fiat currency) and for registration that would make use of the best cryptography. Such was the promise of blockchain (De Filippi & Wright, 2018).


However, trust in technology is not equivalent to trust in fellow humans (Kerasidou & Kerasidou, 2021, pp. 86–88). To trust a person to do the right thing (or indeed to do anything) is to rely on them without any security or insurance, even though one has reservations about doing so and even though one is aware that there is a risk in doing so. In a context where humans are out of the loop, to trust a technology is to rely on it without any security or insurance because one has no reservations about doing so and because one treats such reliance as risk-free (or as subject only to acceptable risks). Whereas, in the context of human-centric governance, to judge that a person is 'trustworthy' is to judge that the person will do the right thing even when they are disposed to (and have the opportunity to) do otherwise, in the context of governance by machines, 'trustworthiness' is a proxy for reliability. In an insightful analysis, Christoph Kletzer (2021, p. 322) hits this nail on the head when he says:

There are entirely different stakes at play in reliance and trust. Whilst reliance is a mundane and technical issue, trust is a morally laden issue that potentially concerns our very human essence. This makes it crucial to keep these two concepts neatly separated. Failing to do so leads to a host of misunderstandings not only about trust and reliance itself but also about the relationship of technology and law.

So, on this analysis, parties who make use of AI or who commit their transaction to a blockchain do not place their 'trust' in the technology; they merely rely on it. And, although we should not jump too quickly to this conclusion, we might infer that, by relying on the technology rather than on their fellow humans, they do not trust the latter (or regard the latter as trustworthy). Moreover, contrary to much of the conventional wisdom, it is not so much the difference between trust and trustworthiness that is critical as the difference between reliance based on trust (or trustworthiness), which involves a judgment as to the moral character of a human, and mere reliance on technology, which signifies only the use of a tool or, at most, a prudential judgment—a judgment that might be more or less reflective, more or less informed as to the technology's fitness for purpose and its quality, more or less a matter of choice, but never morally judgemental. Just as Ian Kerr (2010) famously rejected the idea that virtue can be automated, so too we should reject the idea that trust can be automated.

5. THE CRITERIA OF GOOD (LEGITIMATE) GOVERNANCE

According to Henk Addink (2019, p. 3), 'the Rule of Law, democracy, and good governance are the cornerstones of the modern state'. Precisely how these cornerstones relate to one another is unclear—indeed, we might think that, where the Rule of Law and democracy are observed, that takes care of good governance. Moreover, given the multitude of lists of governance desiderata that have been issued, it is not clear what we should treat as the essence of good governance.

Recognising that good governance is a contested matter, let me nevertheless suggest that, in a community of humans, the core idea of 'good governance' is (i) the enterprise of instating and maintaining order in the group, (ii) where those who govern act with integrity, (iii) where they strive to govern in the interests of the group, (iv) where they respect the community's fundamental values, and, most importantly, (v) where they govern in a way that is compatible with respect for the global commons. In what follows, I will speak briefly to the last three elements, these being key to the legitimacy of governance.

5.1 Respect for the Global Commons

I take it that no group or community of humans will be viable unless some baseline (generic) conditions are respected. These conditions generate three imperatives. First, humans must protect the global commons, respecting planetary boundaries and the planet's resources, lest human existence on Earth cease to be sustainable; second, humans must observe the conditions for peaceful co-existence, both between humans in a particular community and between communities; and, third, humans must respect the conditions that support their agency and autonomy.

Ideally, humans would recognise and sign up to these imperatives before they have begun the process of forming their own communities and before they have invested in their own interests. However, the challenge now is much more difficult because, at all levels, humans have already invested in current arrangements. Nevertheless, if any human were to challenge these imperatives—for example, by proposing that it should be permissible to deplete the global planetary resources at will and to undermine the possibility of communities forming around their own projects and developing in their own way—then this should be rejected as being so unreasonable that no reasonable human could hold such a view.

That said, the world does not stand still and there needs to be a constant dialogue about whether particular practices are compatible with the imperatives. For example, nation-states might claim to be acting in line with the imperative of peaceful co-existence because they are not engaging in 'aggressive' acts whilst, at the same time, they engage in various kinds of hybrid and cybrid threats, disabling critical infrastructures and destabilising communities (Brownsword, 2022b, Ch. 11; Freedman, 2017). There is also much room for discussion about the nature of human agency and its supporting conditions (the conditions, that is, for human self-development and then for engagement in purposeful self-direction, whether for oneself, with others, or in groups and communities). Whilst we might be convinced that some modern technologies (such as mass surveillance systems) interfere with the conditions for agency, how precisely is agency inhibited or compromised?

One very important limit on the imperatives is that they must remain neutral between particular humans, particular communities, particular projects, particular views of what is prudent or moral, and so on; these are imperatives that are foundational for the possibility of human community but, other than insisting that the foundational conditions themselves are respected, they are strictly impartial between particular humans, particular articulations of community, and particular preferences, positions, and policies.

Over and above the imperatives, and given the non-ideal conditions in which we now seek to establish them, two practical measures need to be adopted. First, because self-interest is so entrenched in both domestic and international relations, a precautionary buffer should be adopted (Brownsword, 2022a, Ch. 11; Simpson, 2009). This means that, if there is any reasonable doubt about whether an imperative is being violated by some act or practice, that act or practice should be restricted or should even cease.
To those who protest at this inconvenience, we can concede that, after the event, we might find that our precaution was unnecessary. But we have to act on what we know at the time; and, if the choice at the time is between a possible catastrophe and a possible loss of utility, then it is better to be precautionary than sorry. Second, because the most dangerous violations of the imperatives are likely to occur on the international stage, global regulatory stewards need to be authorised to maintain respect for the imperatives. To be sure, the idea that the global powers would support such an initiative might be wishful thinking; but, when even the most powerful nations are threatened by a breakdown in the conditions for governance, who knows, we might be surprised.

Finally, it bears repetition that, whilst the foundational imperatives might not be extensive, any views or actions, any positions or policies, that are incompatible with these constitutive commitments—which, we should recall, are the key to the viability of human communities (Fairfield, 2021, p. 143)—must be rejected as so unreasonable that no reasonable human could possibly entertain them. Even the most sceptical of humans cannot coherently reject the importance of the preconditions for their own existence, or for the viability of human communities and their governance.

5.2 Governance That Respects the Fundamental Values of the Community

If the global commons is the first base, humans start building their communities at the second base. It is here that they declare their distinctive values, as well as their distinctive conceptions of cornerstone values (such as respect for human dignity), and define themselves as the particular people that they aspire to be. Where there are questions about the interpretation or application of these values, it will fall to the community's laws and legal institutions to respond. It is, of course, essential that, whatever fundamental values a particular community commits itself to, they should be consistent with (or cohere with) the commons' conditions. It is the commons that sets the stage for community life; and then, without compromising that stage, particular communities form and self-identify with their own distinctive values.

So, members of a particular community can expect to be asked to respect the law in relation to several kinds of dispute or conflict. For example, there might be questions about the interpretation or application of the community's fundamental values (such as a question about the interpretation or application of privacy, or a question about the weight to be given to privacy when it is in tension with freedom of expression), or about some other constitutional matter; and there might be conflicts about what the basic rule is to be (for example, about whether some conduct should be criminalised or de-criminalised and, if so, on what terms, or about whether some technology or application should be permitted or prohibited) or about the application of the rule in a particular case. New technologies provoke many governance questions and, it will be appreciated, these questions are not all engaged in the same way or at the same level.

5.3 Governance in the Interests of the Group

Within each community, there will be a plurality of different preferences, priorities, assessments of costs and benefits, and so on. Conflicts that concern such differences might engage the community's fundamental values or even reach through to the generic conditions themselves; but let us assume that, characteristically, they do not.
Here, we get, as it were, to third and fourth base, as the community's legal and political institutions will need to take a position and state the rule, or will need to settle an individual dispute or other everyday conflicts that arise from competing preferences and legitimate interests. Dealing with such conflicts is the routine stuff of governance.

Consider, again, the adoption of new technologies (robots, AI, and so on) in healthcare as a case in point. Whilst some persons will regret the loss of the human touch, others will welcome 24/7 intelligent care. More generally, whilst some will push for a permissive regulatory environment that is facilitative of beneficial innovation, others will push back against healthcare practice that gives rise to concerns about the safety and reliability of particular technologies, as well as about their compatibility with respect for fundamental values. Yet, how are the interests in pushing forward with research into, and the adoption of, potentially beneficial health technologies to be reconciled with the heterogeneous interests of the concerned, who seek to push back against them (Brownsword, 2021)?

In such a scenario, a Law 2.0 approach is likely to be adopted, with regulators being expected to seek an accommodation or balancing of interests that is broadly acceptable. However, as we have already intimated, this is a long way short of what we might ideally expect of legitimate governance.

First, it is not clear on what basis particular interests that are pressed on regulators are to be judged legitimate or illegitimate, or indeed whether interests are differentiated in this way. In order to distinguish between legitimate and illegitimate interests, a theory of legitimacy is required; and this balancing approach simply does not have any such theory. If, on the other hand, no distinction is drawn between legitimate and illegitimate interests, then illegitimate interests might be allowed to shape an accommodation of interests that will be claimed to be 'legitimate'.

Second, all interests (whether legitimate only, or legitimate and illegitimate) are flattened in the balancing process. No distinction is drawn between 'higher order' and 'lower order' interests. Indeed, there is no ranking of interests (whether higher order or lower order). To do this, a theory of value would be needed and, again, this strategy simply does not have any such theory.

Third, a proposed balance of interests will be presented as legitimate if it is 'reasonable' or 'not unreasonable' relative to the interests put forward for consideration. Not only is this a weak view of legitimacy, it allows for more than one accommodation to be presented as (and accepted as) legitimate; and such an approach has no resources to explain or justify why one reasonable accommodation is to be preferred to another.

Finally, it is unclear whether the burden of justification is on those who argue for permission or on those who argue for prohibition or restriction; nor is it clear whether, at any stage, the burden is transferred from one side to the other.

Nevertheless, whilst such an approach is highly problematic, it might be the best that we can do—in which case, such everyday decisions are prime candidates for the adoption of automated governance by smart technologies.

6. CONCLUSION

With so many particular questions concerning emerging technologies now prompting discussion by lawyers, there is a risk that we, so to speak, lose sight of the wood for the trees. Responding to that risk, this chapter has identified three organising threads for legal and regulatory discussions: one concerns the relationship between law and good (legitimate) governance; a second concerns the relationship between legitimate and effective governance; and a third identifies the foundations of good governance in respect for the global commons. Our discussion has highlighted the following four key take-home messages.

First, we should retrieve the idea of governance in the broad sense. This allows us to place 'law' in the larger picture of governance. In this picture, there are different modalities of governance; and the most important questions are not about what is or is not law, nor about whether governance is by rules or by technologies, but about the legitimacy of governance and, concomitantly, the criteria of good governance.

Second, questions about legitimacy and good governance, having been neglected or marginalised by traditionally positivist and regulatory conceptions of law, also need to be retrieved. Considerations of legitimacy are prior to considerations of effectiveness.

Third, the non-negotiable starting point for good governance is respect for the global commons, this representing the essential (and generic) conditions for humans, their communities, and their governance. We can discount the objections of Cartesian demons: no human agent can coherently argue that the compromising of the generic conditions (for human existence and agency) should be permitted.

Finally, the imperatives of good governance will not entail the end of plurality and difference; there will continue to be room for communities to govern in their own distinctive way. Nevertheless, governance of technologies—and, for that matter, governance by technology—once orientated to the global and generic imperatives, will be significantly transformed, if not rendered unrecognisable.

REFERENCES

Abbott, R. (2020). The Reasonable Robot. Cambridge, England: Cambridge University Press.
Addink, H. (2019). Good Governance. Oxford, England: Oxford University Press.
Aral, S. (2020). The Hype Machine. London, England: Harper Collins.
Brownsword, R. (2008). Rights, Regulation and the Technological Revolution. Oxford, England: Oxford University Press.
Brownsword, R. (2011). Lost in Translation: Legality, Regulatory Margins, and Technological Management. Berkeley Technology Law Journal, 26, 1321–1365.
Brownsword, R. (2019a). Law, Technology and Society: Re-imagining the Regulatory Environment. Abingdon, England: Routledge.
Brownsword, R. (2019b). Legal Regulation of Technology: Supporting Innovation, Managing Risk and Respecting Values. In T. Pittinsky (Ed.). Handbook of Science, Technology and Society (pp. 109–137). New York, NY: Cambridge University Press.
Brownsword, R. (2020). Law 3.0: Rules, Regulation and Technology. Abingdon, England: Routledge.
Brownsword, R. (2021). Regulating Automated Healthcare and Research Technologies: First Do No Harm (to the Commons). In G. Laurie, E. Dove, A. Ganguli-Mitra, C. McMillan, E. Postan, N. Sethi & A. Sorbie (Eds.). The Cambridge Handbook of Health Research Regulation (pp. 266–274). Cambridge, England: Cambridge University Press.
Brownsword, R. (2022a). Rethinking Law, Regulation and Technology. Cheltenham, England: Edward Elgar.
Brownsword, R. (2022b). Technology, Governance, and Respect for the Law: Pictures at an Exhibition. Abingdon, England: Routledge.
Brownsword, R. & Goodwin, M. (2012). Law and the Technologies of the Twenty-First Century. Cambridge, England: Cambridge University Press.
Brownsword, R. & Harel, A. (2019). Law, Liberty and Technology—Criminal Justice in the Context of Smart Machines. International Journal of Law in Context, 15, 107–125.
Brownsword, R., Scotford, E. & Yeung, K. (Eds.). (2017). The Oxford Handbook of Law, Regulation and Technology. Oxford, England: Oxford University Press.
Brownsword, R. & Yeung, K. (Eds.). (2008). Regulating Technologies: Legal Futures, Regulatory Frames, and Technological Fixes. Oxford, England: Hart.
Deakin, S. & Markou, C. (Eds.). (2020). Is Law Computable? Oxford, England: Hart.
De Filippi, P. & Wright, A. (2018). Blockchain and the Law. Cambridge, MA: Harvard University Press.
European Commission (2021). Explanatory Memorandum to the Proposed Regulation on AI, Brussels, COM(2021) 206 final.
Fairfield, J.A. (2021). Runaway Technology. New York, NY: Cambridge University Press.
Fox, D. (2019). Birth Rights and Wrongs. New York, NY: Oxford University Press.
Freedman, L. (2017). The Future of War: A History. London, England: Allen Lane.
Fuller, L.L. (1969). The Morality of Law. New Haven, CT: Yale University Press.
Goddard, D. (2022). Making Laws that Work. Oxford, England: Hart.
Hart, H.L.A. (1961). The Concept of Law. Oxford, England: The Clarendon Press.
Kelsen, H. (1967). The Pure Theory of Law (2nd ed.). Berkeley, CA: University of California Press.
Kerasidou, A. & Kerasidou, C.X. (2021). AI in Medicine. In D. Edmonds (Ed.). Future Morality (pp. 83–92). Oxford, England: Oxford University Press.
Kerr, I. (2010). Digital Locks and the Automation of Virtue. In M. Geist (Ed.). From 'Radical Extremism' to 'Balanced Copyright': Canadian Copyright and the Digital Agenda (pp. 247–303). Toronto, Canada: Irwin Law.
Kletzer, C. (2021). Law, Disintermediation and the Future of Trust. In L.A. DiMatteo, A. Janssen, P. Ortolani, F. de Elizalde, M. Cannarsa & M. Durovic (Eds.). The Cambridge Handbook of Lawyering in the Digital Age (pp. 312–325). Cambridge, England: Cambridge University Press.
Kornhauser, L.A. (2004). Governance Structures, Legal Systems and the Concept of Law. Chicago-Kent Law Review, 79, 355–381.
Latour, B. (2010). The Making of Law. Cambridge, England: Polity Press.
Legal Services Board and Solicitors' Regulation Authority (2022). Social Acceptability of Technology in Legal Services. London.
Lessig, L. (1999). Code and Other Laws of Cyberspace. New York, NY: Basic Books.
Llewellyn, K.N. (1940). The Normative, the Legal and the Law Jobs: The Problem of Juristic Method. Yale Law Journal, 49, 1355–1400.
Pottage, A. & Sherman, B. (2010). Figures of Invention: A History of Modern Patent Law. Oxford, England: Oxford University Press.
Quinn, J. & Connolly, B. (2021). Distributed Ledger Technology and Property Registers: Displacement or Status Quo. Law, Innovation and Technology, 13, 377–397.
Reed, C. & Murray, A. (2018). Rethinking the Jurisprudence of Cyberspace. Cheltenham, England: Edward Elgar.
Sayre, F. (1933). Public Welfare Offenses. Columbia Law Review, 33, 55–89.
Shapiro, S.J. (2011). Legality. Cambridge, MA: The Belknap Press.
Simpson, G. (2009). Great Powers and Outlaw States. Cambridge, England: Cambridge University Press.
Supiot, A. (2017). Governance By Numbers. Oxford, England: Hart.
Tyler, T.R. (2006). Why People Obey the Law. Princeton, NJ: Princeton University Press.
Warnock, M. (1993). Philosophy and Ethics. In C. Cookson, G. Nowak & D. Thierbach (Eds.). Genetic Engineering—The New Challenge (pp. 67–72). Munich, Germany: European Patent Office.

3. Legal responses to techlaw uncertainties
BJ Ard and Rebecca Crootof1

1. INTRODUCTION

The advent of any new technology raises a host of techlaw questions particular to that technology and the legal context. Stepping back, however, these questions take familiar forms. Some questions are substantive and focus on better understanding the nature of the change and its societal effects: which actors are newly empowered? Which relationships have been complicated? What assumptions have been altered or rendered obsolete? When is a difference in degree a difference in kind, and when is there a distinction without a meaningful difference? Other questions are structural and focus on what type of legal uncertainty has been created: is there an application uncertainty, insofar as there is a question of how to apply existing law? A normative uncertainty, in that the application of law produces an undesirable result? Or an institutional uncertainty, raising questions about which institution should regulate the new artifact, newly relevant actors, or newly possible activities? While the resolutions of these techlaw uncertainties may sometimes appear simple or self-evident, each of these moments presents a potential inflection point in the evolution and development of the law.

Technology law is still in the process of developing a shared methodology for answering these questions.2 As this Research Handbook makes clear, there are innumerable points of entry for thinking about the intersections of law and technology, including evaluating the varied implications of particular technological developments, considering how technological change will impact various legal subjects or cross-cutting values, and thinking through how technology will change the practice of law. In this chapter we discuss how the law responds to tech-fostered change, focusing on the moment when a legal actor must decide whether and how to resolve a techlaw uncertainty.3 Almost every framework for addressing these uncertainties employs two possible orientations.

1  This chapter excerpts and builds upon content we originally discussed in previous work (Crootof & Ard, 2021). This piece is the product of countless hours of collaborative work; accordingly, we vary the order of our names with each publication, as it would be impossible to evaluate who contributed more. Thanks to Olia Kanevskaia, Przemysław Pałka, and Rebecca Wexler for helpful suggestions.
2  In previous work, we proposed and outlined a three-part framework for working through techlaw questions that emphasizes our ability to shape legal evolution when responding to technological developments (Crootof & Ard, 2021). This entails identifying the relevant legal uncertainty, considering the distributive effects of different stances towards technological regulation, and determining the appropriate response with an awareness of its techlaw-specific issues.
3  Of course, law influences the development of technology as well; in this chapter, we focus on the moment after a technological development has raised legal uncertainties and legal actors must decide how to respond. We use the term "legal actors" to encompass all entities charged with making, interpreting, or enforcing rules—essentially all entities who influence how law evolves. In addition to some of the more obvious players—judges, legislators, and agency rule-makers—we include legal practitioners, compliance monitors, treaty negotiators, legal advisors, policy advocates, academics, and sovereign states in this category.


The first entails looking back to existing rules and settled compromises, then using analogical reasoning to extend this guidance to new scenarios. The second entails looking forward, crafting new laws or reconfiguring the governing regime.4 While each strategy is a familiar means of legal evolution, in this chapter we identify distinctive techlaw manifestations and considerations.

4  It is also possible to adopt a permissive approach when "looking forward" and decide against creating formal rules; we have discussed the implications of this choice elsewhere (Crootof & Ard, 2021).

2. LOOKING BACK: REASONING BY ANALOGY

The "looking back" approach often entails employing analogies to justify extending law written with different assumptions to new artifacts, actors, or activities. However, in addition to many of the familiar concerns about the use of analogies in legal reasoning (such as the risks of transplantation errors), techlaw analogies' multiple, often-conflated roles complicate their use. Given this, it is not enough to identify particular characteristics of a technology when employing a techlaw analogy. Instead, legal actors must determine which of those characteristics are relevant in light of the legal analysis and social context. Because this analysis depends on the legal question being posed to society at a given point in time, techlaw analogies require ongoing reevaluation as time passes, technology evolves, and circumstances and norms change.

2.1 Multifaceted Roles

In our personal lives, analogies help us extrapolate from past experiences to understand unfamiliar or complicated concepts or identify potential opportunities and dangers. Analogical reasoning is also a fundamental lawyering skill. Lawyers, judges, legal academics, and other legal actors are practiced at matching new fact patterns to older ones, identifying differences, and making arguments as to whether a distinction justifies applying, disregarding, or modifying a precedent. And, in both the personal and legal contexts, analogies are used to advance regulatory narratives, ranging from an advertiser hawking a "self-driving car," to a civil society group decrying "killer robots," to a house rental app's lawyer arguing that their client is just a "data company."

In the techlaw context, these different functions often overlap and may be conflated. A legal actor might analogize a technology to a more familiar one to better understand it and its social uses. "Horseless carriages" and "driverless cars," for example, both link a new technology to a prior one while emphasizing a pertinent absence. Legal actors also employ analogies to elucidate a technology's attendant benefits and risks. "Driverless cars" won't require you to attend to the road on a long commute—but the term also highlights that there is no driver to exercise judgment when needed. Simultaneously, a legal actor may stress a particular analogy because its associated precedent favors a desired narrative or legal conclusion. Calling an autonomous vehicle a "driverless car" suggests that, in the case of an accident, the accountable actor is missing, which might prompt regulators to identify the remote designer, manufacturer, or seller as the responsible entity; calling it a "self-driving car" insinuates that the vehicle itself has some agency, which might operate to deflect attention from relatively remote designers, manufacturers, or sellers (Crootof, 2019).

Because whoever wins the "battle of analogies" often wins the war, there are incentives for legal actors to promote incomplete or misleading analogies that advance their preferred outcome.

This complicated dynamic was at play in American Broadcasting Cos., Inc. v. Aereo, Inc.,5 where the legal actors used analogies to explain what the technology did, highlight its social uses, and advance their preferred legal outcome. The Aereo technology allowed subscribers to rent small antennas to record and transmit over-the-air television broadcasts to their personal, internet-connected devices. As the legal question was whether this business model constituted copyright infringement, the battle of analogies was over whether this new technology was more like prior infringing or non-infringing technologies. Focusing on the fact that the Aereo system enabled users to watch broadcast TV when they desired, Aereo's lawyers argued that it was most akin to a home antenna and DVR—technologies that had been determined to be non-infringing in prior cases. The court disagreed; emphasizing that Aereo was profiting off others' content, the majority analogized the technology to cable transmission, which would constitute an infringing performance. The dissent took a third tack. Asserting that the technology merely enabled individuals to engage in prohibited copying of otherwise free content, it compared the technology to "a copy shop that provides its patrons with a library card," which suggested that the industry was not responsible for users' copyright infringement.

5  Am. Broad. Cos. v. Aereo, Inc., 573 U.S. 431 (2014).

The disagreement over the proper analogy for the Aereo technology also illustrates a broader point: all analogies are incomplete, often in ways that limit our understanding or imagination. Each Aereo analogy captured some element of the technology at issue while masking others. The implicit concealment of these elements may be inadvertent, or it may strategically advance a regulatory narrative. Either way, how is one to evaluate whether a particular analogy is so distinct or misleading as to be inapt? As described in the next section, identifying a technology's legally salient characteristics is a necessary step in evaluating when an analogy's incompleteness renders it inappropriate.

2.2 Legally Salient Characteristics

The basic analogical question is whether an artifact, actor, or activity is similar enough to a predecessor that it should be governed by the same legal regime. Many times, more than one analogy may provide plausible guidance, requiring an assessment of which among them is the best fit (Whitney, 2018). To identify whether a techlaw analogy is appropriate, we must identify the technology's legally salient characteristics.6

6  Cyberlaw and other early techlaw scholars regularly wrestled with the role of analogy when attempting to address the legal uncertainties raised by the internet. A common trope within the literature was the presumption that analogy is about comparing surface-level similarities; an ability to distinguish a new technology from an older one on the basis of different architectural traits could then be used to justify a call for legal intervention and change. Jack Balkin (2004, 2015), Lyria Bennett Moses (2007), Julie Cohen (2007), Michael Froomkin (1995), Daniel Hunter (2003, 2004), and others opened the door to more complicated understandings of techlaw analogies. We draw on this earlier work—especially Jack Balkin's framing—to argue that identifying the appropriate analogy requires identifying a technology's "legally salient characteristics."

An artifact's legally salient characteristics might include its architectural or design features, the actors it empowers, the activities it enables, or the social structures, relationships, and power dynamics it affects, entrenches, or destabilizes. For example, a court evaluating whether a statute written for wagons applies to automobiles might consider the structural differences between the vehicles, or that one may be taken offroad relatively easily while the other is mostly confined to highways, or that both are used as a means of conveyance, or that they generally are used by different socioeconomic groups and for different purposes. Importantly, legally salient characteristics need not be novel.7

7  As Jack Balkin (2004, 2015) has noted, a focus on novelty, rather than salience, risks underestimating the social impacts of a technology. The seeming originality of Uber, Lyft, and other ride-sharing apps distracts from the fact that they are transportation services, placing them in an industry that has long been governed by a broad array of regulations.

No characteristic is inherently legally salient or non-salient—its salience or lack thereof depends on context. The fact that both wagons and automobiles have four wheels might be germane in the context of a statute governing axle standards and completely irrelevant in the context of a statute setting speed limits. A legally salient characteristic—and the associated analogy—that is useful in resolving legal uncertainty within a given legal framework may be less relevant in another. Given this, an analogy that is useful at one point in time may be less appropriate at another, as the sociolegal context within which the legal question is asked will change.

But how does one identify which of these various traits is most legally salient? Identifying which characteristic(s) to focus on necessitates value judgments, as an interpreter's reading of a rule's text or purpose will affect which characteristics they deem most relevant. Consider a statute that requires the forfeiture of "wagons" used to transport liquor into prohibited areas.8 If a court applies a strict textualist reading and focuses on the distinction in the means of conveyance, the rule does not apply to automobiles; if the court assumes the purpose is to punish the conveyance of liquor into those areas, it could. But while analyzing legal salience is complicated by the fact that legal categories are themselves constructs, and while interpreters may disagree about the underlying aims and purposes of a law (or entire legal regime), the exercise is not hopelessly subjective. Rather, it highlights the import of explicit articulations of why and how an analogy is being used.

8  See United States v. One Automobile, 237 F. 891 (1916).

2.3 Entrenchment

Often, the selection of an analogy entails the selection of a legal regime and its associated assumptions, requirements, and governing institutions. Cryptocurrencies, for example, have been variously classified as a currency, a security, and a commodity. However, each of these classifications entails a different regulatory regime with distinct obligations, leading to legal debates over which classification is most appropriate in taxation, bankruptcy, or other contexts (Hughes, 2019).

Similarly, we are at the cusp of this analogy-selection point in the regulation of internet-connected devices, which permit companies to remotely alter or deactivate household appliances. Someone who is harmed as a result of this remote corporate interference might seek to bring a negligence suit, and in the absence of established duties, courts will consider potential analogies. Given a company's ability to remotely assume control of property and discontinue services, three attractive analogies are repossession agents, public utilities, and landlords. The selection among these options will change the scope of the company's duty, and liability decisions made now "will create a powerful feedback loop that will forge our future assumptions about IoT companies' obligations and consumer rights" (Crootof, 2019).

The initial selection of an analogy is critical because analogies often become firmly entrenched, even when they are inapt or are recognized as having problematic or dangerous consequences. First, the perceived benefits of stability may overshadow the analogy's problems. Second, the choice of an analogy and its attendant legal regime affects how the technology subsequently develops—and interest groups who benefit from the resulting uses and configurations of the technology will have incentives to preserve the status quo. The fight over the appropriate analogy for remote corporate interference with Internet of Things devices, for example, will not only determine which extant rules apply and how, but also incentivize some technological developments and social uses over others.

Inapt analogies may persist even when they contribute to problems the legal regime was originally attempting to eliminate. For example, the conception of cyberspace as a "place" encouraged disability rights advocates to interpret provisions of the Americans with Disabilities Act governing "places" of public accommodation to apply to web "sites" (Reid, 2020). While useful in improving website accessibility, this narrowed focus arguably prevented advocates from thinking more broadly about the accessibility of all internet infrastructures—thereby allowing inaccessible practices in the latter to become more ensconced (Reid, 2020).

2.4 Dangerous Analogies

Mechanically extending old rules to newer technologies without wrestling with legally relevant differences in a technology's design or social use is a recipe for ineffective or even dangerous law. In the best of the worst-case scenarios, legal actors may simply create ineffective rules because they ignore pertinent differences between technologies. The 1930 London Naval Treaty and 1936 London Protocol both equate submarines with surface warships, requiring them to comply with the general prohibition against neutralizing enemy merchant vessels without first ensuring the safety of their passengers and crew. But because those drafting rules for warships did not anticipate small, underwater boats—whose design makes it impossible for submarines to take additional passengers onboard or safely escort enemy vessels to a nearby port—these requirements were widely ignored during World War II (Crootof, 2015).

The under-considered use of an analogy may likewise result in bad law. As many have noted, legal actors regularly overlook the differences between physical space and the internet in applying rules developed for the former to the latter, with problematic side effects. US courts have productively applied the "trespass to chattels" doctrine to create tort liability for spam emails—but their unthinking extension of the doctrine to useful spiders, scrapers, and non-commercial emails has led to unintended complications. In international law, some argue that foreign cyberoperations constitute violations of state sovereignty, which would transmute minor and routine interferences into prohibited interventions, permitting the affected state to employ unilateral escalatory countermeasures.
Analogies that were useful at one point in time may become problematic as the social use of a technology changes, sometimes to the extent that an analogy can be employed to achieve aims that contradict its original purpose. For example, the internet was once celebrated as a separate "place" that existed outside of the jurisdiction of "meatspace" sovereigns. As John Perry Barlow (1996) famously declared, "Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind … You have no sovereignty where we gather."

But some argue that the "place" metaphor has since facilitated the application of real-property logics to once-shared zones, resulting in enclosures and exclusions that prevented the shared uses Barlow and others had celebrated (Cohen, 2007; Hunter, 2003).

Using analogies inappropriately to extend existing rules can also create new dangers. Today, there is a heated debate within the AI research community regarding the respective benefits of open or restricted research norms. Many are borrowing assumptions and conclusions from vulnerability disclosure norms in software development to argue for more open research practices. However, because they have not considered the relative ease of patching discovered software vulnerabilities, there is little awareness of how employing this analogy may result in publication practices that asymmetrically benefit malicious actors in the AI context (Shevlane & Dafoe, 2020).

2.5 Using Analogy Thoughtfully

Analogies' varied roles, inescapable value judgments, risk of entrenchment, and potential for intentional and inadvertent misuse have prompted some to argue we should dispense with their use in the techlaw context altogether (Whitney, 2018). But, as evidenced by daily practice, analogies are irreplaceable tools for extending the law. Instead of tossing aside analogical reasoning, legal actors must use it thoughtfully (Lakier, 2018). It often makes sense to focus on the use and social meaning of the technologies being compared rather than to place great weight on the similarities or differences in their design. Yet there are also scenarios where the technical details of the technologies' design merit attention—including those where prior rule-makers intended to enact a narrow, tech-specific rule to govern only a particular technology. Ultimately, those employing an analogy for a particular technology in a particular context must consider alternatives, acknowledge the selected analogy's limitations, and regularly reexamine its fitness.

However, using analogies to stretch law cannot resolve all techlaw uncertainties. Sometimes there is no good analogy; sometimes an analogy has problematic second-order effects. In such cases, rather than looking back to older law to address a techlaw uncertainty, it might be preferable to instead "look forward" and consider adopting new rules, understandings, or institutions.

3. LOOKING FORWARD: NEW RULES, NEW UNDERSTANDINGS, AND NEW INSTITUTIONS

The future is full of uncertainty: legal actors simply cannot predict all the ways in which technological capabilities will impact society, nor the outcomes and effects of responsive legal evolution. While relatively tech-neutral background rules will provide general guidance, legal actors will sometimes decide that more tech-specific rules or even entirely new institutions are necessary (Ohm, 2010). When creating a new rule, lawmakers must decide how a rule should be designed and what it should govern, both of which entail tradeoffs between more- or less-flexible structures (which are more or less amenable to analogical extension). The choice between tech-specific and tech-neutral law also entails a judgment as to which set of actors—lawmakers or legal interpreters—should decide how the law applies to future developments.

More seismically, tech-fostered social change may prompt a fundamental reassessment of a legal regime's founding assumptions or aims, which sometimes precipitates the creation or reconfiguring of legal institutions (Ard, 2022).

3.1 "Future-Proofing" New Law

Lawmakers frequently aim to craft "future-proof" rules that will not be rendered obsolete by continued technological development. Unfortunately, this goal often causes lawmakers to make design and content decisions based on underexplored intuitions. When lawmakers must make design choices that affect how easy it will be to amend the rule, they often prioritize stability over flexibility; when lawmakers must determine what content a rule governs, they often prefer flexibility to precision.

At the design level, lawmakers must determine how difficult it should be to amend or overturn the new law. One common assumption is that the harder-to-modify rule is generally preferable because it will resist future change; the implicit corollary is that the only reason such rules are not always pursued is the difficulty of enacting these "stronger" rules. But while higher bars to creation or modification tend to lend rules more perceived strength, these rules are also more likely to result in over- or underinclusive law. First, precisely because of their perceived strength, harder-to-change rules are subject to bargains that undercut their original aims. Second, while formal stability may stave off explicit repeal, the difficulty of updating them to address changed circumstances puts "stronger" laws at greater risk of ineffectiveness or even obsolescence.

Lawmakers must also decide on the rule's content—which artifacts, actors, or activities a rule governs. Again, there is a similar desire to create long-lasting rules, but here, this aim often manifests as a preference for tech-neutral over tech-specific law. Tech-neutral laws are more flexible, as they can be relatively easily extended to cover technological developments as they arise, thereby minimizing legal gaps. However, the benefits of this kind of interpretative flexibility often come at the expense of clarity and narrow tailoring, giving rise to legal overlaps and overinclusive law.

In short, both the design and content-level tradeoffs require careful consideration. Lawmakers—and those arguing for new laws—must clarify why their proposals will best balance a rule's effectiveness and longevity and consider which actors will be best positioned to update and apply the law in the future. There will be situations where a more stable, tech-neutral rule is best; there will be many others where it is not.

3.2 Design Flexibility

Legal actors create new laws in order to resolve legal uncertainties, but a rule's ability to do so successfully will depend on its design, which includes its jurisdictional scope (the region it governs), form (whether it is a statute, common law, or other form of law), and implementation (how flexible it is, with regard to both its structure and content). Of course, legal actors will only rarely be able to actually choose a rule's jurisdictional scope or its form; while it may be possible for interested parties to petition a specific type of lawmaker to address a particular legal uncertainty, a lawmaker's rule-making authority is usually constrained to certain jurisdictions and forms. Still, in the interest of completeness, we briefly outline the different forms rules might take to highlight a few of their relative strengths. We then discuss in greater depth the design choices where legal actors have more agency: implementation options and the tradeoffs they pose for stability and flexibility.

3.2.1 Jurisdictional scope and form

New technologies often cross regional lines and blur traditional boundaries, inviting consideration of a new regulation's appropriate jurisdictional scope and form. This is often apparent with new communications and transportation technologies, but the externalities of a technology—like pollution—also implicate these questions.

A rule's jurisdictional scope might be international, regional, national, or sub-national. The greater a rule's jurisdictional scope, the more it promotes the benefits associated with consistency; the narrower the scope, the more it leaves room for the potential benefits of tailored applications and policy experimentation. In the United States, this tension is often at the forefront of federalism debates. National legislation can set baseline safety standards, protect minority rights, and accomplish other national policy aims. Meanwhile, states are famously "laboratories of innovation," free to experiment with different regulatory approaches to a host of contested policy issues in light of local constraints and preferences.9

9  See New State Ice Co. v. Liebmann, 285 U.S. 262 (1932).

A rule's form is related to its jurisdictional scope, insofar as international and national rules take different forms. The two primary sources of international legal obligations are treaties and customary international law. While both forms have a host of practical and political comparative advantages, one of the main distinctions for techlaw purposes is the difference in their respective levels of flexibility (Helfer & Wuerth, 2016). Historically, customary international law comprised non-negotiated, long-established, and stable rules governing relations among all states. These rules provided the backdrop against which states concluded bilateral, relatively flexible treaties that clarified or modified their respective legal obligations. Today, the rise of multilateral, constitutive treaties and the relatively swift development of new customary international law have complicated the international legal order. Multilateral treaties—many of which codify older customary international law—are extremely difficult to formally modify; meanwhile, newer customary international law is creating loopholes and exceptions to these established multilateral treaty regimes. Layered on top of this web of treaty and customary rules are various forms of "soft law": non-binding, non-legal agreements on substantive commitments with which parties are expected to comply. What soft law lacks in legal force is often made up for in flexibility.

Similarly, rules take different forms within domestic law with varying levels of flexibility. In US law, for example, rules follow a hierarchy: constitutional provisions supersede federal and state statutes, which in turn prevail over common law precedent.10 Constitutional provisions are difficult to enact and amend; any change requires ratification by a supermajority of the states. Compared with constitutional provisions, federal and state statutes are far easier to create and modify, as they usually only require the approval of a majority of sitting legislators and the relevant executive. Meanwhile, compared with statutes, common law is celebrated for its adaptability, administrative regulations allow for frequent updates, and presidents can create or undo executive orders unilaterally. Alternative modes of regulation, such as industry self-regulation, have also prioritized flexibility.

10  Further complicating matters, both treaty and customary international law influence domestic US law, though the contours of that influence are disputed and evolving.
3.2.2 Implementation

Of all the design options, lawmakers have the most freedom to innovate in a rule's implementation, allowing them to address the need for stability or flexibility in the face of technological change. Notwithstanding differences in the inherent flexibility of different forms of rules, design decisions can make a particular rule more or less flexible. For example, treaty drafters can raise or lower the default procedural requirements for formal amendment. The Convention on Certain Conventional Weapons (1980)11 is a framework convention that explicitly anticipates regular amendments, and it has been augmented by five protocols governing different types of weaponry. At the other end of the spectrum, rule-makers can force reconsideration of extant rules by incorporating sunset provisions, which establish a date upon which the law ceases to have effect unless action is taken to extend it. States chose not to renew the 1899 ban on aerial bombardment in 1907, for example, after the 1903 invention of the airplane (Crootof, 2015).

11  Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed To Be Excessively Injurious or To Have Indiscriminate Effects, October 10, 1980, S. Treaty Doc. No. 103-25, 1342 U.N.T.S. 137.

Likewise, implementation decisions can constrain or expand a rule's content flexibility. Rather than set all the particulars of a rule by statute, for example, Congress often designates an agency to promulgate or update rules as circumstances develop. Or a rule may set forth a general, tech-neutral regulation, but allow relevant parties to request tech-specific exceptions. In US law, the Digital Millennium Copyright Act12 features this sort of arrangement: Its triennial rule-making process creates an opportunity for regulated or otherwise interested parties to advocate for exemptions from the Act's anticircumvention provisions, which has resulted in exemptions for 14 classes of copyrighted works (Casey & Lemley, 2020).

12  Digital Millennium Copyright Act of 1998, 17 U.S.C. § 1201(a)(1)(C)–(D) (2018).

Implementation choices may also explicitly or implicitly limit how much a rule's content may be expanded through analogical reasoning. For example, the Rome Statute (1998) establishing the International Criminal Court13 states that "[a] person shall not be criminally responsible under this Statute unless the conduct in question constitutes, at the time it takes place, a crime within the jurisdiction of the Court," and that "[t]he definition of a crime shall be strictly construed and shall not be extended by analogy." In contrast, other classes of rules are presumed to be more flexible. Many human rights treaties lend themselves to expansive interpretations, in part because it is generally acknowledged that the meaning of certain rights will evolve over time. A rule's susceptibility to adaptive interpretation is not always widely agreed upon—indeed, among the more fraught arguments over the US Constitution is the extent to which it permits adaptive interpretation of certain terms, such as "cruel and unusual punishment" and "unreasonable search and seizure." Incorporating standards or reasonableness tests rather than bright-line rules gives later interpreters more room to maneuver.

13  Rome Statute of the International Criminal Court, July 17, 1998, 2187 U.N.T.S. 90.

Lawmakers can also constrain adaptive interpretations by prioritizing some interpreters over others: A rule may designate an authoritative interpreter to minimize disputes regarding how a rule's content may be altered over time.

The treaty establishing the World Trade Organization (1994),14 for example, states that the Ministerial Conference and the General Council "have the exclusive authority to adopt interpretations" of the treaty for all state parties. Alternatively, sometimes rule-makers exclude entire classes of legal actors who would have otherwise been able to influence the construction of the rule. For example, a legislature may pass a statute to preempt contrary judicial or agency rulings.

14  Marrakesh Agreement Establishing the World Trade Organization art. IX(2), April 15, 1994, 1867 U.N.T.S. 154, 159.

3.3 Content Flexibility

While lawmakers are often precluded from considering the full panoply of design options by their institutional role, they have far more freedom to draft a more tech-neutral or tech-specific rule. Relatively tech-neutral rules apply broadly, regardless of the technologies used. In contrast, a relatively tech-specific rule is more narrowly tailored, either with regard to a particular technology, entities who use that technology, or its use as a means to an end.

We discuss these concepts as binary to emphasize the distinctions between them, but they exist on a continuum. Most rules can be rewritten to be more or less tech-neutral or tech-specific. Consider the relative precision of a rule that prohibits using an AK-47 in a park, a rule that prohibits using guns in a park, a rule that prohibits using weapons in a park, and a rule that prohibits activities that might hurt others in a park. Additionally, certain terms—like "phone" or "in writing"—might become more tech-specific or tech-neutral as innovations or social meanings expand or limit what the term encompasses. Accordingly, we use the terms "tech-neutral" or "tech-specific" to refer to a rule's relative position along the spectrum of options.

It is worth noting that there is more structural flexibility within this framework than most realize. While tech-neutral rules are often conflated with standards and tech-specific rules with "bright-line" rules—possibly because both entail design choices that privilege the rule interpreter or maker, respectively—the concepts exist on separate planes. Whether a law takes the form of a rule or a standard is a structural design choice; whether a law is more tech-neutral or tech-specific is a content design choice. Accordingly, it is possible to have tech-specific standards (such as a general requirement that those using leaf blowers "exercise courtesy and take reasonable steps to minimize [their] impacts")15 or tech-neutral rules (such as an ordinance prohibiting noise louder than 50 decibels from 10:00 pm to 7:00 am).16

15  Second Revised Leaf Blower Local Law 11-15-2018, Town of Ossining, N.Y., https://www.townofossining.com/cms/publications/all-documents/town-clerk/local-laws/1945-second-revised-leaf-blower-local-law-11-15-2018/file [https://perma.cc/W8BD-LEUX].
16  NJDEP-Office of Local Environmental Management, Noise Ordinance versus Nuisance Code, The Official Website of the State of New Jersey, https://www.nj.gov/dep/enforcement/NoiseOrdinancevsNuisanceCodeAug08.pdf [https://perma.cc/W42Z-VAEM].

There is a common assumption that a tech-neutral rule is always preferable to a tech-specific one, usually because the former is presumed to be less likely to become obsolete as technologies evolve (Birnhack, 2012). As highlighted by the spectrum of rules regarding weapons in the park, this assumption downplays the tradeoff between flexibility and clarity. Because tech-neutral rules are more flexible, they are more likely to continue to apply as technologies change; there is less risk that new technology, actors, or conduct will fall within a regulatory gap. But while tech-specific rules may be more limited, they are likely to be clearer in application and, as a result, sometimes more effective in dealing with the challenges of the selected technology.
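To make the structure/content distinction concrete, the sketch below encodes the two ordinances cited above as predicates. It is our own hypothetical simplification, not a representation of either ordinance's actual text: the decibel ordinance is modelled as a tech-neutral bright-line rule indifferent to the device used, while the leaf-blower law is modelled as a tech-specific standard that applies to only one technology but turns on an open-ended judgment call.

```python
# A hedged, illustrative sketch (our construction, not from the chapter):
# the structure axis (rule vs. standard) and the content axis (tech-neutral
# vs. tech-specific) vary independently.
from dataclasses import dataclass


@dataclass
class Activity:
    device: str            # e.g., "leaf blower", "stereo"
    decibels: float
    hour: int              # 0-23
    reasonable_care: bool  # stand-in for an adjudicator's judgment call


def violates_noise_rule(a: Activity) -> bool:
    """Tech-neutral RULE: bright-line, indifferent to the device used."""
    night = a.hour >= 22 or a.hour < 7
    return night and a.decibels > 50


def violates_leaf_blower_standard(a: Activity) -> bool:
    """Tech-specific STANDARD: applies only to leaf blowers, but turns on
    open-ended judgment ("courtesy and reasonable steps"), not a bright line."""
    return a.device == "leaf blower" and not a.reasonable_care


late_stereo = Activity("stereo", decibels=60, hour=23, reasonable_care=True)
rude_blower = Activity("leaf blower", decibels=45, hour=10, reasonable_care=False)

print(violates_noise_rule(late_stereo))            # True: any device over 50 dB at night
print(violates_leaf_blower_standard(late_stereo))  # False: not a leaf blower
print(violates_leaf_blower_standard(rude_blower))  # True: judgment-based violation
```

The toy example also shows who is empowered by each choice: the bright-line rule leaves the evaluator almost nothing to decide, while the standard delegates the decisive question (reasonable care) to whoever applies it after the fact.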

Tech-neutral and tech-specific rules are also likely to generate different kinds of legal uncertainties. Tech-neutral laws raise uncertainties associated with overlapping and overinclusive laws, while tech-specific laws raise uncertainties associated with gaps as well as both under- and overinclusive law. Deciding between a more tech-neutral and tech-specific rule thus requires evaluation of which kinds of legal uncertainties are most acceptable. The choice also has implications for which legal actors and institutions are empowered to update the law in the face of new developments.

3.3.1 Tech-neutral rules

Tech-neutral rules are framed broadly, often with the aim of applying to activities or their consequences regardless of the technology employed. For example, US copyright law restricts unauthorized copying "by any method now known or later developed";17 it is indifferent to the means by which this conduct is effectuated.

17  17 U.S.C. § 101 (2018).

One of the main appeals of tech neutrality lies in the intuition that it is more flexible and "future-proof" than an approach regulating specific technologies. The more tech-neutral the law, the less lawmakers must act to update the law each time a new device is invented. Instead, tech-neutral rules invite interpreters—including the regulated industry, watchdog entities, law enforcement, prosecutors, judges, and adjudicative agencies—to employ analogical reasoning to extend the tech-neutral rule to novel technologies, newly empowered actors, or previously impossible or rare conduct. Indeed, the ability to postpone wrestling with difficult application questions is a standalone strength of tech-neutral rules, insofar as their breadth may allow rule-makers who disagree on specific applications to reach agreement on broader aims and codify them. For example, the intuitively appealing idea that all weapon use should be subject to "meaningful human control" may allow states to find some degree of consensus on the regulation of autonomous weapon systems, notwithstanding the fact that there are wildly different understandings of what that phrase actually requires (Crootof, 2016).

Broad rules may also promote innovation (Birnhack, 2012). By minimizing discrimination between different technologies that achieve similar results, tech-neutral rules incentivize developers to experiment with creating superior alternatives (Birnhack, 2012; Ohm, 2010). Indeed, tech-neutral rules may direct innovators toward particular ends by prescribing aspirational standards without dictating a single route to compliance. For example, Rwanda adopted performance-based regulations for civilian drones that emphasize a safety threshold but invite companies to experiment with meeting the requirement. Additionally, to the extent they do not privilege one technology over another, tech-neutral rules may also reduce the likelihood of technological lock-in (Birnhack, 2012; Reed, 2007).

Particularly in legal regimes where there may be some doubt as to whether any law applies to a new technology, tech-neutral rules help minimize dangerous legal gaps and loopholes. International law, for example, is often understood as being consent-based; proponents of this view argue that states that have not agreed to be bound by a rule are not obliged to follow it. Accordingly, technologies that enable new conduct—like sending satellites into orbit or engaging in cyberoperations—regularly raise the question of whether existing law applies at all.

In such cases, tech-neutral rules make it easier to justify the application of old laws to new artifacts, actors, and activities.

However, tech-neutral rules still give rise to numerous legal uncertainties. A tech-neutral rule may be less likely to create a legal gap, but its general language will raise application questions regarding how the law should be applied in different scenarios. Sometimes a tech-neutral law is so vague that it permits problematic, self-serving interpretations by regulated entities, reducing its efficacy. Tech-neutral rules may also easily become overinclusive as interpreters use analogies to stretch law too far or as technological advances alter conduct and possible outcomes. Overinclusive laws risk interfering with socially desirable conduct, and risk ineffectiveness if that interference results in their being underenforced. Further, and somewhat counterintuitively, the existence of an overinclusive rule may make it more difficult for legal actors to enact needed legislation as new harms are recognized. Due to a perception that there is relevant law and a reluctance to revisit old drafting compromises, lawmakers may be less able to revise extant law than to draft entirely new rules.

Because they are premised on a set of assumptions made in a particular technological moment, even laws that are facially tech-neutral may be rendered obsolete in the face of future technological developments that expose their technology-contingent assumptions (Bennett Moses, 2005; Birnhack, 2012; Reed, 2007). Road safety laws that nominally include any vehicle often presume a conventional automobile with a human driver. Accordingly, laws mandating that trucks make regular rest stops so the driver can sleep may prove overinclusive if applied to autonomous vehicles. The common law faces the same challenge. The venerable ad coelum doctrine in property—which traditionally protected a landowner's airspace "up to the heavens"—is tech-neutral on its face. Yet it was founded on the assumption that no one could make practical use of the air; with the advent of airplanes, US courts waved away the rule as the product of "an age of primitive industrial development."18

18  Johnson v. Curtiss N.W. Airplane Co., U.S. Aviation Reports 42 (Minn. Dist. Ct. 1923).

3.3.2 Tech-specific rules

While often undervalued in discussions of regulating technology, tech-specific laws have a number of strengths that may make them preferable to more tech-neutral versions. If nothing else, tech-specific laws are relatively clear. As a result, it is easier to determine precisely whether and how the law applies to an artifact, actor, or activity. This clarity may foster ex-ante compliance, as the entities using the regulated technology will better understand their obligations and have less room to argue the rule does not apply; meanwhile, enforcers will be on more solid ground in identifying violations.

Precision is especially important in areas where fundamental rights are at stake. In criminal law, for example, the rule of lenity requires clarity as a prerequisite to imposing criminal liability and depriving an individual of liberty. Tech-specific laws, like those that criminalize particular types of weapons or drugs, respond to this need for specificity.

Tech-specific laws may also be more carefully tailored to the issue the lawmaker intends to address. For example, motorcycles, conventional automobiles, and tractors all pose different safety and emissions concerns; it will generally be easier and more effective for regulators to address these distinct concerns by promulgating separate rules for each class of vehicle than through a comprehensive, tech-neutral rule for all motor-driven land vehicles. This tailoring minimizes ambiguity and problems associated with overinclusion and can be used to slow the adoption of certain forms or uses of a technology while permitting others.

The narrow scope of tech-specific rules may make them easier to pass, as lawmakers may believe that a tech-specific rule is less likely to have unintended, widespread effects. (Tech-specific rules are also prone to being proposed by lobbyists and unopposed by others, as a narrow rule is likely to be supported by a concentrated group of interested parties.) More specific rules can also promote innovation, insofar as creators know exactly what they need to design around to evade regulatory constraints.

Moreover, while tech-specific rules may be relatively short-lived, their impermanence may itself be a strength. As Paul Ohm notes, tech-specific laws incorporate a de facto sunset: "[T]ech-specific rules serve one unappreciated benefit: they sunset when new technologies are introduced. A law that governs only the use of a telephone, for example, will not govern the use of the Internet" (Ohm, 2010). This approach allows lawmakers to capture the benefits of a sunset provision—namely, its ability to mitigate the difficulties of regulating despite inadequate information—without the arbitrariness of picking an expiration date that may bear no relation to changes in the use or format of the relevant technologies (Ohm, 2010). Lawmakers can renew a rule if needed; meanwhile, to the extent the lawmaker is crafting the rule with insufficient information, it may be preferable to create a rule with a shorter lifespan to balance out the possibility that it does more harm than good (Birnhack, 2012).

Of course, tech-specific rules also have drawbacks. While some legal rules may fade into obscurity without incident or productively spur rule-makers to act, tech-specific laws may easily create legal gaps and underinclusive rules (Ohm, 2010). These uncertainties may be mitigated if lawmakers regularly revise the law, but the practical difficulties of doing so increase the likelihood that tech-specific legal regimes for certain artifacts, actors, or activities will ultimately result in underregulation. Creating tech-specific rules is also practically difficult because it often requires more technical expertise and, collectively, repeated rule creation and revision may demand more rule-making time than creating a single tech-neutral rule. Additionally, tech-specific laws incentivize technological and legal contortions to circumvent the rule (Burk, 2016). Consider Aereo's creative attempt to work around what it perceived as tech-specific caselaw in copyright.19

19  See Am. Broad. Cos. v. Aereo, Inc., 573 U.S. 431 (2014).

Nor is tech-specific law immune to becoming overinclusive, especially when enacted early in the development cycle of a new technology. A rule may be carefully tailored to the particular costs and benefits of the technology at that moment in time, yet impose unnecessary costs as the technology changes. Consider elevator regulations—some still in force—that require having or accommodating operators on each elevator. This rule addressed important safety and worker protection concerns when elevators required manual operation, but it is an anachronism today (Gibbard, 2014; Jones, 2015).

3.3.3 Institutional implications of the tech-neutral–tech-specific spectrum

Tech-neutral and tech-specific rules have different implications for the institutional allocation of power (Birnhack, 2012). If a legislature, agency, or other lawmaker adopts a tech-specific rule, it retains more power by minimizing opportunities for discretion by those who interpret and apply the law. By contrast, if the lawmaker adopts a tech-neutral rule, it vests judges, executive branch actors, and other legal interpreters with more power to determine the scope and meaning of the rule.

The decision whether to adopt a more tech-specific or tech-neutral rule would ideally be driven by a determination about which type of legal actor or combination of institutions was better positioned to regulate the relevant artifact, actor, or activity. This question necessarily implicates complex sub-questions of institutional authority, competence, and legitimacy. The ideal combination of rule-makers will vary depending on a host of factors, including how much is known about the regulated item and its social uses; how likely it is to change over time; the type, magnitude, or probability of the benefits it promises; and the type, magnitude, or probability of the harm it may cause.

Just to highlight an example of varying institutional competencies: The choice between tech-neutral and tech-specific rules implicates different timing and information-gathering capabilities. Because tech-neutral and tech-specific rules prioritize different sets of legal actors, the timeline for finalizing the substance of a rule varies between them. Tech-specific rules embody a decision by a rule-maker, such as a legislature, to treat current and future uses of the technology in a specific way from the moment of enactment. Notwithstanding a legislature's broader information-gathering powers relative to a legal interpreter, like a court, a tech-specific rule may prove overinclusive or ineffective if the technology and its uses develop in directions the legislature did not foresee or study. In contrast, tech-neutral rules allow for decisions that take account of how a technology's social uses evolve over time because they defer to legal interpreters, like courts, to determine how the rule applies to specific uses as they emerge. But this delay does not guarantee better information for making a decision—courts make decisions later in time relative to the enactment of the rule, but they do so with more limited information-gathering capabilities relative to the legislature. To the extent that rule-makers adopt a tech-neutral rule to put off a final decision until the technology and its social construction stabilizes, moreover, they may miss the opportunity to proactively shape how the technology develops (Bernstein, 2006).

3.4 Reassess the Regulatory Regime

In extreme cases, tech-fostered social change may prompt a fundamental reassessment of a legal regime, possibly by rendering once-dependable assumptions inaccurate or encouraging new understandings of the regime's aims. In some cases, this reassessment culminates with the legal system reconfiguring legal institutions or creating new ones to meet these foundational shifts.

3.4.1 Exposing assumptions

Individual laws and entire legal regimes can be grounded on assumptions rendered inaccurate by technological development. For example, much of US firearm regulation depends on being able to control or at least monitor firearm sales. But if individuals can 3D-print guns at home, they can bypass the regulatory regime entirely by eliminating the point of sale. This activity exposes the assumption that "guns are sold in a marketplace" as central to the regime while simultaneously undermining it. In turn, this challenge to the efficacy of the law requires legal actors to reassess the mechanisms for effectuating the aims of firearm regulation—or any other regulatory regime for items that can now be 3D-printed, if that regime is premised on governmental monitoring or intervention at the point of sale.

3.4.2 Reconceptualizing aims

Tech-fostered social change can also prompt reconceptualization of a legal regime's foundational aims or principles. Just as the machines of the Industrial Revolution forced courts to rethink tort liability (Friedman, 1985; Witt, 2001), industrialization prompted a parallel transformation in property (Horwitz, 1977). Blackstone's view of property rights as absolute may have worked for an agrarian society, but it stood in the way of industrialists whose activities—laying rail lines, polluting, or even flooding lands for a mill—necessitated interfering with others' property rights (Ard, 2019). Spurred in part by technological developments, the courts reshaped property doctrine to advance contemporary notions of economic progress, replacing property absolutism with an attempt to balance newly competing rights.

The principles and commitments exposed through such reassessment may not be entirely new; technological change may also push legal actors to recognize and articulate previously implicit concerns, commitments, or rights (Surden, 2007). Take Samuel Warren and Louis Brandeis's "The Right to Privacy" (1890), one of the most famous articles in the techlaw canon. No court had previously acknowledged such a right, but in canvassing the law of defamation, intellectual property, and tangible property, they explicated a more general "right to be let alone" that had come under threat in the wake of the portable camera. Once this right was made explicit, courts developed it through tort law and legislatures codified it.

3.4.3 Reconfiguring institutions

Sometimes, reassessing a legal regime indicates the need to restructure legal institutions to better grapple with emerging challenges. Such reconfiguration often entails shifting regulatory power from one institutional actor to another. When courts and legislatures proved unable to sufficiently respond to the privacy and data security challenges that followed the rise of the internet, the US Federal Trade Commission entered the void (Solove & Hartzog, 2014). As proponents of this approach have argued, the Commission's monitoring and enforcement expertise situate it to coordinate industry self-regulation as data-collection practices develop, and its enforcement authority yields a de facto body of privacy common law (Solove & Hartzog, 2014).

Alternatively, reassessment may spur institutional creation or modification. The US Congress found that regulating competition among the railroads required greater agility and subject-matter expertise than it or the judiciary could muster. To meet these needs, it established the Interstate Commerce Commission as the first independent administrative agency. Congress and policy advocates subsequently embraced the creation of new agencies as a go-to strategy for regulating high-tech industries from the early twentieth century through the present (Calo, 2014). The explosion of international, multinational, and multi-stakeholder modes of lawmaking and adjudication accompanying the rise of the internet and algorithmic governance marks a new era of ongoing reconfiguration (Cohen, 2019, 2021; Kaminski, 2019).

4. CONCLUSION

When confronted with techlaw uncertainties, legal actors have two main means of resolving them. Looking backward entails using analogy to stretch extant law to new situations, while looking forward manifests in the creation of new law or reconfigurations of the governing legal regime.

This is, of course, a simplified account, given that each set of moves implicates the other—the future we chart now will not only shape the governance of current technologies, it will also set the stage for those looking backward when they must deal with future uncertainties.

REFERENCES

Ard, B. (2022). Making Sense of Legal Disruption. Wisconsin Law Review Forward, 2022, 42–63.
Ard, B. (2019). More Property Rules than Property? The Right To Exclude in Patent and Copyright. Emory Law Journal, 68, 685–737.
Balkin, J.M. (2004). Digital Speech and Democratic Culture: A Theory of Freedom of Expression for the Information Society. New York University Law Review, 79, 1–55.
Balkin, J.M. (2015). The Path of Robotics Law. California Law Review Circuit, 6, 45–60.
Barlow, J.P. (1996). A Declaration of the Independence of Cyberspace. Retrieved from https://www.eff.org/cyberspace-independence.
Bennett Moses, L. (2007). Recurring Dilemmas: The Law's Race To Keep up with Technological Change. University of Illinois Journal of Law, Technology & Policy, 2007, 239–285.
Bennett Moses, L. (2005). Understanding Legal Responses to Technological Change: The Example of In Vitro Fertilization. Minnesota Journal of Law, Science & Technology, 6, 505–618.
Bernstein, G. (2006). When New Technologies are Still New: Windows of Opportunity for Privacy Protection. Villanova Law Review, 51, 921–950.
Birnhack, M. (2012). Reverse Engineering Informational Privacy Law. Yale Journal of Law & Technology, 15, 24–91.
Burk, D.L. (2016). Perverse Innovation. William & Mary Law Review, 58, 1–34.
Calo, R. (2014). The Case for a Federal Robotics Commission. Retrieved from https://www.brookings.edu/research/the-case-for-a-federal-robotics-commission/.
Casey, B. & Lemley, M.A. (2020). You Might Be a Robot. Cornell Law Review, 105, 287–361.
Cohen, J.E. (2007). Cyberspace as/and Space. Columbia Law Review, 107, 210–256.
Cohen, J.E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. New York: Oxford University Press.
Cohen, J.E. (2021). From Lex Informatica to the Control Revolution. Berkeley Technology Law Journal, 36, 1017–1050.
Crootof, R. & Ard, B. (2021). Structuring Techlaw. Harvard Journal of Law and Technology, 34, 343–417.
Crootof, R. (2019). The Internet of Torts: Expanding Civil Liability Standards to Address Corporate Remote Interference. Duke Law Journal, 69, 583–667.
Crootof, R. (2016). A Meaningful Floor for "Meaningful Human Control." Temple International and Comparative Law Journal, 30, 53–62.
Crootof, R. (2015). The Killer Robots Are Here: Legal and Policy Implications. Cardozo Law Review, 36, 1837–1915.
Friedman, L.M. (1985). A History of American Law (2nd ed.). Old Tappan, New Jersey: Touchstone Books.
Froomkin, A.M. (1995). The Metaphor is the Key: Cryptography, The Clipper Chip, and the Constitution. University of Pennsylvania Law Review, 143, 709–897.
Gibbard, F. (2014). Blame It on the Elevator Pilot: Dark Tales of Entry-Level Negligence. Colorado Lawyer, 43, 55–60.
Helfer, L.R. & Wuerth, I.B. (2016). Customary International Law: An Instrument Choice Perspective. Michigan Journal of International Law, 37, 563–609.
Horwitz, M.J. (1977). The Transformation of American Law. Cambridge, Massachusetts: Harvard University Press.
Hughes, S.J. (2019). Property, Agency, and the Blockchain: New Technology and Longstanding Legal Paradigms. Wayne Law Review, 65, 57–80.
Hunter, D. (2003). Cyberspace as Place, and the Tragedy of the Digital Anticommons. California Law Review, 91, 439–519.
Hunter, D. (2004). Teaching and Using Analogy in Law. Journal of the Association of Legal Writing Directors, 2, 151–168.
Jones, M.L. (2015). The Ironies of Automation Law: Tying Policy Knots with Fair Automation Practices Principles. Vanderbilt Journal of Entertainment and Technology Law, 18, 77–134.
Kaminski, M. (2019). Binary Governance: Lessons from the GDPR's Approach to Algorithmic Accountability. Southern California Law Review, 92, 1529–1616.
Lakier, G. (2018). The Problem Isn't the Use of Analogies but the Analogies Courts Use. Knight First Amendment Institute. Retrieved from https://knightcolumbia.org/content/problem-isnt-use-analogies-analogies-courts-use.
Ohm, P. (2010). The Argument Against Technology-Neutral Surveillance Laws. Texas Law Review, 88, 1685–1713.
Reed, C. (2007). Taking Sides on Technology Neutrality. SCRIPT-ed Journal, 4, 263–284.
Reid, B. (2020). Internet Architecture and Disability. Indiana Law Journal, 95, 591–647.
Shevlane, T. & Dafoe, A. (2020). The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse? In Proceedings of the 2020 AAAI/ACM Conference on AI, Ethics, and Society (AIES '20), Feb. 7–8, 2020, New York, NY, USA. Retrieved from https://arxiv.org/pdf/2001.00463.pdf [https://perma.cc/V3HA-RVY9].
Solove, D.J. & Hartzog, W. (2014). The FTC and the New Common Law of Privacy. Columbia Law Review, 114, 583–676.
Surden, H. (2007). Structural Rights in Privacy. Southern Methodist University Law Review, 60, 1605–1629.
Warren, S. & Brandeis, L. (1890). The Right to Privacy. Harvard Law Review, 4, 193–220.
Whitney, H. (2018). Search Engines, Social Media, and the Editorial Analogy. Knight First Amendment Institute. Retrieved from https://knightcolumbia.org/content/search-engines-social-media-and-editorial-analogy.
Witt, J.F. (2001). Toward a New History of American Accident Law: Classical Tort Law and the Cooperative First-Party Insurance Movement. Harvard Law Review, 114, 690–841.

4. What's law got to do with IT: an analysis of techno-regulatory incoherence

Zachary Cooper and Arno R. Lodder

1. INTRODUCTION

With the advent of internet technologies in the 1990s, the cyberutopians preached of a new era governed by the emergent rules of cyberspace itself, immune to pesky state sovereignty and that most coercive instrument of power – the law (Barlow, 1996). The law's role as a lynchpin of social ordering and control was now being challenged by a new regulatory architecture that purported to emancipate its users from the shackles of the law and usher in an era of personal liberty and self-sovereignty. As some sang the virtues of this great new utopia, others dreaded a law-avoiding society where new technological regulatory architectures could be exploited for nefarious purposes, allowing actors to successfully shirk culpability and consequence (Fisher & Wright, 2001; Wu, 2016, pp. 7–11).

Reflecting upon this era of grand prediction, we can rather observe that those once strange bedfellows – the law and the internet – have evolved their initially provocative affair into an increasingly healthy, and rather less scandalous, marriage. Critically, it is a relationship of increasing co-dependence (Lodder, 2013). The notion that the one can hold no power over the other has become laughable. They are fundamentally intertwined, each increasingly shaping the other. As such, nearly every area of the law has been affected by the internet. Yet the legal system itself is also changing, as functions of the judicial system are themselves outsourced to internet architectures.1 Thus, the two grow ever more coherent with one another.

Yet, as the law has become increasingly dependent upon the internet and other emergent technological regulatory architectures, it has also become easier to confuse. In the depth and sophistication of its relationship with existing technologies, it is rendered less malleable to those technologies that are emergent. Thus, the law faces profound challenges as technologies develop in direct response to its own governance mechanisms, seeking to create their own privatised architectures which may be in opposition to or simply incompatible with the law. As internet architectures replaced the utility of other technologies, the law was able to adapt to regulate these new architectures and find coherence.

In Section 2 of this chapter – "The weary giant falls in love" – we will outline how the law and the internet have become increasingly co-dependent in the past few decades. We note that the functionality of a technological architecture is inherently ideological. As such, alteration of the functionality of the law to cohere with a technological architecture will have ideological consequences, whether intended or not. Thus, the law is unable to integrate emergent technological architectures agnostically. Critically, ideological coherence of the law and technology renders possible incoherence with ideologically divergent architectures.

1  For an excellent analysis of the role of technological architectures in the privatisation of dispute resolution, see Koulu (2019).


In Section 3 of this chapter – "New kids on the block" – we outline how legislation that purports to be technology-neutral may still ultimately be beholden to the inherent ideologies and functionalities of prevailing internet architectures, or the predominant regulatory coherence. Thus, we find incoherence where there is ideological tension between architectures, such as between the GDPR's "right to be forgotten" and blockchain architectures' "immutability principle" (Daoui et al., 2019, p. 243). Critically, however, we exhibit how ideological tension can persevere even where the architectures are ostensibly in pursuit of the same goal.

In Section 4 – "The regulatory multiverse and the limits of coherence" – we consider the unique challenges facing the law with the advent of increasingly sophisticated emergent technologies that have their own in-built privatised algorithmic regulatory structures. We posit that the challenges do not come from an inability of the law to meaningfully interact with the technologies. Nor do we believe that the challenges come from radical shifts brought about by the technologies. On the contrary, if new technologies are radical enough in their replacement of means of interaction in society, the law is able to reorient itself and re-cohere itself around this societal change. We finally speculate over a future whereby legal co-dependence with predominant technological architectures creates space for multiple fringe technological architectures which disparately and individually exploit the abstraction of regulatory norms to avoid a coherent relationship with the law – a state we call the regulatory multiverse.

2. THE WEARY GIANT FALLS IN LOVE

Some 26 years have passed since John Perry Barlow, in his "Declaration of the Independence of Cyberspace", decried the "weary giants of flesh and steel" who sought to control our new utopic space (Barlow, 1996). After a couple of decades of reflection replete with legislation directly aimed at regulating his beloved cyberspace, Barlow held his ground, declaring that his contention had been that "cyberspace is naturally immune to sovereignty and always would be" (Greenberg, 2016). Thus, for the cyberutopian, something rather unnatural has played out in the decades since, as public and private operators have fought it out to gain greater control of architectures they believed would be immune from the manipulation of the powerful. As the internet has transformed in the ensuing decades, its architecture has served less as a shelter from control than as its own unique, ever-developing infrastructure whose complicated interaction with the law, while often fractious, is rather more co-dependent than it is hostile. Thus, in stark contrast to its nostalgic characterisation as a great lawless wasteland, it is now recognised as a regulatory tool so profound in its pragmatic architecture that functions of the law increasingly find themselves outsourced to it.

2.1 Private Alternatives

We might consider, for example, the significant privatisation of dispute resolution via internet architectures. Traditional barriers to entry for public dispute resolution through judicial systems have led to private recourse in algorithmic dispute resolution. For example, when the classic 256 GB iPod that Zac was so excited to buy on eBay from Arno turns out to be broken, and Zac is only able to listen to the Dutch rock bands from the 1980s that Arno left on there, if Arno tells Zac that he should be grateful for the musical education and refuses to rectify the situation, Zac may find it easier to turn to eBay's own private algorithmic dispute resolution tools rather than to interact with the Dutch civil legal system.2 Here, although the algorithmic law may broadly mirror legal norms in its modes of determination, and while its components are still governed by legislation, it still ultimately serves as an algorithmically determined alternative whose decision, no matter how heinous or detached from legal precedent or likelihood in a public system, may still prevail.

Given the profound socio-political ramifications of the privatisation of dispute resolution, it would perhaps be intuitive to assume that courts and legislators would be broadly resistant towards algorithms presiding over disputes that were once decided in the public domain. Rather, courts have often welcomed the relief. Although courts have long incentivised out-of-court settlements to ameliorate stress on judicial systems, increasingly courts are enacting ever higher thresholds to have disputes heard before a judge at all (Mulcahy, 2013, pp. 60–61). Thus, legal scholars such as Simon Roberts and Linda Mulcahy have lamented that judges are becoming "no more than exemplary ceremonial figures who legitimate other people's decision making", with civil justice services "used only by those prepared to pay for it" (Mulcahy, 2013, pp. 68, 77; Roberts, 2009, p. 457).3 Thus, support from public regulators for private dispute resolution inherently incentivises coders to develop algorithmic tools to support efficient and cheap ways for conflicts to be decided privately. Far from serving as emancipations from the law, algorithms rather offer publicly incentivised alternatives.

Critically, private regulatory mechanisms do not need to emulate the modes of public systems, nor even attempt to predict or aspire to their outcomes. Indeed, they are fundamentally unable to, even where doing so is the pre-eminent aspiration of their design. Rather, whether by design or not, any architecture will intrinsically support certain ideologies through its functionality.

2.2 Ideological Coherence

We may consider, as an example, the increasing utilisation of so-called legal technologies within the public sector, such as those that algorithmically assist judges with decisions around sentencing, bail and parole. Even where such tools would ostensibly seek to emulate processes and outcomes such that the same decision would be reached had they not been utilised, they are inherently limited by their architectures. As such, only those factors that can be processed by the algorithms hold any relevance, rendering them more reliant on clear facts or data points than on anything more abstract. In turn, those data points that are found to correlate the most directly with certain outcomes find themselves assuming greater importance, all of which has amounted to a simplification of the decision-making process. These predictive algorithmic justice tools have led to the assertion that but a small handful of factors, such as prior criminal history, are the most important to focus on in assessing risk (Carlson, 2017, p. 61). Even were an algorithm to somehow directly emulate a judge's entire decision-making process, taking into account all abstract factors and coming to the exact conclusion that she would have come to, this would still be critically different from a judge having come to the same conclusion.

2  This example was inspired by the analysis of private dispute resolution mechanisms and multiple examples outlined in Koulu (2019).
3  Quotes from Mulcahy.

Even were the judge to endorse this decision, the relationship of responsibility over the decision is inherently altered. Thus, as judicial processes are outsourced to technological architectures, and as technological architectures are simultaneously integrated into judicial processes, the regulatory landscape transforms, as the intrinsic functionalities of these architectures, and therefore their ideologies, coalesce into one another. Such a relationship is at odds with the popular notion that the law is "chasing" technology. We find that technologies are also chasing the law, and in turn, changing its functionality and its intrinsic ideologies.

2.3 Dominant Coherence

Where an architecture is powerful enough in its social utility, the law must also itself bend to the functionality and the ideologies of this architecture. As such, even broad legislation that seeks to regulate behaviour ostensibly tech-agnostically must cohere with internet architectures. As an example, perhaps the most discussed piece of legislation in relation to the internet remains Europe's General Data Protection Regulation (GDPR). Yet, the GDPR does not solely apply to electronic data, but rather defines personal data as "any information relating to an identified or identifiable natural person".4 Thus, the legislation intends to be technology-neutral, as did its predecessor, the Data Protection Directive 95/46. However, for the legislation to be of any utility, it needed to be drafted with a clear technical understanding of the technological architectures through which data was processed, most relevantly the internet. Thus, while the GDPR purports to grant data subjects control over their data, and thereby regulates usage of internet architectures, those architectures themselves have already confined the drafting of the GDPR. Were the GDPR to implement requirements that were fundamentally incoherent with internet architectures, it would be far more likely for the GDPR to undergo amendment to cohere it with the functionality of the internet than the other way around. Thus, its design is fundamentally influenced by the apparatuses it seeks to influence. In its enforcement, the limits of its influence are examined and tested in pursuit of clarity and cohesion between these intertwining architectures. The supremacy of the law over the technology as a regulatory architecture is challenged. The law must fall into line to some extent around technologies in order to be accepted and to function.

Again, any coherence between the law and the technology discreetly transforms the functionality and ideologies of each, rendering them more alike. The greater this functional and ideological coherence, the more readily new technological architectures are able to emerge that are incoherent with the prevailing relationship between the law and technology. Thus, while much of the legal narrative of the past few decades has been defined by lawmakers and judges seeking and finding greater cohesion between mutually interactive legal and technological architectures in their pursuit of societal objectives (both "real" and virtual), the advent and popularity of new technologies which are fundamentally distinctive and disparate in their architectures manifestly challenges and muddies mutual regulatory cohesiveness and exposes the tech-specificity of regulatory architectures which were tech-agnostic in intention. These technologies challenge ostensibly universal regulation, questioning the best means by which the law can meaningfully seek to regulate essentially disparate technological architectures simultaneously.

4  Art. 4, Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (2018).

3. NEW KIDS ON THE BLOCK

The popularisation of blockchain technologies has heralded a certain déjà vu of cyberutopian thought. Once more, there is talk of technological emancipation from society's regulatory structures and the championing of greater, more efficient and more democratic code-based replacements for core societal functions. Those expecting a self-governed algorithmic wonderland free from the shackles of outside regulatory control are sure to find themselves disappointed as this emerging algorithmic architecture intertwines with and coheres around the legal regulatory architectures that preside over it. However, there are unique regulatory challenges in integrating this emergent architecture into the grander regulatory patchwork that surrounds it which were not present when integrating internet technologies. Crucially, legal regulatory architectures over the past decades have been primarily constructed and designed for and around dominant technological architectures, even when in pursuit of tech-agnosticism. This increasing co-dependence creates a greater opportunity for incoherence in emergent technologies with disparate functions and ideologies.

3.1 The Limits of Tech-Agnosticism

We might consider the GDPR once more as an example, which, as stated, purports to be technologically neutral (and indeed, was adopted after the advent of blockchain technologies). Yet, despite the tech-agnostic intention, the architectures of blockchain technologies are fundamentally incompatible with the GDPR, thus exhibiting the limitations of universally driven regulation as well as the means by which emergent regulatory architectures can challenge the pre-existing regulatory landscape. There are a number of deep ideological tensions between the GDPR and blockchain technologies, with core GDPR principles, such as data minimisation and privacy by design, fundamentally challenged (Daoui et al., 2019, p. 243; Finck, 2018, p. 104). The same is true of a number of rights enshrined in the GDPR, such as the right not to be subject to automated decision-making and the right to access (Blockchain and the GDPR, 2018, p. 25; Herian, 2020; Van Eecke & Haie, 2018, pp. 532–533). Here, through intrinsic ideological incompatibility, there is regulatory incoherence.

We find such an example in the tension between the blockchain's immutability principle and the GDPR's Article 17 "right to be forgotten". How exactly is a data subject to enact their right to "the erasure of personal data concerning him or her" within an architecture designed such that information can only be appended and never deleted? Contrary to narratives that paint blockchain developers as rebels against the gatekeepers of legal regulatory structures, many solutions from within the blockchain community have been posited to try to cohere these seemingly incompatible regulatory architectures. These include calls to store personal data in off-chain encrypted databases rather than on the blockchain itself. Instead, a hash – a one-way cryptographic digest that can reference off-chain information without revealing it – could be stored on-chain in place of the data itself (Mannan et al., 2019, p. 12). If a data subject exercises their right to be forgotten, the off-chain information can be erased. While the on-chain hash would survive, it would no longer reference any live information.
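A minimal sketch may make this off-chain pattern concrete (the in-memory "ledger", names and data below are our own illustrative assumptions, not a real blockchain design or a recipe for GDPR compliance):

```python
import hashlib

off_chain_db = {}   # erasable storage holding the personal data itself
ledger = []         # append-only "chain" holding only digests

def record(personal_data: str) -> str:
    """Store the data off-chain; append only its hash on-chain."""
    digest = hashlib.sha256(personal_data.encode()).hexdigest()
    off_chain_db[digest] = personal_data
    ledger.append(digest)          # the immutable on-chain reference
    return digest

def erase(digest: str) -> None:
    """'Right to be forgotten': delete only the off-chain record.
    The on-chain digest survives, but now dereferences nothing."""
    off_chain_db.pop(digest, None)

ref = record("data subject: Zac, Amsterdam")
erase(ref)
assert ref in ledger and ref not in off_chain_db
```

Whether such erasure-by-dereferencing meets the Article 17 threshold is, as discussed below, precisely the interpretive question the law is being asked to settle.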

A glut of other innovative technical solutions has also been posited by code developers. These include state-tree-pruning, whereby smart contracts could be utilised to automatically remove irrelevant or unnecessary data from old blocks; chameleon hashes, which essentially serve as a means for blocks to be edited; and the μchain, a proposition for a mutable blockchain which would be able to hold alternative versions of a blockchain and ascertain which version is active through the use of consensus mechanisms (Dutta et al., 2020, pp. 7–8; Finck, 2018, p. 107). Thus, as occurred with internet architectures, despite narratives around rebellion and emancipation, we rather find seemingly incompatible regulatory architectures being massaged towards one another in pursuit of coherence.

Yet, for these proposed solutions to function, the law must realign itself so as to accommodate them. As "erasure" is not defined in the GDPR, and as, per Article 17(2), data controllers are to take "account of available technology and the cost of implementation" where the right to be forgotten is exercised, there has been much support for the law to interpret these provisions such that those proposed technological solutions which do not technically erase personal data, yet do render it significantly more difficult to access, might still be accepted as meeting a threshold for GDPR "erasure" (Finck, 2018, pp. 107–108).

Here, the limits of technology neutrality are exposed. Despite being adopted after blockchain technologies were in wide use, the GDPR is ultimately unable to accommodate blockchain technologies without significant dilution of one or both of the respective ideologies of the regulatory architectures: namely, the right to be forgotten and the immutability principle. While hashing might practically function as a means of storing personal data off-chain, it fails to solve how a data subject would be able to exercise their right to be forgotten for personal data that is immutably stored on a blockchain. This is a critical issue, given that the broad definition of "personal data" in Article 4(1) of the GDPR means that much data which is stored on blockchains, such as wallet addresses, is likely to be captured (Verma, 2018, p. 16). Similarly, state-tree-pruning, chameleon hashes, the μchain and many other proposed solutions can only reconcile the tension with the GDPR by either challenging the immutability of the blockchain by facilitating the editing of blocks, failing to actually erase the data from the blockchain such that it is unclear whether "the right to be forgotten" could be meaningfully exercised, or challenging that most sacred ideology of the blockchain – "trustless trust" – by relying on a third party for their administration (Finck, 2018, pp. 107–108). Here, the tension between the architectures is ideological, which again meaningfully exhibits the notion that architectures cannot be intrinsically agnostic, as they will always hold some ingrained belief systems, whether by conscious design or not. In turn, no architecture can hope to be perpetually and meaningfully universal in its application without needing to eventually cohere with another regulatory architecture.

3.2 Incoherent Means to the Same End

Yet, critically, even where expressly stated regulatory objectives are ostensibly in sync, the design of the architectures may still render the pursuit of these goals at odds with one another.
To exhibit this notion, we can continue examining the application of the GDPR to blockchain technologies. The first recital of the GDPR evokes both Article 8(1) of the Charter of Fundamental Rights of the European Union and Article 16(1) of the Treaty on the Functioning of the European Union in stating that it is a fundamental right that "everyone has the right to the protection of personal data concerning him or her".5 Similarly, the Self-Sovereign Identity (SSI) model seeks to grant users greater control of their personal digital identities through utilising blockchain technologies (Preukschat & Reed, 2021, p. 1). Although SSI models do not necessarily require blockchain technologies to function, blockchains are able to provide a critical solution as to how to construct a serviceable framework for ensuring trust in data integrity in a decentralised network.6

We might briefly outline the functionality of SSI models to explain this. SSI models enable users to manage their own digital identities, rather than relying on third parties. Through the use of cryptographic techniques, verifiable credentials (VCs) can prove the validity of a person's identity while maintaining privacy. For example, a user would not need to send their passport around to a number of different sources each seeking to confirm the user's identity, but could instead simply utilise VCs to provide cryptographic proof of a user's identity without another party ever needing to actually have access to the user's private information.7 Another perk of SSI models would be that a user would need only update any changed information about their identity, such as a change of address, one time, rather than re-enter this new information in every individual centralised database dependent on up-to-date identity details.8 By the same token, a user would be able to differentiate separate digital identities for separate contexts, all of which would be immune to any central authority removing them or altering them.9

Seemingly, the ideologies inherent to SSI models directly align with the ideologies of the GDPR in their pursuit of assisting users in maintaining digital sovereignty over their personal data (Herian, 2020, pp. 2–3). However, the functionality of the architectures themselves is broadly incompatible. The GDPR seeks to facilitate user control by rendering "controllers" of personal data accountable to the subjects of the data they control.10 However, a data "controller" is defined in the GDPR as "a natural or legal person … [who] determines the purposes and means of the processing of personal data". Given the significant obligations that the GDPR then places upon "controllers" towards the data they control, it is apparent that the GDPR's drafting was primarily conscious of relatively centralised data systems, whereby a person or persons could exhibit meaningful control over a subject's data (Blockchain and the General Data Protection Regulation: Can Distributed Ledgers Be Squared with European Data Protection Law?, 2019, pp. I–II). Thus, where data is managed upon systems that are decentralised, as those built upon blockchain technologies are to varying degrees, ascertaining entities who could be considered as GDPR-defined "controllers" who "determine the purposes and means" of personal data can be difficult (Blockchain and the GDPR, 2018, pp. 24–25). This is especially true of public permissionless blockchains, as they are distributed peer-to-peer networks that anyone with the means to access them can participate in. As the data is shared across each of the nodes, it may be that either every individual node would be considered a data "controller" for the purposes of the GDPR or none of them (Berberich & Steiner, 2016, p. 424; Finck, 2018, p. 100).11 Neither option is desirable.

It is not surprising then that much of the regulatory guidance provided to cohere blockchain technologies with the GDPR has focused on private permissioned blockchains, which, given the variability of governance structures that they are able to support and their self-contained infrastructures, allow for greater architectural control. They therefore have the potential to be more amenable to existing laws drafted with the internet in mind, provided they are designed compliantly. Within private permissioned blockchains, data management roles and responsibilities may be clearer and notions of data control may be more readily determined, such that the designation of data "controllers" is possible, unlike in public permissionless blockchains (Van Eecke & Haie, 2018, p. 532). As such, the pursuit of regulatory harmony between the law and blockchain technologies has led to recommendations and guidance that avoid reconciling public permissionless blockchains with the GDPR and instead champion the design of GDPR-compliant private permissioned blockchains which have clearly designated user roles from the outset to assist with GDPR interpretation.12

In essence, such guidance seeks to support the application of the GDPR, and as such to assist users in holding greater digital sovereignty over their personal data, through recommendations to further concentrate and centralise blockchain technologies' architectures. Yet SSI models, in pursuit of that same regulatory motive of the GDPR – the facilitation of user data sovereignty – rely on the exact opposite: namely, the integrity of the data being verified through its decentralised architecture.13 Indeed, blockchain technologies do not themselves inherently verify that a user's identity is legitimate, nor do they inherently provide any security to the user. Rather, the quality of these attributes is directly linked to how decentralised the blockchain architecture is. Thus, calls to design more centralised architectures directly undermine blockchain technologies' utility for SSI models, and are at odds with an intrinsic ideology of the blockchain technologies themselves – that decentralised architectures safeguard data integrity.

As such, the pursuit of regulatory coherence between architectures can exhibit fundamental ideological incompatibilities, even when in pursuit of the same regulatory objective. In such an instance, the ideological purity of one or both of the architectures must dilute in order to create alignment and cohesion between the architectures. Where architectures are better valued disparately and incoherently than when coherently aligned, it may come to pass that each continues in a separate regulatory reality, coexisting in feigned ignorance of the other and creating separate simultaneous regulatory narratives. Indeed, as increasingly more architectures are coming into being, we might contemplate whether there is a ceiling as to how many architectures can cohere with one another. If they are unable to, we may find ourselves in a fragmented regulatory world, where regulatory incoherence allows for a wide array of alternative regulatory domains.

5  Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (2018).
6  See The European Union Blockchain Observatory and Forum. (2019). Blockchain and Digital Identity. Retrieved from https://www.eublockchainforum.eu/sites/default/files/report_identity_v0.9.4.pdf, p. 16.
7  Ibid., p. 13.
8  Ibid., p. 17.
9  Ibid., p. 14.
10  Art. 4(7), Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (2018).
11  Berberich & Steiner: 424; Finck: 100.
12  See European Parliamentary Research Service. (2019). Blockchain and the General Data Protection Regulation: Can distributed ledgers be squared with European data protection law? pp. IV–V.
13  See The European Union Blockchain Observatory and Forum. (2019). Blockchain and Digital Identity. Retrieved from https://www.eublockchainforum.eu/sites/default/files/report_identity_v0.9.4.pdf, pp. 12–14.
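To make the verifiable-credential mechanics described in this section concrete, a toy exchange might look as follows (a hedged sketch only: it assumes the third-party Python cryptography package, and the claim format and identifiers are our own inventions rather than any SSI standard):

```python
# Toy verifiable credential: an issuer signs a minimal claim; a verifier
# checks the signature against the issuer's public key. The underlying
# passport or identity document is never shared with the verifier.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()   # e.g. a passport authority
issuer_pub = issuer_key.public_key()        # published for any verifier

claim = b'{"subject": "did:example:zac", "over_18": true}'
signature = issuer_key.sign(claim)          # the "credential" the holder keeps

# The holder presents (claim, signature); verification needs only the
# issuer's public key and raises InvalidSignature if the claim was altered.
issuer_pub.verify(signature, claim)
print("credential verified")
```

In a full SSI deployment, the issuer's public key would typically be anchored on a decentralised ledger so that no central registry must be trusted – which is exactly the decentralisation that the guidance towards permissioned designs trades away.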



4. THE REGULATORY MULTIVERSE AND THE LIMITS OF COHERENCE

Where there are too many incompatible architectures coexisting, new architectures, usually in the form of legislation, are created to try to cohere pre-existing architectures into ideological harmony. Much of this tension often exhibits itself around ideologies concerned with liberty, control and security, and their respective limits. In the mutual pursuit of harmony, architectural ideological boundaries are tested and massaged to cohere with one another. Where there is tension, some ideological dilution or transformation will be necessary to achieve this. However, if the requisite transformation for harmony would critically undermine the utility of the architecture, or where both architectures' ideologies are disparately valued despite being inconsistent, it may be that the regulatory architectures are allowed to coexist despite their incompatibility. Naturally, the existence of disparate regulatory architectures that cannot cohere with one another creates a sense of regulatory confusion, with the potential for lack of clarity as to which will prevail under certain circumstances.

4.1 Regulatory Responses to Emergent Incoherence

Technological architectural incompatibility with legal regulatory infrastructures has been referred to by De Filippi, Mannan and Reijers in their notion of "alegality by design", wherein the design of a technology's architecture "enables acts that transgress the temporal, spatial, material and subjective boundaries of the law" (De Filippi et al., 2022, p. 5). Alegality can be distinguished from illegality, which "is not concerned with alternative legalities, but rather with reinforcing existing legal boundaries" (De Filippi et al., 2022, p. 8). Although the previous section used a specific example relating to blockchain technologies to exhibit architectural tension, there are any number of architectural ideological tensions developing between pre-existing law and emergent technologies. The rapid speed with which artificial intelligence technologies are being integrated into day-to-day use is also responsible for a vast number of tensions between shifting technological architectures and the laws that seek to regulate them.

New legislation in these instances seeks to serve as an instrument of coherence. Often this is abetted by the replacement of old legislation. In doing so, architectural boundaries can be redrawn to remove confusion, and ideologies can be amended to better align. The rapidly shifting technological architectural landscape has therefore precipitated a torrent of new legislation, especially in Europe, such as the European Data Act, the European Data Governance Act, the AI Act, the AI Liability Directive, the Digital Markets Act and the Digital Services Act, among many others, seeking to create greater coherence between technological and legal architectural ideologies. For example, the AI Liability Directive hopes to cohere the ideologies inherent in artificial intelligence architectures, civil liability architectures and internal market architectures, by amending an uncertain regulatory landscape (developing a clearer means by which liability can be ascribed in relation to AI architectures) in an attempt to protect the functioning of the market (by reducing international legal uncertainty), while hoping to cohere AI architectural design with the functionality of the new regulation (by determining means of liability for different designs of AI system architectures, as defined in the AI Act). Thus, it incentivises certain architectural designs and certain uses of those architectures.

However, while new legislative architectures are intended to serve as instruments of coherence, new technological architectures can challenge this coherence. This means that the more regulation there is seeking to cohere, the more there is to challenge. As outlined in the last section, even regulation that is ostensibly technology-neutral can only be drafted to the architectures that it is conscious of. Modes of operation understood to be universal may no longer apply in new technological architectures. Thus, just as blockchain architectures are alegal by design, seeking to cohere these architectures with existing legislative architectures is far more difficult and requires far more ideological dilution where there is a massive amount of legislation that is in tension with the technology. As such, a deluge of legislation seeking to create greater cohesion between the law and technologies may in fact create the least cohesive regulatory landscape where a new technological architecture is incompatible with a deeply sophisticated and detailed regulatory framework. The greater the level of detail, the more likely it will be that the law will be confused in trying to achieve its objectives when applied to the technological architecture.

This is far from a new issue, yet traditionally cohesion is found through the replacement of one of the architectures, either the legislative or the technological, with one that is more coherent. What is striking about the current advent of emergent technologies, such as artificial intelligence and blockchain technologies, is their coexistence with pre-existing architectures, such as internet technologies. While it remains to be seen whether this will stay the case, or which technologies will remain in usage and which will disappear, we are faced with the possibility that the exponential rate at which technologies are being developed may lead to a future wherein a vast number of technological architectures, each challenging regulatory coherence upon their arrival, coexist in their own regulatory narratives. Such a landscape may be fundamentally fragmented in the ideologies it seeks to uphold, as different legislative means must be relied upon for different architectures, despite their pursuit of potentially similar objectives.

De Filippi, Mannan and Reijers argue that the best means of responding to the alegality of blockchain architectures would be through "regulatory sandboxes", so as to examine whether they can "comply with existing regulatory requirements (functional equivalence) or provide equivalent types of safeguards/guarantees to promote existing policy objectives (regulatory equivalence)". They posit that it is through this notion of "inclusion by exclusion" – whereby "the legal order is simultaneously expanding its scope to encompass these activities, and committing to not interfering with these activities" – that we are able to ascertain how to cohere a technological architecture with the regulatory architecture (De Filippi et al., 2022, pp. 18–19).14 However, such a notion is dependent on transience. If the sandbox supports the functionality and ideologies of the technological architecture, yet the regulatory architecture constructed around other technologies is too sophisticated and functional in that sophistication to be easily amended to the sandbox, then the sandbox may quietly persevere.

14  The notion of "inclusion by exclusion" is taken from Agamben (1998).

4.2 Non-Replacement and Regulatory Chaos

Where two or more seemingly incompatible architectures coexist independently, it is not until an event occurs that inspires demand for coherence between the architectures that the tensions may be reconciled by the prevailing authority, usually the courts.
However, where means of coherence are not obvious, or where they must entail a dilution of an architecture's ideology or functionality that is undesirable, it may only be the most severe of events that would precipitate such demand. We might consider any number of events related to artificial intelligence and blockchain architectures in recent years that one may have expected would lead to intervention to clarify seeming regulatory incompatibility. For example, when millions of dollars of ETH were stolen in the DAO Hack, a number of questions of regulatory coherence were presented in ascertaining the legal character of the DAO as an entity as well as its eponymous tokens, all of which would need to be clarified for the legal regulatory architecture to meaningfully enforce the law. Instead, coherence was broadly avoided, as the community around the technological architecture privately enforced their own justice rather than relying on the state. In turn, the case was never heard before any court (De Filippi et al., 2022, p. 14). Thus, the architectures continued to coexist, with many questions of coherence left unanswered, and coherence itself circumvented. The respective architectures were too highly valued to risk their transformation in pursuit of coherence. As law around securities and currencies and artificial legal entities has been developed without an understanding of "decentralised autonomous organisations" or their tokens, its application to such fundamentally different things does not necessarily achieve the objectives of the legal architecture and may cause any number of undesired consequences.

Yet, the coexistence of incompatible architectures cannot persevere indefinitely, lest we live in a state of profound regulatory confusion. Talk of the law "catching up" rests on this notion of eventual coherence. Much of this coherence comes through the replacement of the legal architecture or the technological architecture. Although the former is more common (for example, in the form of new legislation), the latter is critical in avoiding simultaneous incompatible regulatory narratives. If there is no technological architectural replacement (or lack of replacement, in the form of the emergent architecture dropping off), and the law is unable to regulate all of the architectures meaningfully simultaneously, then we have coexistence and regulatory confusion. If it comes to pass that there is a sufficiently large number of technological architectures coexisting, each of which fails to cohere with the broader regulatory architecture, then we are in the undesirable situation of being within a regulatory multiverse, and as such, a form of hyper-confusion, or regulatory chaos. For example, if the regulatory architecture is too co-dependent with internet architectures and is therefore reluctant to cohere with an emergent architecture, such as the blockchain, there will be some form of prolonged regulatory coexistence. If another architecture then emerges, which allows for entities, concepts and fictions further abstracted from those understood in our former regulatory architectures, then the multiverse is developing. If this were to happen over and over, we would reach regulatory chaos.

4.3 The Limits to Incoherence

Such a notion runs counter to the commonly expressed orthodoxy that we need not fear emerging technologies, as they will never replace that which we already have. In recent years, such a sentiment was often heard in relation to a variety of uses for blockchain architectures never replacing those architectures already widely used. On the contrary, it is this non-replacement that would render them dangerous.
Much of the regulatory coherence that came about between the legal and internet architectures was precisely possible due to the internet's replacement of former technologies, allowing for regulatory structures to be torn apart and rebuilt around the newly relevant architecture. Rather, it is only through a number of different architectures coexisting incoherently that regulatory chaos ensues. It is in fact the opposite of widespread usage which is to be feared. User groups of millions utilising one or two architectures are far easier to regulate than user groups of hundreds utilising hundreds of incoherent architectures.

There are a number of seeming limits that could be argued to render such an eventuality unlikely. One would be a limit of innovation. Here, one might argue that technological architectures that fundamentally challenge our prevailing regulatory landscapes are unlikely to come into significant usage often enough for many to coexist incoherently. However, we might consider that, as evidenced earlier, the more that the regulatory architecture coheres with existing technological architectures, the easier it is for an emergent architecture to be incoherent. With increasing technological design sophistication, it may not be quite as difficult to design architectures which do not cohere with current regulation.

This leads us to another relevant limit – a limit of utility. Here, it may be argued that it would be unlikely for too many architectures to coexist, as there are only so many functions of utility that they might be capable of providing. If an emergent architecture is unable to provide a disparate utility to one already in existence, it seems intuitively unlikely to find any type of significant usage. Thus, even innovative architectures that fail to provide new utility are likely to revel in obscurity. However, we might consider the inherent utility of incoherence. If an architecture cannot meaningfully cohere with other architectures, then it can be exploited. In such an instance, those designs with the most abstraction from regulatory norms have the potential to provide the most utility. In such instances, obscurity may work in such a design's favour.

We may find in the coming decades that increased tech-savviness across society presides over a greater influx of architectural development. While certain architectures will centralise and cohere with the law, the greater this coherence, the easier the development of incoherent architectures. If too many incoherent architectures coexist, the challenge for the law will be to meaningfully cohere these architectures with its own ideologies without undermining the coherence it is reliant upon for order.

5. CONCLUSION

Despite the cyberutopian belief in the inherent sovereignty of internet architectures, the past few decades have rather presided over an ever-increasing coherence between the internet and the law. Today, not only is the internet the very antithesis of a space of lawless anarchy, but the law itself finds itself increasingly dependent upon the internet. As courts have sought to ameliorate stress on the judicial systems, internet architectures have been able to provide efficient mechanisms to offset some of the workload, such as through private internet dispute resolution mechanisms. Given that the outcomes of such mechanisms may fail to mirror those where the dispute is presided over by a court, as they need not necessarily even aspire to those same outcomes, this outsourcing of function has profound implications for the regulatory landscape. Regardless, coherence between technological and legal architectures is growing stronger. Given the fundamental differences in the design of these architectures, dependence upon technologies in the administration of justice decidedly transforms its functionality, and therefore its intrinsic ideologies. As the regulatory architectures of the internet and other technologies have become ever more intertwined with those of the law, they are beholden to one another, with neither holding clear dominance.



However, the more that these architectures cohere with one another, growing ever more sophisticated and detailed in their coherence, the easier it is for an emergent architecture to be incoherent. We contend that the increasing sophistication of technological architectures creates unique regulatory challenges for the law. These challenges do not come from an inability of these architectures to be regulated. On the contrary, the law and the technologies will pursue coherence with one another. Nor do the challenges come from emergent technologies being radical in their means of changing how we interact, or how we used to interact. On the contrary, if these technologies were radical enough in their replacement, the law could re-cohere itself with them. Rather, it is in their non-replacement of architectures which the law has already integrated into its own that the regulatory landscape is challenged. As the pre-eminent architectures develop co-dependence, they become resistant to new entrants. This may have the undesired consequence of multiple incoherent architectures coexisting in feigned ignorance of one another, each developing its own simultaneous regulatory narrative. Judgements that would seek coherence between the architectures may be avoided, for fear that any such motion may dilute or undermine their functionality, or in order to preserve their respective architectural ideologies. If this were to occur repeatedly, we may find ourselves faced with a kind of regulatory multiverse, wherein a centralised controlled framework distracts from a peripheral landscape where fragmentation and incoherence abound.

Although such a future is likely limited by the requisite innovation or utility of emergent technologies, we note that the inherent utility of incoherence may be exploited by intelligent design that takes advantage of the abstraction of norms within innovative architectures, such that they cannot be meaningfully regulated by existing law. As the law becomes increasingly co-dependent with the internet and emergent technologies, we should consider how this increasing coherence allows for easier incoherence elsewhere. It may be through the law's increasingly happy marriage to the internet, in all of its co-dependence and ideological coherence, that true emancipation from its grasp is more possible. We must therefore remain creative in our predictions of the future and not rely upon belief in the linearity of the development of technology or the law, lest we develop systems unmalleable and easily exploited.

BIBLIOGRAPHY

Agamben, G. (1998). Homo Sacer: Sovereign Power and Bare Life. Stanford: Stanford University Press.
Artzt, M. (2020). Identifying Controllers and Processors in a Blockchain Environment in the Light of GDPR. International In-House Counsel Journal, 13, 1.
Barlow, J.P. (1996). A Declaration of the Independence of Cyberspace. Electronic Frontier Foundation. Retrieved 12.12.2022 from https://www.eff.org/cyberspace-independence.
Berberich, M. & Steiner, M. (2016). Blockchain Technology and the GDPR – How to Reconcile Privacy and Distributed Ledgers. European Data Protection Law Review, 2, 422.
Carlson, A.M. (2017). The Need for Transparency in the Age of Predictive Sentencing Algorithms. Iowa Law Review, 103, 303.
Daoui, S., Fleinert-Jensen, T. & Lemperiere, M. (2019). GDPR, Blockchain and the French Data Protection Authority: Many Answers but Some Remaining Questions. Stanford Journal of Blockchain Law & Policy, 2, 1.
De Filippi, P., Mannan, M. & Reijers, W. (2022). The Alegality of Blockchain Technology. Policy and Society, 41(3), 358–372.
Dutta, R., Das, A., Dey, A. & Bhattacharya, S. (2020). Blockchain vs GDPR in Collaborative Data Governance. In Cooperative Design, Visualization, and Engineering: 17th International Conference, CDVE 2020, Bangkok, Thailand, October 25–28, 2020, Proceedings 17 (pp. 81–92). Springer International Publishing.
Easterbrook, F.H. (1996). Cyberspace and the Law of the Horse. University of Chicago Legal Forum, 207.
Eichler, N., Jongerius, S., McMullen, G., Naegele, O., Steininger, L. & Wagner, K. (2018). Blockchain, Data Protection, and the GDPR. Blockchain Bundesverband eV, Tech. Rep. Retrieved from https://jolocom.io/wp-content/uploads/2018/07/Blockchain-data-protection-and-the-GDPR--Blockchain-Bundesverband-2018.pdf.
The European Union Blockchain Observatory and Forum. (2019). Blockchain and Digital Identity. Retrieved from https://www.eublockchainforum.eu/sites/default/files/report_identity_v0.9.4.pdf.
Finck, M. (2018). Blockchain Regulation and Governance in Europe. Cambridge: Cambridge University Press.
Fisher, D.R. & Wright, L.M. (2001). On Utopias and Dystopias: Toward an Understanding of the Discourse Surrounding the Internet. Journal of Computer-Mediated Communication, 6(2), JCMC624.
Greenberg, A. (2016). It's Been 20 Years Since This Man Declared Cyberspace Independence. Wired. Retrieved 14.12.2022 from https://www.wired.com/2016/02/its-been-20-years-since-this-man-declared-cyberspace-independence/.
Herian, R. (2020). Blockchain, GDPR, and Fantasies of Data Sovereignty. Law, Innovation and Technology, 12(1), 156–174.
Koulu, R. (2019). Law, Technology and Dispute Resolution: The Privatisation of Coercion. London: Taylor & Francis.
Lessig, L. (1999). The Law of the Horse: What Cyberlaw Might Teach. Harvard Law Review, 113(2), 501–549.
Lodder, A.R. (2013). Ten Commandments of Internet Law Revisited: Basic Principles for Internet Lawyers. Information & Communications Technology Law, 22(3), 264–276.
Mannan, R., Sethuram, R. & Younge, L. (2019). GDPR and Blockchain: A Compliance Approach. Int'l J. Data Protection Officer, Privacy Officer & Privacy Couns., 3, 7.
Mulcahy, L. (2013). The Collective Interest in Private Dispute Resolution. Oxford Journal of Legal Studies, 33(1), 59–80.
Preukschat, A. & Reed, D. (2021). Self-Sovereign Identity. New York: Manning Publications.
Roberts, S. (2009). 'Listing Concentrates the Mind': The English Civil Court as an Arena for Structured Negotiation. Oxford Journal of Legal Studies, 29(3), 457–479.
Van Eecke, P. & Haie, A.-G. (2018). Blockchain and the GDPR: The EU Blockchain Observatory Report. European Data Protection Law Review, 4, 531.
Verma, B. (2018). Blockchains in the GDPR Era. Int'l J. Data Protection Officer, Privacy Officer & Privacy Couns., 2, 15.
Wu, T. (2016). Strategic Law Avoidance Using the Internet: A Short History. Southern California Law Review Postscript, 90, 7.

5. Formalising law, or the return of the Golem
Burkhard Schafer1

1. INTRODUCTION

Artificial Intelligence applications for law, for many decades (perceived as) a niche pursuit of academic researchers with a dearth of commercial success stories, have recently begun to capture the public imagination. Riding sometimes on the coat-tails of headline-grabbing advances of AI in other fields, such as DeepMind's victory over the South Korean Go master Lee Sedol, the possibility of a "robo-judge" seems for many an inevitable and desirable future application of the technology (see, e.g. Addady, 2016; Mills, 2016).2 Some of the headline-grabbing applications promise significant improvements for the administration of justice, in particular improved access to justice. DoNotPay, for instance, a legal chatbot, helped to contest 160,000 parking tickets in London and New York for free3 (Sparkes, 2023, p. 8). If it were possible to "scale" this type of application across a broader range of legal disputes, we could see people historically excluded from professional legal advice on grounds of cost, complexity or other socio-cultural barriers being able to enforce their rights much more systematically.

At the same time, there is persistent concern that "robo-judges" could harm the justice system, not just by amplifying systematic biases and prejudices, but through an illegitimate power grab by software developers usurping the role of legislators and judges. AI developers create de-facto new laws without the accountability and contestability that the traditional legislative process provides, and potentially also "design out" the human, empathetic element in legal decisions (for an overview see Hildebrandt, 2015). As the judges in the US case of Keppel v. BaRoss Builders put it: "Above all, it showed that a judge is a human being, not the type of unfeeling robot some would expect the judge to be".4

A recent edited collection by Deakin and Markou (2020) crystallised this unease by asking the question: is law computational? This chapter will contribute to this discussion through the lens of the formalisation of the law. It discusses if, and if so to what degree, law is amenable to formalisation of the type found in a broad range of legal technologies, or legal AIs. It will argue that while the question is much older than current interest in legal technology, and in many ways as old as law itself, it is nonetheless misleading.

1  Work on this chapter was supported by EPSRC grant EP/T027037/1 AISEC: AI Secure and Explainable by Construction. I benefited greatly from my discussions with Laurence Diver and Pauline McBride, though all mistakes are my own.
2  See Law Society of England and Wales (2018). Artificial Intelligence and the Legal Profession. Retrieved from https://www.lawsociety.org.uk/policy-campaigns/articles/artificial-intelligence-and-the-legal-profession-horizon-scanning-report.
3  https://donotpay.com/
4  Keppel v. BaRoss Builders, Inc., 509 A.2d 51, 56.


Legal AI or computational law came historically with both a promise and a conception of justice: if law could be applied with the cold rationality of a machine, following nothing but the logic of the rules, justice would be enhanced by reducing arbitrary, untransparent and discriminatory decisions. It would enhance legal decisions by eliminating the impact of human biases, the limitations of our memory, our short attention span or our often failing reasoning capacity. This vision of law as inherently computational, and that of the ideal judge as an automaton, a "mere voice of the law", predates computers by several centuries. The Enlightenment had developed the idea of a clockwork universe, governed by strict and mechanical rules that guarantee predictability, and with that ultimately also human control. With the new mechanical worldview came also a new capability to build automata. Technical skills merged with philosophical reflection in the work of René Descartes. Descartes (in)famously suggested that (non-human) animals are nothing more than complex machines. Thus, the mechanism became the standard prism through which to see nature and organisms.

No-one, though, took the automata model and applied it to humans more than Julien Offray de La Mettrie. La Mettrie had previously advanced a proto-evolutionist argument that saw humans and animals as closely related, the former merely using a somewhat more complex mechanism. That mechanical argument took centre stage in L'homme machine (Mettrie, 1748/1994). Crucially for our discussion, La Mettrie explicitly mentions the human ability to make ethical and legal judgements. In his account, legal reasoning that applies rules to facts is ultimately not different from any other attempt to reason about the world: "To be a machine, to feel, think, know good from evil like blue from yellow" (Mettrie, 1748/1994, p. 71). Colour recognition and moral discernment are equally within the capacity of deterministic machines; both are nothing but mechanical responses to material inputs. Again La Mettrie: "Even if man alone had received a share of natural law, would he be any less a machine for that? A few more wheels, a few more springs than in the most perfect animals" (Mettrie, 1748/1994, p. 72; see also Campbell, 1970; for an application to law, Thomson, 2016).

It is important to note that La Mettrie's vision of the ideal judge was born out of a normative agenda: with incompetent, corrupt, biased and cruel judges a lived experience for him and his contemporaries, the idea of a mechanistic application of the law became liberating. By putting ourselves under the law, our lives become plannable and predictable, just as the laws of nature allow planning and prediction. This mechanistic vision of the law subsequently dominated legal theory, especially in continental Europe, during the 19th and early 20th century. "Mechanical jurisprudence", legal formalism and the conception of the judge as a mere passive mouth of the legislator became the dominant legal ideal.

A century later, the confluence of technology with a specific vision of justice again promises, if not an end, then at least a much-reduced role for human lawyers (Susskind, 2008). After initial enthusiasm in the legal expert systems of the 1980s was interrupted by a short "AI winter", we are seeing a resurgence of interest in "law tech", the idea to assist, or maybe even to replace, lawyers and judges by automata (see, e.g. Sourdin, 2018; Ulenaers, 2020).

This chapter will not focus so much on the capabilities of these systems, or their capabilities relative to a given body of legal knowledge. A comprehensive overview of their recent history can be found in Bench-Capon et al. (2012). Rather, it looks at an often-neglected aspect of developing (legal) AI: the early steps of the development cycle where decisions are made on which aspects of a law to formalise, which language to choose for representation, how to document and justify these decisions, and also how to document any risks that these design choices can create. This links the technical discussion on formalising and computerising law with the emerging discussion on the ethical and regulatory aspects of AI in general, and legal AI in particular, such as the AI4People principles on responsible legal technology that the author helped to draft (Schafer et al., 2020). In particular, it aims to sensitise the reader to the human element, and the normatively salient decisions, that often invisibly feed into the development of legal technology. The idea of the neutral, logical, mechanistic AI judge fails, or needs to be treated with caution, because it can hide the all too human aspects that went into its design. Currently, under the label of "black box society", this problem is discussed in the machine learning environment as a problem of the algorithm and its training data itself. By contrast, this chapter will look into the design process, and a different type of "black box" – not impenetrable algorithms, but design decisions taken behind closed doors when laws are formalised.

In the next section of the chapter, some key technical terminology is introduced, using a small number of case studies as illustrative examples. In particular, we will look at recent attempts to formulate road traffic laws in a way that makes automated vehicles (AVs) law-compliant by design. The dual aim of that section is to introduce some basic concepts of legal AI and legal formalisation, but also to alert the reader to the normative decisions that the programme developer has to make. In the second section of the chapter, we will go back in time, to the oldest reflections on the mechanical nature of laws and rule-following. In particular, we will use the ancient myth of the Golem to illustrate and elucidate some of the issues that we face when thinking of the law as a computational artefact. In the final section, we return to the Golem, and ask what normative conclusions we can draw from this discussion, and how we can start developing a more responsible approach to legal AI and legal formalisation.

The Golem is chosen as a lens partly because the myth was always also one of law. We learn about the original Golem in the "cases and materials" of rabbinic law, the Talmud; its reported maker was an accomplished lawyer and law reformer, Abba Ben Rav Hamma. Moreover, in the subsequent Golem stories, we also see many of the tropes that still inform the discussion on autonomous systems and legal technology today. The Golem was built to abide by commands and to follow rules, and therefore would have required some internal representation of these rules that "make sense in Golem" and are executable by it. Today, autonomous systems such as self-driving cars have become a new "audience" for legal rules, and we encounter again the idea to programme legal rules directly into their governing algorithms.

This approach will allow us to make a number of interrelated arguments:

1. The question of whether law "can" be formalised is misleading and in its simplicity potentially dangerous. Rather, we should ask: for a given normative conception of justice and a vision of a good legal system, how does a specific formal language and a specific approach to formalisation enhance or hinder achieving this vision in a specific intended application?
2. "Formalising" law is a process of translation, not dissimilar from any other translation between natural languages.
This means among other things that every formalisation of the law is also an interpretation that will at best be “faithful to a degree” to the original – “traduttore tradittore”, the translator is also always a traitor. But while translation studies have developed systematic, detailed and comprehensive rules on how to translate best, there is comparatively little research done about the process of

62  Research handbook on law and technology formalisation. One attempt at such a systematic theory of formalisation was made by Georg Brun (2003), but it too was mainly a discussion of known problems rather than a systematic attempt at resolving them. 3. As a consequence, the “translator”, i.e. the programmer who formalises law, inevitably has to make choices. Some of these choices will have consequences that are normatively salient and affect either individual citizens or (our perception of) the functioning of the legal system as a whole. This means the question of appropriate formalisation and the ethics of legal technology are intertwined. In this respect too, legal formalisation can learn from translation studies, where discussions about the ethics of translation have become a mainstay of the meta-methodological debate since the 1980s (see, e.g. Baker & Maier, 2011; Chesterman, 1997). 4. One emerging candidate that seems to sidestep the problems associated with legal formalisation, “law as code”, i.e. the proposal to enact machine executable versions of legislation through the normal parliamentary process, mitigates some of the problems, but also creates new ones.

2. A SHORT PRIMER IN LEGAL FORMALISATION

While most of the current interest in, and concerns about, legal AI centre on machine learning (ML) approaches, this chapter focuses on examples of "good old-fashioned AI" (GOFAI), the symbolic-manipulation paradigm that rose to prominence in the 1980s but is still the legal knowledge representation method behind a number of the most high-profile and popular law tech apps today. The reasons for this choice are twofold. The first is that while ML-based approaches to legal technology have received most of the public attention, a significant number of systems actually in use have at their core a GOFAI representation of legal knowledge, or use GOFAI as "guardrails" to enforce law-compliant behaviour of the underlying ML system. Second, there is a widespread misconception that GOFAI systems are less risky because of their inherent transparency, or that they do not pose the same regulatory and ethical challenges as data-driven approaches. One aim of this chapter is to argue that this perception is mistaken, and that at key stages of the development of GOFAI legal technology too, design decisions are taken that can adversely affect citizens and their rights, and that are shielded in the design process from public scrutiny and legal contestability just as much as the "black box" systems of ML.

As a first step, we now introduce some key concepts and ideas that are needed as background to follow the discussion. Despite differences in detail that we will discuss below, a GOFAI will have:

a) a formal language into which the knowledge expressed in natural language has to be translated;
b) a set of inference or rewriting rules that tell us how to derive an output, the "inference engine".

A formal language means an explicitly defined alphabet together with "grammar rules" that tell us how to combine symbols from the alphabet into longer "well-formed" expressions. The alphabet contains a set of symbols for logical constants (examples are "if-then" or "or") which have a fixed, explicit meaning, and a set of non-logical symbols such as "F" or "p", parameters for external objects and properties such as "driving fast" or "Peter". The grammar, or formation rules, then tell us that we can combine, for instance, the symbol p with the symbol F to form F(p) as a sentence in that language, here with the intended meaning, or interpretation, "Peter is driving fast". We will write such an intended interpretation as: F: driving fast; p: Peter. This interpretation depends entirely on the context, and F(p) could also stand for "2 is even" or "Cyanide is healthy". Some symbols, however, keep their meaning across all contexts: the logical constants such as → ("if-then") or ∀ ("for all"). Their meaning is defined explicitly, for the "if-then" through a truth table that tells us for all possible combinations how the truth value of the complex "if-then" sentence can be derived from the truth values of its component parts – a sentence of the form "if A then B" is false if and only if A is true while B is false, regardless of the content of A and B. For a logic-based AI, only these constants have meaning, so that we can think of First Order Predicate Logic (FOPL) also as the theory of the meaning of the words "all", "none", "if-then" and "or". This is important to remember as an antidote to the "Eliza effect": a legal chatbot may give us the impression that it "understands" legal concepts such as property, crime, intent, or contract, by using these terms correctly in its answers. In reality, the AI "understands" just the logical constants and treats the non-logical terms as meaningless strings of symbols.

The choice of formal language prejudges how much of the natural language sentence we can analyse, and how much we have to leave unanalysed in the parameters. If our language is FOPL and the sentence we are interested in is "Peter should drive fast", there is no direct way to express the meaning of "should", and we then have to assign the sentence the same structure as above, but with a different intended interpretation:

F(p)

with F: should drive fast; p: Peter.

However, logicians soon discovered that terms such as "should" or "must not" have invariant properties similar to those of the sentence connective "if-then" or the quantifier "for all" – an invariant meaning that can be made explicit and formally captured. So if, in a given application, it is desirable to "unpack" the meaning of deontic operators such as "should", it may be better to choose a richer language, a language of deontic logic, where additional symbols for the new logical constants like "should" or "is prohibited from …" are introduced and given a fixed and explicit meaning. Similar decisions can be made for terms like "after" and "before", leading to temporal logics (Mackaay, 1990), "necessarily X" and "possibly X", leading to alethic modal logics, and "believes that X" and "doubts that X", leading to epistemic modal logics (for an overview, see Prakken & Sartor, 2002). These, crucially, are methodological choices that are not "right" or "wrong" as such, but rather "useful" or "not useful" for a given intended application (Gabbay, 1992). This gives us a first hint of the creative choices that are available to the programmer. Like all choices, they can carry normative consequences and ethical or legal obligations: who makes the choice, on what authority, how can they be justified if contested, and how do we know they were "good" choices?
These questions are the ultimate focus of this chapter, as an often overlooked but crucial aspect of the question of how we should regulate AI in general and legal technology in particular.

While in the example above the different languages had different expressive powers, typically "adding to" classical logic, some formal languages that have been designed specifically for legal users are "doing the same thing" as a general-purpose language such as PROLOG. Examples include PROLEG (PROlog-based LEGal reasoning support system, Satoh et al., 2011) or Catala (Merigoux et al., 2021). Their aim is to make the process of formalisation, and also the checking for correctness, easier and more intuitive for lawyers who may be lacking the time, skill or experience to learn a "multipurpose" language. Here too the choice can be normatively salient, for instance to meet the transparency duties of the developers: can they design the system in such a way that a domain expert who is not also a computer scientist can check the way the system is working? In that case, we can ask whether there should be a legal duty to use, of all the available and equivalent formal languages, the ones that are most intuitive. This would allow a greater number of citizens, with little or no formal training, to scrutinise the formalism and to contest, if appropriate, its adequacy.

In addition to having a formal language that regiments how expressions are formed and knowledge is represented, an AI also needs a way to "do things" with these sentences. That is the role of the inference engine, which prescribes how one string of symbols (the input) can be rewritten as another string of symbols (the output). Intuitively, in legal reasoning contexts we often think of this as an argument, where we move from a set of premises (input) to the conclusion (output), but the output can also be an answer to a question, or an instruction to a machine to perform an action (e.g. to lower the speed of the car in response to a sensor input). A legal GOFAI then, following an influential definition by Trevor Bench-Capon, is an AI where some or all of the formulas that represent legal knowledge computationally are isomorphic to a corresponding legal norm in natural language. That means that their syntactic structures correspond to each other (Bench-Capon & Coenen, 1992). Let us illustrate this idea by looking at rule 152 of the UK Highway Code (https://highwaycode.org.uk/rule-152/): "You should drive slowly and carefully on streets where there are likely to be pedestrians, cyclists and parked cars". To give a formal account of this sentence, we would first rephrase it as a rule: "If there are pedestrians, cyclists and parked cars, then the driver must drive slowly and carefully". We can then formalise this sentence in FOPL as

∀x (D(x) ∧ ∃y (P(y) ∨ C(y) ∨ PC(y)) → SDS(x))
with D: drives; P: is a pedestrian; C: is a cyclist; PC: is a parked car; SDS: should drive slowly. Read out, this formula states roughly: for everyone, it holds that if they are driving, and there are pedestrians, cyclists or parked vehicles, then they must drive slowly. This seems a good approximation of the legal norm that we try to model, at least at first sight. If we now have another sentence, one that says that Peter was indeed driving and there were pedestrians (or cyclists …)

D(p) ∧ ∃y (P(y) ∨ C(y) ∨ PC(y))

we can derive, by the inference rule of modus ponens, that he had to drive slowly:

SDS(p)

If we now add another rule that describes the consequences of not driving slowly, e.g. a fine, and also tell the system in the same formal language that he was in fact driving fast, the AI can infer that he is now liable for this fine, and print out a fixed penalty notice. This, in a nutshell, is how a GOFAI legal AI works. This approach to legal AI rose to prominence from the 1990s onwards, but it is still today the way in which many legal apps and chatbots reason about the law.

As indicated above, in this formalisation we leave a crucial aspect of the "norm-likeness" of the rule implicit and hidden in the "SDS" part: it is not visible, from the system's perspective, that we are dealing here with a norm that directs behaviour, an "ought" rather than a description. So, as formalised, the system could not automatically infer from SDS(p), which we interpreted as "the driver should drive slowly", that it is therefore prohibited not to drive slowly, or in other words to drive fast. In some applications, we may want the program to perform this inference automatically, and one possibility is to use a richer logic, deontic modal logic, where the above formula would appear as

∀x (D(x) ∧ ∃y (P(y) ∨ C(y) ∨ PC(y)) → O(DS(x)))
The O means here "ought to", an operator that applies to the sentence following it and modifies it. Just as it was possible to define formally, and for all contexts, the meaning of the "if-then" arrow through a truth table, the meaning of "ought" can be formally defined (for an overview see Meyer, 1993). Whether the AI developer opts for this more expressive language or leaves the "ought" operator unanalysed and hidden in the "S" parameters will often be a question of convenience, driven by the needs of the application. Unpacking the "ought" makes more sense, for instance, in a system that assists judges than in one that regulates the driving of an AV.

Importantly though, hidden behind these technical considerations is another deep jurisprudential issue: what, really, are the norms that we formalise? In the influential Austinian understanding of norms, laws are commands directed from the sovereign to the citizen, backed by sanctions (Austin, 1880). Here, their nature as "command" is essential, and we may want to represent it. In a very different approach, popularised by Herbert Hart (2012) in a common law environment, but also suggested by Karl Binding (1872) for the civilian tradition, legal rules often look descriptive for a reason. The law of homicide in many jurisdictions does not say "Thou shalt not kill"; it says "Someone who intentionally kills another human being without justification is a murderer", possibly with another sentence of the form "The punishment for murder is a prison sentence ranging from 3 years to life". These sentences read on their surface like ordinary statements of fact, though we intuitively understand them as also implying that killing is wrong. For Binding and Hart (and La Mettrie above), the main audience for legal norms is not citizens, but judges and other legal officials. Depending on the audience, just as with any translation, different formalisations can be more or less appropriate. This means that the decision of the AI developer, even if driven mainly by technical considerations, cannot but take sides in complex jurisprudential questions: every legal technology inevitably is aligned with some conceptions of justice and the law, and silent on others.
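To make the discussion so far more concrete, here is a minimal sketch of how the Highway Code example could be written in PROLOG, the kind of general-purpose language mentioned above. The predicate names (driver/1, pedestrian/1, should_drive_slowly/1 and so on) are illustrative choices made for this sketch, not part of the statute or of any deployed system; note how the deontic "should" is, once again, simply buried in a predicate name:

% Declare predicates that may have no facts in a given scenario.
:- dynamic cyclist/1, parked_car/1.

% Rule 152, rephrased as above: if someone is driving, and a pedestrian,
% cyclist or parked car is present, they should drive slowly.
should_drive_slowly(X) :-
    driver(X),
    risk_factor_present.

risk_factor_present :- pedestrian(_).
risk_factor_present :- cyclist(_).
risk_factor_present :- parked_car(_).

% Facts supplied by the human operator, the "oracle" discussed below.
driver(peter).
pedestrian(anna).

% ?- should_drive_slowly(peter).
% true.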


3. TRADUTTORE TRADITORE – THE PITFALLS OF FORMALISATION

While the above formalisation of a simple sentence may look straightforward, it is anything but – and in some sense cuts a number of corners. In what follows, I will disclose some of the "cheats" and discuss why they matter.

3.1 Cheat 1: "And", "Or" and the Issue of Legislative Intent

First, the natural language norm said "pedestrians, cyclists and parked cars". However, our reformulation changed this into a nonexclusive "or" ("∨"). For the legally trained reader, the reason is obvious: the norm aims to protect vulnerable traffic participants such as cyclists. If we had used the "and", the rule would only "fire", i.e. allow us to infer the desired action, if there were simultaneously pedestrians, cyclists and parked cars on the road. Our Peter could speed to his heart's content around children playing in the street, as long as none of them is on a bike. The reformulation seems therefore to be perfectly adequate in the light of our background knowledge about traffic, UK legislators, their attitude to pedestrians and other such factors. Still, we should ask who, and with what authority, should make this decision during the development process, especially when the end product is to be used to generate speeding fines in a semi-automated way. After all, we have chosen here a reading that is more burdensome for the driver. If the intended application is one of criminal law, such as issuing a speeding fine, the "in dubio pro reo" rule normally requires us to choose, between different possible interpretations, the one most advantageous to the accused. Do we need to find a legal precedent that authoritatively supports the way we formalised the sentence? Can any programmer make that decision, or does it need to be "signed off" by someone licensed to practise law? How should that design choice be documented, given, for instance, the transparency duties in the proposed EU AI Act? Conversely, if the intended application is as a "guardrail" that controls the AIs on an autonomous vehicle, where the outcome is merely to lower the cruising speed, the "in dubio" metarule is irrelevant. At worst, the car then drives more safely than the legislator required. We can see here a recurrent theme of this chapter: to determine if a formalisation of the law is adequate requires knowledge of the intended deployment of the AI, and can't be decided in isolation.

3.2 Cheat 2: From Probabilities to Facts

A second cheat was our omission of the word "likely" in "where there are likely to be pedestrians, cyclists and parked cars". As formalised, the rule only applies when there are actual pedestrians present. To formalise the idea that something was not the case, but "could have been the case", requires either a "calculus of probabilities" (Robertson & Vignaux, 1993), or "alethic" additions to our language that express the idea that something was necessary, possible, impossible or likely to happen (see generally Cresswell & Hughes, 2012; historically, for the relation between these and legal reasoning, see Lenzen, 2005).

3.3 Cheat 3: A World without Pedestrians – What a Thought

The final "cheat" happened when we translated the reformulated natural language sentence into a formal one. Our formal version of the rule is much more demanding than the natural language version in yet another respect, beyond the change of "and" to "or". Read literally, everyone has to drive slowly, always. That's because the condition ∃y (P(y) ∨ C(y) ∨ PC(y)) is already fulfilled if there is a single pedestrian, somewhere on this planet. The "∃" only means "there is at least one", and nothing in the rule as formulated expresses the idea that this one person needs to be anywhere nearby. Intuitively, we understand under which conditions the legal rule matters: the driver is in an inherently dangerous environment, with sufficient risks in close proximity to require extra care. The law, however, does not specify just how many pedestrians or cyclists are needed to constitute such a risk, or how close by they have to be; this is left implicit and relies for interpretation on the intuitive background knowledge and common sense of the norm addressee. As written, the formalisation is therefore plain wrong. And yet, for many possible applications of a legal AI, it would still function perfectly. If the system were to be used, for instance, to assist a court in issuing speeding fines, ensuring correct operation is easy: the system requires input from the operators. These operators act, in modern parlance, as an "oracle", the interface between the world and the algorithm. Remember that we said above that the AI only understands the meaning of the logical constants. The rest is "invisible" to the system and requires input from the user. In our case, that is the fact that indeed, Peter was driving and there were pedestrians and cyclists in the vicinity. Now, no competent human operator would give the system as input "there are pedestrians …" when the event in question happened at 4 am on an empty countryside road, and then justify this by pointing out that there was a pedestrian, but 500 miles away and six hours earlier. The competent human operator knows this is not what is meant, and in this way compensates for the shortcomings of the formalism. But what about an incompetent operator who just "ticks boxes" unthinkingly and as a result tells the AI that yes, there are pedestrians (somewhere in this world)? To be able to contest the fine the system would generate, the driver could request an explanation, but who exactly in our scenario owes the explanation, and can any single person satisfy the driver's query? The developer can point to the fact that while the rule was not a perfect translation of the legal text, it was a sufficiently adequate one, assuming a modicum of common sense on the part of the operator. The operator can point to the fact that the data they inputted was literally true. The system itself could generate the formal proof, and demonstrate in this way that it was working correctly – the advantage of GOFAI is that from the machine perspective it is highly transparent and interpretable, precisely because it uses isomorphic, symbolic representations of the rules that inform its decisions. All the problematic mistakes and choices were made long before it carried out that operation; or rather, many decisions by different actors, each individually not "that wrong", caused in their interaction the wrong outcome.

The situation becomes more complicated if the intended use is not that of a legal decision-making system, but as a way to control an automated vehicle. In that case, there is no longer a place for human involvement; rather, the car uses as input readings from its sensors.
However, AVs lack a human operator's common sense and background knowledge, so for this purpose the above formalisation would indeed be inadequate. Rather, the rule would have to be rephrased as something like:

If you are in driving mode, and your speed is X, and your sensors identify a pedestrian within n1 metres distance, or a cyclist within n2 metres distance, or a parked car within n3 metres distance, then slow down to speed Y.
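Purely as an illustration, such a rephrased rule might be sketched in PROLOG as follows; the numeric thresholds and the target speed are invented placeholder values for this sketch, and sensed/2 stands in for whatever interface the sensor stack would actually provide:

% sensed(Object, DistanceInMetres): assumed sensor interface.
% The distances (20, 10, 15) and target speed (20 km/h) are hypothetical.
target_speed(20) :- sensed(pedestrian, D), D =< 20.
target_speed(20) :- sensed(cyclist, D), D =< 10.
target_speed(20) :- sensed(parked_car, D), D =< 15.

% Example sensor reading:
sensed(pedestrian, 12.5).

% ?- target_speed(S).
% S = 20.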

The values for n1 … n3, how many pedestrians, how far away, etc., would need to be determined by someone, but they can't be read directly from the statute. Several new choices become available at this point. The values could come from the law in a different way, for instance from past court decisions in speed-driving cases. The task then becomes to extract from the decided legal cases those features and aspects that disambiguate the meaning of the statute (Borges et al., 2023). This way, a considerably richer and more fine-grained model of the operating law is possible than using the governing statute only. However, in this approach legal knowledge is still understood as a set of law-like rules; the use of case law for disambiguation happens "behind closed doors", by the development team, during the process of reformulating the law prior to formalising it. The resulting formalised rules do not carry any indices that make the cases from which they originate visible and explicit. Contesting their correctness now requires consulting material external to the software and the statute that is formalised. This could be any documentation that describes the choices that the developers made in selecting and analysing the cases. Alternatively, the challenger could carry out their own legal analysis of the case law and develop an alternative formalisation from scratch.

A different approach is to make the reasoning with precedents an explicit part of the formal representation, and shift it from the preparatory stage of building a legal AI to the formal system itself. This is the approach taken in case-based reasoning (CBR) systems, which emerged contemporaneously with rule-based systems from the 1980s onwards (see, e.g. Ashley, 1992, 2002). While rule-based systems such as TAXMAN (McCarty, 1976), Divorce Advisor (Duguid et al., 2001) or ADVOCATE (Schafer & Bromby, 2005) focus on the application of an abstract legal rule to the facts of a case, case-based reasoning systems for legal applications such as CATO (Rissland et al., 2005) or IBP (Brüninghaus & Ashley, 2003) analyse the way in which a past decision is used in a process of analogical reasoning to guide the decision in the new case. For this task, there has to be a way to formalise not just legal rules, but also court decisions. In a legal CBR system, cases are formally represented through a more complex structure that contains the names of the parties, the outcome or disposition by the judge, and a range of "factors" that describe the fact situation that yielded the outcome (see for an overview the papers in Atkinson, 2009; for an application to statutory interpretation, see Araszkiewicz, 2013). "Factors" are factual aspects of a case that are pertinent to the decision to some degree or other, and are likely to appear across a range of similar cases. In our example, relevant factors could be "number of pedestrians" and "closeness to the car", but also whether or not it was raining on the day, the road conditions, or whether the driver had reason to believe that the pedestrian had seen them. A non-relevant factor would be, arguably, the hair colour of the driver, their hometown or their gender (for the importance of legal justification, see Atkinson et al., 2020). So while the driver in our case may well have been a male redhead from Edinburgh, this would not be formally represented in the case structure; that there was a slight drizzle on the day, but visibility otherwise good, might be.
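A toy version of such a case structure, with invented case names, outcomes and factors, might look as follows in PROLOG (intersection/3 is provided by SWI-Prolog's list library):

% case(Name, Outcome, Factors): a minimal case template for illustration.
case(precedent_1, fine_upheld,
     [pedestrians_close, high_speed, poor_visibility]).
case(current_case, undecided,
     [pedestrians_close, high_speed, good_visibility]).

% The factors two cases share: the raw material for analogical reasoning.
shared_factors(C1, C2, Shared) :-
    case(C1, _, F1),
    case(C2, _, F2),
    intersection(F1, F2, Shared).

% ?- shared_factors(precedent_1, current_case, S).
% S = [pedestrians_close, high_speed].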
The precedent case (PC) and the current case (CC) are then both formally represented in such a case template. The reasoner then calculates whether the overlap between the factors of the two cases is strong enough, quantitatively and qualitatively, to transfer the decision reached in the PC to the CC. CBR still uses symbolic representations of "the law", but it differs from rule-based systems in its understanding of what "the" law is. In some ways, it is already closer to data-driven

approaches that use machine learning, and it also shares some of their problems. For instance, which factors to represent in a case as important, and which ones to disregard and render invisible, inevitably reflects the views of the formaliser. This constitutes an inroad for biases very similar to those found in ML when weights are assigned to features.

Rule-based systems have another advantage. They make it easy to ascertain if the AI is correct at least in this sense: we can determine if all the rules in the AI are authoritative. For this, we simply link the formal rule to its corresponding natural language rule (regardless of whether we think it is the best way to represent that rule). Second, we can also determine if the knowledge base is complete. As there is a finite number of rules in a given statute, we can check if each of them has a formal counterpart. With CBR, we can still check if each of the cases in the knowledge base is authoritative, that is, that it has been decided by a competent court and not overruled by a higher court. But it normally becomes impossible to also check for completeness. In many jurisdictions, only a small number of court decisions get published. Appeal court cases are much more likely to get published than cases of courts of first instance, but in order for a dispute to reach the appeal courts, the parties must have the social capital and financial resources to continue litigating. Other biases can be the result of the uneven distribution of digital equipment and skills between courts in different regions of a country.

Another decision is whether to include cases from other jurisdictions. In the paper by Borges et al. cited above, only German court decisions were used. For a German lawyer, this is such an obvious choice that it is not even worth discussing: of course, for a formal representation of a German statute, only domestic courts are relevant. But for a UK lawyer, the answer would be far from obvious. The common law uses foreign decisions much more liberally, at least as persuasive, if not binding, precedent. The reason for this different attitude is again deeply connected to historically grown conceptions of law and justice: a child of the modern nation-state, imposed by a central authority, for the former; an organically grown expression of informal conceptions of justice rooted in human nature for the latter. For a computer programmer, to decide which foreign cases, if any, to include is therefore far from trivial; the choice inevitably reflects deeply ingrained commitments to a vision of the law.

Whatever approach we choose, though, we will have changed the meaning of the original natural language sentence. The law, as formulated, was left intentionally vague and "open-textured". While there are clear examples where the driver should have slowed down, there will also be borderline cases. By giving precise values to the various parameters (the "n"s) we introduce a precision that the original was lacking. This increase in precision through formalisation has been lauded by some as an added benefit, as it achieves the value of predictability and legal certainty (Allen, 1956), the very thing La Mettrie hoped to achieve through legal automation. But we can also see vagueness as a necessary human element that allows us to mitigate the harshness of the law by mercy. If we consider vagueness not as a bug to be fixed, but as a positive feature of law, we may want a formalism that has a greater degree of flexibility.
It is for this reason that at some point fuzzy logic was seen as a better medium for legal formalisation, as it reduces the "increased precision" that formalisation would otherwise bring (see, e.g. Mazzarese, 1993; Philipps & Sartor, 1999). The point for us is to emphasise that what may look superficially like a mere technological question, happily left to computer scientists, in reality reflects deeply ingrained and culturally mediated normative assumptions about justice and the nature of the justice system. Depending on these philosophical commitments, vagueness in law is either a problem that the AI developer should fix, or an important aspect of a humanist perception of law that could be distorted by legal AIs.

3.4 Cheat 4: Just Passing through Your Country …

We note in passing another "cheat" with our formalisation above – typically for most legal AIs, it does not explicitly represent from which jurisdiction it comes. This is another aspect of the law that we "intuitively know" and never consider necessary to state, or state only in the manual. As in the examples above, the formal rule, as stated, is false – but normally, we can rely on the user to compensate for this loss in translation. If a system is developed by and for German lawyers, for adjudicating German cases, we may not need to formally represent the jurisdiction. But if we build an AV that might cross borders while in operation, the introduction of formal parameters and indices for jurisdictions may be needed, together with new sets of formal rules that determine which country's laws apply (a small sketch of such a jurisdiction index closes this section).

We have used this example of a very small fragment of road traffic law to introduce some basic issues and vocabulary. The theme that emerged is that while the aim of legal technology is to automate legal reasoning, the process of formalising law is not in turn a mere mechanical, automated process. Instead, it is an exercise in normative reasoning that touches upon intuitions about justice, fairness and the nature of legal rules. What counts as an adequate formalisation can differ between jurisdictions as much as between intended applications. And every design choice will inevitably reflect often implicit assumptions about the nature of justice and the role of law in society – which becomes an issue if these are a) outsourced to software developers and b) remain in the "black box" that is the development process.
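The sketch referred to above: a minimal PROLOG rendering of a jurisdiction index, with invented rule and country names, under the simplifying assumption that the applicable law is simply that of the AV's current position:

% holds(Obligation, Jurisdiction): each rule is tagged with the legal
% system it belongs to.
holds(drive_on_left, uk).
holds(drive_on_right, germany).

% The AV's current position, e.g. from its navigation system:
position(germany).

obligation(O) :-
    position(Country),
    holds(O, Country).

% ?- obligation(O).
% O = drive_on_right.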

4. LAW'S GOLEMS

This section will use the myth of the Golem as a foil for contemporary discussions on legal AI, to deepen some of the ideas introduced above. The Golem story resonates with current discussions on law and robotics on several levels. The first Golem was also a "black box". It was not given a voice, and with the inability to speak came an inability to account for its actions. So when another rabbi, Rav Zeira, asked the Golem what it was and why it was doing what it did, it could not answer. The penalty for failing the first-ever Turing test was sharp and fast: "You were created by the sages; return to your dust". Today, we too are worried about explainable or interpretable AI: it is not enough for many autonomous systems that they deliver the right result; we also want to know why exactly they behaved as they did. Increasingly, for many AI-enabled or supported applications, this is also becoming a legal requirement. We encounter it in the duty to give reasons for fully automated decision-making under the GDPR, though its full scope there is contested (for an overview, see Kaminski, 2021). Even more detailed requirements are stipulated in the proposal for an EU AI Act (Hacker & Passoth, 2022).

The story of the Golem has been retold countless times over the centuries. In these retellings, the Golem typically performs any task given to it, but performs it literally and unthinkingly. In the Golem of Prague, this task is to protect the Jewish community. Denied a proper legal status by the Christian majority, and subject to constant discrimination and harassment, the best they can hope for is a benevolent tyrant. But even a benevolent tyrant is a tyrant, and what he gives as protection on a whim he can equally rescind on a whim. In such an environment, "living life lawfully" becomes impossible, and the pursuit of a coherent life plan and a life with integrity is brittle and fragile (Bańkowski, 2001; Bańkowski & Schafer, 2007). In such a chaotic environment, advocating "mechanistic" legal rule-following is not a bug that dehumanises the legal system; it is a design feature that holds the promise of justice, and with that freedom, for all. Humans like Emperor Rudolf may be unpredictable, but with a Golem, we can know, by design, exactly what it will be doing.

Or can we? Because of course, this is not how the best-known of the Golem stories end. Rather, the Golem inevitably becomes dangerous for its human owner. In one such story, the owner forgets to switch the Golem off on a Friday evening. As a result, it continues to perform its assigned task when the Sabbath begins. This, however, breaks Jewish law, the law it is designed to follow, always and unwaveringly. It now faces a normative conflict: the rule "obey the command given to you by your owner" conflicts with another rule it is programmed to abide by, "respect the Sabbath". Ultimately, this destroys the Golem, when the internal rule conflict becomes too much to bear. The underlying idea had seemed sound – ensure the safety of an autonomous device by – literally – hard-baking the legal rules into its clay, the "Shem". However, all this assumes that our legal rules are consistent, and while this is an aspiration for modern legal systems, in reality we know of course that rules often (seem to) contradict each other.

We find the same problem with the golems of our age, autonomous systems. Let us modify our AV example from above. Imagine the following: a police officer spots an abandoned car parked in front of a primary school. On inspection, he realises that it contains a bomb, programmed to go off soon. He can't safely defuse it in situ; the only way to avoid the death of hundreds of innocent people is to risk his own life and drive the car as fast as he can away from bystanders. In this situation, we would not want the AV to tell him "I can't let you do that, Dave" and artificially slow the car down to local speed limits. Golem-like rule-following, without human override, is one of the ways in which the historical man of clay and its modern reincarnation can cause harm. Can we just add another rule, one that says "in an emergency, driving as fast as possible is allowed"? If we use PROLOG or a similar programming language based on classical logic, the outcome would be similar to that in the Golem story. Classical logic, and the AIs that are built with it, are as unforgiving towards contradictions as the Golem was, a problem known since the Middle Ages as "logical explosion" (Priest & Routley, 1982). From any contradiction in the program, every statement becomes provable, as a counterintuitive and undesirable side result of how formal logic works. From the contradictory set of rules {"drive below local speed limits if there are pedestrians nearby"; "drive as fast as possible if the car contains a bomb"}, the car could also derive, as counterintuitive as this sounds, "drive on the pavement and aim to hit as many people as possible".
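The mechanics behind this "explosion" can be sketched in four lines of classical propositional reasoning. Let P stand for "the car stays below the local speed limit" (derivable from the first rule), ¬P for its negation (derivable from the second), and Q for any sentence whatsoever:

1. P        (premise, from rule 1)
2. ¬P       (premise, from rule 2)
3. P ∨ Q    (from 1, by or-introduction: a disjunction is true if one of its disjuncts is)
4. Q        (from 2 and 3, by disjunctive syllogism: ¬P rules out the first disjunct)

Since Q was arbitrary, the pavement instruction follows just as readily as anything else.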

5. LAW FOR GOLEMS

So, what should Rabbi Loew have done, and what should a modern AV designer do, to keep an automaton law-compliant and safe? There are a couple of strategies, each with its own advantages and disadvantages.

5.1 Making Legal Formalisms Golem-Proof

First, the designer could have disambiguated the law before formalising it. Rather than using literal, direct translations of two contradictory norms, they should have looked for the intended meaning behind the rules, and found a translation that avoids the conflict. Maybe the second rule could be formulated as an exception to the first rule: "If the speed limit is X, AND the driver does not push the 'override button', reduce speed as soon as the speed sensor gives the reading 'X+n'". This rule is not found anywhere in the UK Road Traffic Act, but it is how the legislator would have wanted the law to be understood. To achieve this result in a legal AI, it may be necessary to use a more expressive formalism as well, in the case at hand one of the many forms of "non-monotonic" logic that can express the idea that often, prima facie applicable rules can be "defeated" if an exception applies (see, e.g. Johnston & Governatori, 2003). For our mini-formula introduced above, this typically means replacing the "if-then" arrow with a new constant, "~", so that "A ~ B" intuitively reads: "If A, then typically B" or "If A, then B unless challenged by a sound counter-argument" (a minimal executable sketch of this rule-plus-exception structure follows below).

Sometimes, the relevant exceptions can be found in the legislation itself. Very often, legal rules explicitly refer to other norms, and a faithful formalisation needs to preserve this element. Even more often, a law may state explicitly in one of its introductory sections that all the norms that follow should be construed as exceptions to another law, or that, conversely, they do not apply when a named, higher-ranking law also applies. In this case, it is not enough to formalise every individual rule in isolation; rather, the programmer has to read the norm in its context and ensure that references to other parts of the same law, or references to other laws, are formally represented. This is how lawyers are trained to read a statute, and replicating this process in the design stage of building a legal AI seems normatively unproblematic, though we move progressively away from a simplistic notion of legal formalisation and rule isomorphism that allowed us to read the correctness of a proposed legal AI directly off its code. On the automatic identification and formal representation of legal cross-references see, e.g. de Maat, Winkels and van Engers (2006) or Maxwell et al. (2012).

Neither strategy would work, however, in our example. Here, the higher-ranking norms that allow the violation of the Road Traffic Act are not mentioned directly anywhere in the statute. Rather, lawyers understand the hierarchy of norms and values that turns our laws from a mere list of rules into an organised system, and know, for instance, that the general rules regarding the "necessity defence" trump, in our case, the UK Road Traffic Act. This indicates a much more significant problem in the task of formalising law: the legal system is first and foremost a system; it aims to promote a coherent set of values, and as a result the meaning we assign to a given legal norm may depend on the meaning we assign to any other norm in the system. Some jurisdictions give this insight itself the status of a meta-norm, as an instruction to choose the interpretation of a norm that fits best with the interpretation of all other norms (Felix, 1998).
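The sketch promised above: one minimal way to express such a default rule with an exception is PROLOG's negation as failure, a simple non-monotonic device (the predicate names and the scenario are again invented for illustration):

% The speed rule applies by default, unless an exception can be derived.
must_slow_down(Car) :-
    hazard_nearby(Car),
    \+ emergency_override(Car).   % negation as failure: no proof of an exception

emergency_override(Car) :-
    carries_live_bomb(Car).

hazard_nearby(police_car).
carries_live_bomb(police_car).

% ?- must_slow_down(police_car).
% false: the exception defeats the prima facie rule instead of exploding.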
In this need for context-sensitive interpretation we can see a parallel to translations between natural languages – should a translator aim for a literal sentence-by-sentence translation, or for one she thinks most closely matches the effect the author wanted to have on their audience overall? A good translator will at the very least avoid introducing additional ambiguities. If, e.g., a term for a character trait in the original language has two possible translations in the target language, one with positive and one with negative connotations, they will choose the one that is consistent with the way the person is described elsewhere in the same book. Would they, or

should they, go further than that and also aim at maximum consistency across different novels about the same character (e.g. the Sherlock Holmes character across the stories), even if this risks "repairing" a real inconsistency in the original? Or is the risk too great that this obscures a character development that was intended by the author? How much should they consider the background knowledge of the target audience, which may be different from that of the audience the original author had in mind? In translation studies, the technical skills, and the professional and ethical implications, of these decisions have created a comprehensive body of knowledge, most recently also as a response to the rise of machine translation (see, e.g. Coban, 2015; Floros, 2020). There is regrettably no similar body of knowledge when it comes to formalising law, despite the similarities. In law, these issues raise additional questions of legitimacy and transparency. The programmer takes on a role that society normally assigns to the legislator or the judiciary. This may be less of an issue if the aim is to build a car that adheres to road traffic law. Just as we as citizens have to decide if a given law applies to a situation, and in that sense constantly "interpret" the law, the programmer has to decide what action a given situation requires of the car. The situation is different when the AI replaces judicial or other law-based decision-making by a public authority, such as a decision on whether a citizen is entitled to certain benefits. Here we face the danger that software developers, in the process of formalisation, take on a role that they are neither qualified nor authorised to perform – deciding "what the law is".

5.2 Making Golems Inconsistency-Tolerant

So far, we discussed how, during the formalisation process, the "raw data" of the law has to be reformulated and "cleaned up" first, not unlike the way machine learning approaches to AI first require extensive data preparation that also happens "behind closed doors". A very different strategy is to keep the inconsistency, at least initially, represent it faithfully in code, and prevent logical explosion through a modified logic. There are a number of formal systems that have been designed to achieve this. Paraconsistent logics, for instance, tolerate local inconsistencies, and allow us to represent how these can be resolved over time (see generally Priest, 2002; for an application to law, see Ausín & Peña, 2000). Rather than asking the programmer to sanitise the law prior to formalisation, the process of disambiguation itself gets represented in the programme. This may also involve formalising those meta-rules that lawyers deploy to resolve inconsistencies, for instance the legal rule "lex posterior derogat legi priori" (a sketch follows below). Rules like this are then not used informally by the programmer prior to formalisation, but become part of the computer representation of the law. This gives a richer – and less idealised – account of the law. It also transfers part of the "invisible" process of reformulating the law by the developer into an explicit and visible part of the operation of the legal AI. As with classical logic, the vocabulary of paraconsistent logic too can be extended with deontic operators, so that we can express inconsistent obligations (McGinnis, 2007).
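The sketch just promised: one way of making the "lex posterior" meta-rule an explicit part of the representation, with invented norm identifiers, years and contents:

% norm(Id, YearEnacted, Content).
norm(road_traffic_rule, 1988, obligation(drive_slowly)).
norm(emergency_rule, 2004, permission(drive_fast)).

% Two contents that cannot both be complied with:
conflicting(obligation(drive_slowly), permission(drive_fast)).
in_conflict(A, B) :- conflicting(A, B).
in_conflict(A, B) :- conflicting(B, A).

% Lex posterior derogat legi priori: the later norm prevails.
prevails(N1, N2) :-
    norm(N1, Y1, C1),
    norm(N2, Y2, C2),
    in_conflict(C1, C2),
    Y1 > Y2.

% ?- prevails(N, road_traffic_rule).
% N = emergency_rule.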
5.3 Give the Golem a Voice: From Monological to Dialogical Formalisation

So far, we have treated the law in our examples as a set of instructions directed at humans, machines or human-like machines. The Golem is told which laws to abide by and, the Golem having no voice, these laws are not open for debate. Similarly, an AV that is designed to be law-compliant will simply reduce its speed if the road traffic law so requires. The AV in this case is unlikely to explain itself, let alone argue with the driver or developer about the merits of the law. More ambitious, but still "monological", are legal expert systems that assist legal decision makers. They will produce a decision as output, but also give a valid legal reason for it. In our example, the output could read: "You are given 3 penalty points on your licence, because the law says in sec 152 of the Road Traffic Act that 'in the presence of pedestrians, a driver must lower their driving speed appropriately', and you did not do so." But is this an adequate justification? It tells me why I was found guilty, but it does not tell me why my arguments – for instance, that speeding was necessary to prevent an even greater danger – were rejected. What this indicates is that a monological understanding of the law omits elements that are important, maybe even constitutive, in other contexts. We mentioned above the different conceptions of law found in Austin and Hart. If we understand law just as a command, directed at a citizen or a Golem by an all-powerful sovereign, thinking of it as a monologue makes sense. But if laws are mainly instructions to officials, in particular judges, then the contested nature of law becomes more prominent. In the trial, it is essential that both sides have a voice, and the process of deliberative evaluation of their respective arguments by the judge is constitutive of a fair trial. "Explaining" the decision in this case also means explaining why some arguments failed. Such a richer notion of the trial, and of the type of explanation it generates, can be found, e.g., in Brownlee's (2011) development of Duff's communicative theory of the trial.

Going back to our speed-driving example from above, we now shift from an AI that simply reasons:

1) You should drive slowly and carefully on streets where there are likely to be pedestrians, cyclists and parked cars.
2) There were pedestrians, cyclists and parked cars.
Therefore
3) You should drive slowly.

to one that can make a disagreement between two parties (Defendant (D) and Prosecutor (P)) explicit and reason about the arguments they are making:

P, proposition 1: You should drive slowly and carefully on streets where there are likely to be pedestrians, cyclists and parked cars. There were pedestrians, cyclists and parked cars, but you did not drive slowly; therefore you did something you should not have done.

D, proposition 2: While true, I transported a bomb away from pedestrians, and there is a general necessity exception that allows breaking the law if needed to save a life (attacks the antecedent of proposition 1).

P, proposition 3: While 2 is generally true, it only applies when the value of the law that was broken is less important than the value that was protected. Here, however, speed limits are there to protect life, and balancing life against life is not permitted under the necessity defence (attacks the attacking proposition 2).

While early legal AIs in the 1980s and 1990s followed the first, monological model of legal reasoning, the limitations of this approach became quickly visible: much of the law is in the

form of debate and disagreement, and if the AI developer has to resolve all these disagreements even before the formalisation can begin, they:

a) act outside their competence;
b) potentially usurp the role of the judges;
c) fail to produce adequate explanations of the decision; and
d) simply miss much of what makes law unique.

Formal systems were therefore developed that again extend the simple language we encountered above. Formal dialogue systems come in a huge variety of forms, all with different expressive power and abilities, but they typically share the idea that an "attack" relation and a "defence" relation between arguments can be formally expressed (see, e.g. Prakken & Sartor, 2015; Walton, 2005; for an application to case-based reasoning, see Prakken et al., 2015). The output is then no longer the "one right answer" but rather a complex map of interrelated arguments. While it also becomes possible to formally define the "winner" of the debate as the party that has at least one undefeated argument, such a map nonetheless makes it easier for the losing side to contest the decision.
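A toy rendering of the speeding dialogue above as such an attack graph, in the style of abstract argumentation frameworks; the naive acceptability check below is a simplification that works for acyclic graphs like this one:

% The three propositions from the dialogue, and the attack relation.
argument(p1).   % P: you broke the speed rule
argument(p2).   % D: necessity exception (attacks p1)
argument(p3).   % P: necessity does not cover life against life (attacks p2)
attacks(p2, p1).
attacks(p3, p2).

% An argument is acceptable ("in") if each of its attackers is defeated.
in(A) :-
    argument(A),
    \+ (attacks(B, A), \+ defeated(B)).
defeated(B) :-
    attacks(C, B),
    in(C).

% ?- in(p1).  true: p2's attack is itself defeated by p3.
% ?- in(p2).  false: p2 is defeated, so the prosecutor's case stands.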

6. NEW LAWS FOR OLD GOLEMS

So far, our strategy was either to remedy any inconsistencies prior to formalisation, or to make the reasoning about inconsistencies explicit within the AI. A more radical strategy is to simply choose one of the rules and disregard the conflicting norm altogether. Maybe the benefits of reducing road traffic accidents through AVs that strictly adhere to speed limits are worth the loss of flexibility in a very small number of exceptional cases. Maybe the protection that the Golem provides is worth the theoretical risk of its malfunctioning, provided we can mitigate the harm. After all, we are also happy to use ATMs that will never give us money that is not in our account – even if, in a hypothetical case, it may be needed to pay off a kidnapper who threatens to kill his hostage. A human bank manager might have been swayed by this argument, but still, as a society, we consider the advantage of ubiquitous access to money higher, and deal with the exceptional cases through alternative strategies.

So far, we treated this as a problem of the AI: if the formal language that the computer understands is insufficiently expressive for a given task, the "fault" lies with the approach to computation, and we have to develop better and more expressive languages. But it is of course also possible to invert this argument: maybe, if our legal system is riddled with contradictions and ambiguities, that is the problem to fix. Maybe a good legal system should be easily rendered computational, for the very same reasons that we develop legal AI in the first place: because this is the way to achieve justice and, in particular, formal equality. What the critics consider pathological "legalism" may just be a particularly radical form of legality (see, e.g. Bańkowski & Schafer, 2007; Diver, 2021).

Legal AI from its inception was closely linked to such a vision of formalism. Early legal technology in the 1980s and 1990s, in particular, often took the formalist account of law as descriptively correct and mirrored it in the design of expert systems. The lack of success of these systems was then seen as a consequence of this association of legal technology with an (inadequate) legal theory (Zeleznikow & Hunter, 1995; Zeleznikow, 2019). Legal expert systems failed, in this analysis, due to descriptive inadequacy – lawyers could not use them because the reality of the law was too different. But there was of course always an alternative: formalism might fail as a descriptive theory of the law, and as a result legal AI may distort its subject matter as it currently is. But maybe the law ought to be logical, simple and rule-based. Rather than developing more and more complex formalisms to capture the law as it is, maybe we should change the law to make it more amenable to formalisation. This was the idea behind the "EDV-compliant legal drafting" movement in Germany, promoted by academics like Herbert Fiedler and implemented in some low-level legislative projects (so Fiedler, 1976).6 The idea was to incorporate a future formalisation already at the drafting stage, and to write legislation simple enough for 1970s natural language parsers. The project was ultimately unsuccessful, also because the limitations of the technology at that time would have forced legislators to use highly unintuitive language, while any possible benefit remained purely speculative. Today we see a similar idea in the "law as code" movement (programmatically Lessig, 1999). Law as code complements the idea of using "code as legal enforcement" by changing the way legislators enact and promulgate rules, so that in addition to the natural language version directed at citizens, an "authorised" translation into software code is enacted as well (Waddington, 2021; Waddington, forthcoming).7

This approach to formalising law addresses one of the normative concerns expressed in this chapter. As every formalisation of the law also distorts its meaning, with what authority can AI developers make the necessary decisions? How can they formalise the law without usurping the role of the legislator? With an approved formalisation enacted in parallel with the natural language text, this danger can be mitigated. However, as we have seen, the formal version of the law is inevitably more austere, simple and rigid – it serves one specific aspect of our intuition about justice, but in the past, this intuition was always balanced with other, competing values. So, we have rules, but also discretion; harsh punishment, but also amnesties; we treat like cases alike, but try to be responsive to the particularities of a given case. Over time, we can see the influence of these philosophies wax and wane, but they never entirely excluded their opposite, not even during the heyday of formalism in the 19th century. Or, as the Supreme Court of Alabama opined in Allan v. State (1973): "We have not, and hopefully never will reach the stage in Alabama at which a stone-cold computer is draped in a black robe, set up behind the bench, and plugged in to begin service as Circuit Judge".8 Very clearly, "narrow, legalistic ways" are not enough, and equally, being "stone-cold" is not an endearing quality in a judge. Humans are good at balancing conflicting normative ideals, and at reasoning with and about rules that are in conflict. But as we saw above, legal AI is much less accommodating.

6  For an implementation see, e.g. the practice guidance of the ministries of Niedersachsen: "Grundsätze für die Fassung automationsgerechter Vorschriften" der Niedersächs. Ministerien of 1 June 1970.
7  For a practical project see the Jersey legislative drafting project: https://www.gov.je/Government/NonexecLegal/StatesGreffe/SiteAssets/pages/legislativedraftingoffice/Introduction%20to%20the%20Computer-Readable%20Legislation%20Project.pdf
8  Allan v State 290 Ala 339, 342 (1973).

Formalising law, or the return of the Golem  77 is wrong, or does not serve a normative good, but that it has to do so at the exclusion of all other aspects of our legal ideals. If we change what we expect of the justice system – or even worse, when the prevalence of legal technology subliminally and clandestinely lowers our expectation of what we can demand of justice – then formalising the (new conception of) law becomes easy(er).

7. CONCLUSION

We have now discussed all the points that our initial question raised: is law computational, or more specifically, is it formalisable? And it should be clear by now what our answer is: as worded, the question is meaningless, as the answers will be both trivial and misleading. Can law be formalised? Yes, of course! Define a function f so that the first sentence of a statute is assigned the propositional variable A, the second sentence the propositional variable B, the third sentence … Perfect formalisation, and no problematic judgement calls by the formaliser are needed. But of course utterly useless for any practical purpose. So maybe we need more: can law be formalised? Yes, of course! We simply need to simplify the law, and lower our expectations of what the justice system does. We may even have good normative reasons for doing this.

Both answers are correct, and both are misleading, at least if the purpose of formalisation is to build useful tools. Law is trivially formalisable, and even risk-free, but the outcome is then useless. Or the law can be formalised as long as we change our expectations of what the law ought to deliver – the easiest, but also the most dangerous option. What we want in reality is something in the middle: a formalisation that preserves the aspects of the law we find interesting or relevant, without distorting the law that it represents too much. For this we need to know who will operate it (and bring their understanding to the task, complementing the machine), in what context and with what aims. "Interesting", "relevant" and "too much" are irreducible human judgements. So we need to ask a different question, one that is also reflected in the ethics principles for legal technology by AI4People. Not: "can law be formalised?" But:

For a given intended application, and given the skills and knowledge of the human who will eventually operate the system, can we formally explicate enough aspects of the meaning of the legal terms that interest us so that the inevitable loss of meaning is not so harmful, to individuals or the justice system, that it outweighs the benefits of automation?

No formalisation is value-free; it always requires normative decisions and a commitment to a specific vision of justice. No formalisation is “provably the right one”, but only ever “good enough for a given objective”. This evaluation needs to consider how the system will operate in practice, and at which points human input and judgement will be required – don’t evaluate legal AI, evaluate the socio-legal systems into which AIs are embedded. Law is based on the notion of contestability. AI applications in the legal domain potentially undermine this contestability. This is the fear of the “black box” society, where intelligible – and hence contestable – human decisions are replaced by modern-day oracles, whose pronouncements can only be interpreted by the high priests of technology.

Our discussion, however, showed a very different form of “black boxing” that may be more difficult to remedy. Hidden in the early stages of the development process, crucial normative decisions are already taken when the law is translated into machine-readable form. To allow contestability of the value judgements and the design decisions that come with them requires:

•  A theory of formalisation, as a precondition for standards governing both the formalisation process and, crucially, its documentation;
•  An understanding of the ethical and legal implications of the choices taken in the formalisation process, which requires subject expertise in law.

If we take the metaphor that underpinned this chapter seriously, we can see the emergence of a framework that could address this issue. Translation studies as a profession has not only developed methods, standards and concepts that allow for evaluating and critiquing translations; it is also engaged in a process of ethical meta-reflection that sensitises practitioners to the ethical implications of their translation choices. A similar professionalisation, including ethics training and appropriate certification, will also be needed for developers of GOFAI legal technology to ensure that we can reap the benefits of automation while respecting the ideal of the rule of law.

BIBLIOGRAPHY
Addady, M. (2016). Meet Ross, the World’s First Robot Lawyer. Fortune. Retrieved from https://fortune.com/2016/05/12/robot-lawyer/.
Allen, L.E. (1956). Symbolic logic: A razor-edged tool for drafting and interpreting legal documents. Yale Law Journal, 66, 833–879.
Araszkiewicz, M. (2013). Towards systematic research on statutory interpretation in AI and law. In K.D. Ashley (Ed.). JURIX, Proceedings of the 26th Conference (pp. 15–24). Amsterdam: IOS.
Ashley, K.D. (1992). Case-based reasoning and its implications for legal expert systems. Artificial Intelligence and Law, 1(2–3), 113–208.
Ashley, K.D. (2002). An AI model of case-based legal argument from a jurisprudential viewpoint. Artificial Intelligence and Law, 10(1–3), 163–218.
Atkinson, K. (Ed.). (2009). Modelling Legal Cases. Proceedings of the Workshop Co-located with the 12th International Conference on Artificial Intelligence and Law. Barcelona: Huygens.
Atkinson, K., Bench-Capon, T. & Bollegala, D. (2020). Explanation in AI and law: Past, present and future. Artificial Intelligence, 289, 103387.
Ausín, F.J. & Peña, L. (2000). Paraconsistent deontic logic with enforceable rights. In D. Batens, Ch. Mortensen, G. Priest & J.-P. van Bendegem (Eds.). Frontiers of Paraconsistent Logic (pp. 29–47). Baldock: Research Studies Press.
Austin, J. (1880). Lectures on Jurisprudence, or, The Philosophy of Positive Law. London: John Murray.
Baker, M. & Maier, C. (2011). Ethics in interpreter & translator training: Critical perspectives. The Interpreter and Translator Trainer, 5(1), 1–14.
Bańkowski, Z. (2001). Living Lawfully. Dordrecht: Springer.
Bańkowski, Z. & Schafer, B. (2007). Double-click justice: Legalism in the computer age. Legisprudence, 1(1), 31–49.
Bench-Capon, T.J. & Coenen, F.P. (1992). Isomorphism and legal knowledge based systems. Artificial Intelligence and Law, 1, 65–86.
Bench-Capon, T., Araszkiewicz, M., Ashley, K., Atkinson, K., Bex, F., Borges, F. & Wyner, A.Z. (2012). A history of AI and Law in 50 papers: 25 years of the international conference on AI and Law. Artificial Intelligence and Law, 20, 215–319.
Binding, K. (1872). Die Normen und ihre Übertretung: eine Untersuchung über die rechtmässige Handlung und die Arten des Delikts. Erster Band, Normen und Strafgesetze. Leipzig: Wilhelm Engelmann.

Borges, G., Wüst, C., Sasdelli, D., Margvelashvili, S. & Klier-Ringle, S. (2023). Making the implicit explicit: The potential of case law analysis for the formalization of legal norms. In G. Borges, K. Satoh & E. Schweighofer (Eds.). Proceedings of the International Workshop on Methodologies for Translating Legal Norms into Formal Representations. Retrieved from https://research.nii.ac.jp/~ksatoh/LN2FRproceedings.pdf.
Branting, L.K., Pfeifer, C., Brown, B., Ferro, L., Aberdeen, J., Weiss, B. & Liao, B. (2021). Scalable and explainable legal prediction. Artificial Intelligence and Law, 29, 213–238.
Brownlee, K. (2011). The offender’s part in the dialogue. In R. Cruft, M.H. Kramer & M.R. Reiff (Eds.). Crime, Punishment, and Responsibility: The Jurisprudence of Antony Duff (pp. 54–67). Oxford: Oxford University Press.
Brun, G. (2003). Die richtige Formel: Philosophische Probleme der logischen Formalisierung. Berlin: de Gruyter.
Brüninghaus, S. & Ashley, K.D. (2003). Combining case-based and model-based reasoning for predicting the outcome of legal cases. In K. Ashley & D. Bridge (Eds.). Case-Based Reasoning Research and Development: 5th International Conference on Case-Based Reasoning, ICCBR 2003, Trondheim, Norway, June 23–26, 2003, Proceedings (pp. 65–79). Berlin: Springer.
Campbell, B. (1970). La Mettrie: The robot and the automaton. Journal of the History of Ideas, 31(4), 555–572.
Chesterman, A. (1997). Memes of Translation: The Spread of Ideas in Translation Theory (Vol. 22). Amsterdam: John Benjamins Publishing.
Coban, F. (2015). Analysis and training of the required abilities and skills in translation in the light of translation models and general theories of translation studies. Procedia – Social and Behavioral Sciences, 197, 707–714.
Cresswell, M.J. & Hughes, G.E. (2012). A New Introduction to Modal Logic. London: Routledge.
de Maat, E., Winkels, R. & van Engers, T. (2006). Automated detection of reference structures in law. In T. van Engers (Ed.). Legal Knowledge and Information Systems (pp. 41–50). Amsterdam: IOS Press.
Deakin, S. & Markou, C. (Eds.). (2020). Is Law Computable? Critical Perspectives on Law and Artificial Intelligence. London: Bloomsbury Publishing.
Diver, L. (2021). Computational legalism and the affordance of delay in law. Journal of Cross-disciplinary Research in Computational Law, 1(1). Retrieved from https://journalcrcl.org/crcl/article/view/3.
Duguid, S., Edwards, L. & Kingston, J. (2001). A web-based decision support system for divorce lawyers. International Review of Law, Computers & Technology, 15(3), 265–279.
Felix, D. (1998). Einheit der Rechtsordnung: Zur verfassungsrechtlichen Relevanz einer juristischen Argumentationsfigur. Stuttgart: Mohr Siebeck.
Fiedler, H. (1976). Automationsgerechte Rechtssetzung im Rahmen der Gesetzgebungstheorie. In J. Rödig, E. Altmann, E. Baden, H. Kindermann, R. Motsch & G. Thieler-Mevissen (Eds.). Studien zu einer Theorie der Gesetzgebung (pp. 666–678). Berlin: Springer.
Floros, G. (2020). Ethics in translator and interpreter education. In M. Zhou (Ed.). The Routledge Handbook of Translation and Ethics (pp. 338–350). Routledge.
Gabbay, D.M. (1992). How to construct a logic for your application. In H. Ohlbach (Ed.). GWAI-92: Advances in Artificial Intelligence, 1992 Proceedings. Lecture Notes in Computer Science (Vol. 671, pp. 1–29). Berlin, Heidelberg: Springer.
Hacker, P. & Passoth, J.H. (2022). Varieties of AI explanations under the law. From the GDPR to the AIA, and beyond. In A. Holzinger et al. (Eds.). xxAI – Beyond Explainable AI. Lecture Notes in Computer Science (Vol. 13200). Cham: Springer. Retrieved from https://doi.org/10.1007/978-3-031-04083-2_17.
Hart, H.L.A. (2012). The Concept of Law. Oxford: OUP.
Hildebrandt, M. (2015). Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology. Cheltenham: Edward Elgar Publishing.
Johnston, B. & Governatori, G. (2003). Induction of defeasible logic theories in the legal domain. In G. Sartor (Ed.). Proceedings of the 9th International Conference on Artificial Intelligence and Law (pp. 204–213). New York: ACM. Retrieved from https://doi.org/10.1145/1047788.1047834.
Kaminski, M.E. (2021). The right to explanation, explained. In S.K. Sandeen, C.W. Rademacher & A. Ohly (Eds.). Research Handbook on Information Law and Governance (pp. 278–299). Cheltenham: Edward Elgar Publishing.

La Mettrie, J.O.D. (1994). Man a Machine and Man a Plant (R.A. Watson & M. Rybalka, Trans.). Indianapolis: Hackett Publishing.
Lenzen, W. (2005). Leibniz on alethic and deontic modal logic. In D. Berlioz & F. Nef (Eds.). Leibniz et les Puissances du Langage (pp. 341–362). Paris: J. Vrin.
Lessig, L. (1999). Code and Other Laws of Cyberspace. New York: Basic Books.
Mackaay, E., Poulin, D., Frémont, J., Bratley, P. & Deniger, C. (1990). The logic of time in law and legal expert systems. Ratio Juris, 3(2), 254–271.
Maxwell, J.C., Antón, A.I., Swire, P., Riaz, M. & McCraw, C.M. (2012). A legal cross-references taxonomy for reasoning about compliance requirements. Requirements Engineering, 17, 99–115.
Mazzarese, T. (1993). Fuzzy logic and judicial decision-making: A new perspective on the alleged norm-irrationalism. Informatica e Diritto, 2(2), 13–36.
McCarty, L.T. (1976). Reflections on TAXMAN: An experiment in artificial intelligence and legal reasoning. Harvard Law Review, 90, 837.
McGinnis, C.N. (2007). Paraconsistency and Deontic Logic: Formal Systems for Reasoning with Normative Conflicts. University of Minnesota ProQuest Dissertations Publishing. Retrieved from https://www.proquest.com/docview/304840273?fromunauthdoc=true.
Merigoux, D., Chataing, N. & Protzenko, J. (2021). Catala: A programming language for the law. Proceedings of the ACM on Programming Languages, 5, 1–29.
Meyer, J.J.C. (1993). Deontic logic: A concise overview. In J.J.C. Meyer & R.J. Wieringa (Eds.). Deontic Logic in Computer Science: Normative System Specification (pp. 3–16). Chichester: Wiley.
Mills, M. (2016). Artificial Intelligence in Law: The State of Play. Retrieved from https://britishlegalitforum.com/wp-content/uploads/2016/12/Keynote-Mills-AI-in-Law-State-of-Play-2016.pdf.
Philipps, L. & Sartor, G. (1999). From legal theories to neural networks and fuzzy reasoning. Artificial Intelligence and Law, 7, 115.
Prakken, H. & Sartor, G. (2002). The role of logic in computational models of legal argument: A critical survey. In F. Sadri (Ed.). Computational Logic: Logic Programming and Beyond (pp. 342–381). Berlin: Springer.
Prakken, H. & Sartor, G. (2015). Law and logic: A review from an argumentation perspective. Artificial Intelligence, 227, 214–245.
Prakken, H., Wyner, A., Bench-Capon, T. & Atkinson, K. (2015). A formalization of argumentation schemes for legal case-based reasoning in ASPIC+. Journal of Logic and Computation, 25(5), 1141–1166.
Priest, G. (2002). Paraconsistent logic. In D.M. Gabbay & F. Guenthner (Eds.). Handbook of Philosophical Logic (pp. 287–393). Berlin: Springer.
Priest, G. & Routley, R. (1982). Lessons from Pseudo Scotus. Philosophical Studies, 42(2), 189–199.
Rissland, E.L., Ashley, K.D. & Branting, L.K. (2005). Case-based reasoning and law. The Knowledge Engineering Review, 20(3), 293–298.
Robertson, B. & Vignaux, G.A. (1993). Probability – the logic of the law. Oxford Journal of Legal Studies, 13(4), 457–478.
Satoh, K., Asai, K., Kogawa, T., Kubota, M., Nakamura, M., Nishigai, Y. & Takano, C. (2011). PROLEG: An implementation of the presupposed ultimate fact theory of Japanese civil code by PROLOG technology. In T. Onada, D. Bekki & E. McCready (Eds.). New Frontiers in Artificial Intelligence: JSAI-isAI 2010 Workshops, Tokyo, Japan, November 18–19, 2010, Revised Selected Papers (pp. 153–164). Berlin: Springer.
Schafer, B. & Bromby, M. (2005). Wie Tajomaru seine NemeSys fand: Expertensysteme zum Augenzeugenbeweis. In B. Schünemann, M.-T. Tinnefeld & R. Wittman (Eds.). Gerechtigkeitswissenschaft (pp. 259–277). Berlin: Berliner Wissenschaftsverlag.
Schafer, B. et al. (2020). Legal services industry. In AI4People 7 AI Global Frameworks (pp. 171–209). Retrieved from https://ai4people.eu/wp-content/pdf/AI4People7AIGlobalFrameworks.pdf.
Sourdin, T. (2018). Judge v robot? Artificial intelligence and judicial decision-making. University of New South Wales Law Journal, 41(4), 1114–1133.
Sparkes, M. (2023). AI will advise a defendant in court. New Scientist, 257(3421).
Susskind, R. (2008). The End of Lawyers (pp. 121–123). Oxford: Oxford University Press.

Thomson, A. (2016). French eighteenth-century materialists and natural law. History of European Ideas, 42(2), 243–255.
Ulenaers, J. (2020). The impact of artificial intelligence on the right to a fair trial: Towards a robot judge? Asian Journal of Law and Economics, 11(2).
Waddington, M. (2021). Rules as code. Law in Context, 37, 179–186. Retrieved from https://journals.latrobe.edu.au/index.php/law-in-context/article/view/134.
Waddington, M. (2022). Rules as code: Drawing out the logic of legislation for drafters and computers. In C. Stefanou (Ed.). Modern Legislative Drafting – A Research Companion. Routledge (forthcoming). Retrieved from https://ssrn.com/abstract=4299375 or http://dx.doi.org/10.2139/ssrn.4299375.
Walton, D. (2005). Argumentation Methods for Artificial Intelligence in Law. Berlin: Springer.
Zeleznikow, J. (2019). Reflections on my journey in using information technology to support legal decision making – from legal positivism to legal realism. Law in Context, 36(1), 80–92.
Zeleznikow, J. & Hunter, D. (1995). Reasoning paradigms in legal decision support systems. Artificial Intelligence Review, 9, 361–385.

6. How not to get bored, or some thoughts on the methodology of law and technology
Przemysław Pałka1 and Bartosz Brożek2

1. INTRODUCTION
Do you remember reading a paper, or listening to a talk, that changed the way you think about the world? A contribution that rigorously put into words something you intuited but could not name, or that questioned an assumption you considered a dogma, or that drew your attention to something you never noticed before? That is what excellent scholarship does. Law and technology features many such works.3 Sometimes, they empirically demonstrate a striking characteristic of the world, like Latanya Sweeney’s Discrimination in Online Ad Delivery (2013) or Yannis Bakos and colleagues’ Does Anyone Read the Fine Print? (2014). Sometimes, they clearly explain the intricacies of technology, using language and sensitivities illuminating for lawyers, like Harry Surden’s Machine Learning and Law (2014) or Joshua Kroll and colleagues’ Accountable Algorithms (2017). Sometimes, they document the unexpected roles that the law has played in shaping the sociotechnical reality surrounding us, like Julie Cohen’s Between Truth and Power (2019) or Katharina Pistor’s The Code of Capital (2020). Yet other times, they draw attention to a regulatory challenge and discuss a solution that sparks strong emotions or even opposition, like Lawrence Solum’s Legal Personhood for Artificial Intelligences (1992), Joshua Fairfield’s Virtual Property (2005), or Omri Ben-Shahar’s Data Pollution (2019). Finally, sometimes they provide a conceptual framework for making sense of the world, like Lawrence Lessig’s The Law of the Horse (1999) or Shoshana Zuboff’s Surveillance Capitalism (2019).4 Though there are many kinds of questions that can be approached in many different ways, a common feature of excellent works is that they leave a mark.
1  The research leading to these results has received funding from the Norwegian Financial Mechanism 2014–2021, project no. 2020/37/K/HS5/02769, titled “Private Law of Data: Concepts, Practices, Principles & Politics.” For the incredibly helpful feedback, huge thanks go to Nikolas Guggenberger, Olia Kanevskaia, and Thomas Streinz.
2  The research was conducted in the framework of the project “The Legal Imagination,” financed by the Polish National Science Center (Grant No. 2021/43/B/HS5/01509).
3  The following list is obviously very subjective and non-exhaustive, drawing on the personal experience of one of the chapter’s authors and his perception of what works left a mark on his communities of discourse. Many others could, and deserve to, be mentioned. Some were omitted as they were written by people the author considers his friends and, given that friendships are not always public, he felt uneasy hailing their writing as excellent. Many others were not included given the author’s ignorance. The other author has always struggled to be hugely impressed by anything, but readily admits that the list includes scholarly works of the highest standard.
4  Note that for conceptual works to be excellent, they don’t even have to be judged by the community as “correct” in the end, as it is often through sparking disagreement that they move the scholarship forward.


However, law and technology scholarship (as, indeed, any scholarship) also contains numerous predictable, repetitive, or unoriginal works. Do you remember attending a conference where, upon seeing the talk’s title, you knew exactly what the speaker would say?5 Or do you recall having to review a paper that added nothing new to our knowledge, even if it was factually and logically correct? Or, even worse, do you remember that draft that was simply factually or logically flawed? Such output testifies to wasted time and potential, suboptimal individually and socially. Why would one spend months writing a paper that adds or changes nothing? Why would the government, or anyone else, fund such work? Granted, for statistical reasons, there must be a huge gray background for the few colorful dots – innovative ideas – to be visible at all. Moreover, there is no recipe for coming up with novel and brilliant ideas. However, some general methodological precepts can help avoid mistakes and think more creatively, even if they do not guarantee success. They include, we argue, guidelines for asking good questions – careful formulation and testing the validity of the question’s assumptions – as well as some strategies for offering answers: caution when following one’s intuition, exemplification, and variation (Brożek, 2016). These precepts – which we will explore in detail below – constitute a remedy for some natural inclinations of the human mind that make creative and innovative thinking difficult. Our minds employ various epistemic safety mechanisms: we tend to be cognitively conservative, gravitate toward known solutions, and “freeze” on the first idea that fits (Kruglanski, 1989). Moreover, we all experience the need to be recognized as part of a group, making us prone to herd behavior and social pressure (Kameda & Hastie, 2015; Shiller, 1995; Sunstein & Hastie, 2014; van Gestel & Micklitz, 2014). Further, we all fall prey to numerous biases and mental shortcuts, sometimes useful yet often standing in the way of creative reasoning (Kahneman, 2011). These tendencies are not characteristic of law and tech considerations only – they inhibit creativity in all domains, from everyday thinking to the most sophisticated mathematical analyses. However, law and tech is a sphere of reflection where these tendencies are clearly visible, since the problems one encounters here pertain to rapid social developments and are often quite unexpected. Under such circumstances, cognitive safety, group pressures, and other biases have a perfect environment to flourish, and to counteract these forces one needs a well-equipped methodological toolbox. Below, we will describe the basic tools belonging to this toolbox. One important caveat: in this chapter, we focus solely on the cognitive mechanisms operating in the human mind, and not on the institutional constraints and incentive structures that many (especially young and underprivileged) scholars face. We acknowledge that many researchers experience the pressure to publish and, in some circumstances, might be expected to comply with the ways of thinking specific to a given community, in order to be accepted as its members.6 We admit it could be much easier to experiment and go about questioning paradigms or pursuing risky projects once one is tenured, recognized, and on a comfortable payroll.
Moreover, we do realize that in judging what scholarship has proven to be “excellent,” one must be aware of hindsight bias and of the fact that it is not just the merits of the papers but also the authors’ ability to “market” their own work that play a role. However, keeping this
5  During a conference made up mostly of such presentations, one of us drafted a joke blogpost capturing the approach: https://przemyslaw.technology/2018/11/30/how-to-write-a-law-and-technology-paper/.
6  See Hans Micklitz’s chapter in this Research Handbook.

fundamental inequality in mind, we believe that everyone can benefit from the considerations we offer in this chapter. Especially when the game is harder for you, or even when the odds seem stacked against you, a critical reflection on your own creative process can prove fruitful. Hence, it is not the goal of the chapter to judge anyone’s excellence. On the contrary, it is simply to assist those trying to think outside the box.

2. ASKING QUESTIONS
A good paper has a clear argument; or at least that is what supervisors tell their Ph.D. students (Andrews, 2007; Parry, 1998). But what is a “clear argument”? An argument is simply a chain of reasoning which provides support for a thesis. It is “clear” if it is logically valid and each of its steps is made explicit or can otherwise easily be reconstructed.7 However, this is not sufficient. One can provide a clear argument that ‘p is equivalent to p’ or that according to Polish law murder is punished with imprisonment. These arguments would not generate good papers. What is missing? Rather than providing an overview of things that came to the author’s mind when thinking about a subject, e.g., “technology X and the law (Y),” a good paper should have a clear argument that answers an interesting question or addresses a genuine problem. The question does not even need to be mentioned explicitly, but from a paper with a clear structure, it can always be reverse-engineered. As with all meaningful scholarship, a clear argument – and in consequence a good paper – begins with a genuine problem, a question that is worth answering. Now, what makes for an interesting problem is the million-dollar question here. The answer will depend on the state of the art, on what social needs seem pressing, and on where a given community of discourse seems stuck. Nevertheless, when trying to formulate such a problem or question, the considerations below might prove helpful.
2.1 The Varieties of Questions
Generally, in scholarship, one can distinguish between positive questions (what is?) and normative questions (what should be?) (Kennedy, 1985; Parisi, 2004; Posner, 1981; Smits, 2012).8 For example, “How does machine learning work?” is a positive question one will answer using different methods than a normative question, like “Should the European Union regulate systems based on machine learning?” One can also speak of hermeneutic questions: how to understand a linguistic expression (e.g., “Vehicles are not allowed into a public park”) or some other symbolic representation (e.g., an emoticon sent in a text message) (Brożek, 2016).9

7  However, philosophers and other scholars do debate the meaning of all these concepts; see, e.g., Davies (2011) and Walton (1990).
8  See also Solum at: https://lsolum.typepad.com/legal_theory_lexicon/2003/12/legal_theory_le.html
9  One may argue that hermeneutic questions are ultimately reducible to either positive ones (when the interpreted message is a description of something) or normative ones (when the interpreted set of symbols pertains to what should be the case). However, there are also good evolutionary reasons to claim that all questions are hermeneutic, and their differentiation into positive and normative is only an ex-post theoretical reconstruction. Cf. Brożek (2016, pp. 118–119).

Positive questions can be further divided into descriptive ones, e.g., “How many adults in Poland are active Facebook users?” and explanatory ones, e.g., “Why do fewer, or more, adults in Poland use Facebook compared to the EU average?” Normative questions, in turn, encompass evaluative questions, e.g., “What are the pros and cons of wide social usage of social media?”, and prescriptive questions regarding the goals, e.g., “Should the government try to combat the spread of disinformation on social media?”, and regarding the means, e.g., “How can the government best fight fake news online?”. Of course, a research paper might – and often does – address several types of questions simultaneously, as prescriptions presuppose answers to positive and evaluative questions. Notably, one can ask the same kind of question regarding different categories of objects: legal norms, discourses, technologies, social practices, etc. For example, “Are there any rules in Poland governing placing AI systems on the market?” is a descriptive question about the law, “How do facial recognition systems work?” is a descriptive question about technology, and “How many Polish businesses use facial recognition systems on their premises?” is a descriptive question about social practices. Similarly, “Is financing social media through data-driven targeted advertising good for consumers?” is an evaluative question about social practices, “Is ‘surveillance capitalism’ a theory properly explaining corporate data collection and usage?” is an evaluative question about a discourse, and “Can existing rules of consumer law protect consumers from data-driven manipulation by targeted ads?” is an evaluative question about the legal rules. Again, a research paper in law and technology will often address several types of questions regarding several kinds of objects. This matters since a clear understanding of what type of question one is dealing with (and what object it concerns) influences the way the question should be answered. In particular, positive and normative questions concerning law and extralegal reality may require different types of expertise. A legal scholar will be well-equipped to approach some normative questions (e.g., those pertaining to the meaning and relationship of legal norms) and some positive questions (e.g., those pertaining to the validity and content of legal provisions), while in responding to some other positive questions (e.g., how do blockchains work?) or normative questions (how to assess a given law under various conceptions of justice?) they should usually rely on the knowledge provided by experts in other domains. These ideal-type distinctions sometimes get complicated by the reality of legal scholarship. For example, in some communities of discourse, the distinction between positive and normative questions might become blurred. There are legal scholars who see their role as providing the “scientific interpretation” of the existing legal provisions (the so-called “de lege lata” scholarship). Within this view, a legal expert would analyze hard cases in the abstract and subject their method of interpretation to peer scrutiny, on the assumption that the legal method is somehow akin to the scientific method.
For example, it is not uncommon for legal academics in some communities to address questions like “What is the legal status of cryptocurrencies under the law of country X?” even if (or precisely because) the law does not explicitly specify that status. The primary goal of such a contribution would be neither to assess whether the law is optimal nor to propose any changes but to argue how a judge should decide a hypothetical case given the law as it is. Depending on how firmly one believes in the “scientific” nature of legal interpretation, one could see such questions either as normative: “How should the law be interpreted?”, or positive: “What is the actual content of the legal norm when one properly interprets the law?”

Some readers might find the notion of the law being anything like a science perplexing; it is not our intention in this chapter to engage with this problem.10 However, what we do want to point out is that the kind of questions one asks, by default, is often influenced by the habits, assumptions, and practices of the community of discourse one is a part of. In certain national or professional contexts, it simply does not cross a legal scholar’s mind that their job is to propose a change of regulatory paradigm or to criticize the political choices made by the legislature. “That is what philosophers do, or activists, not legal scholars,” one might think. This is a reflection of some tendencies deeply rooted in human biology. As cognitive scientists point out, in our decision-making processes we are usually driven – and often misled – by our social identity and groupthink (Turner & Pratkanis, 1998; Van Bavel et al., 2020). The research shows that to identify with a group (e.g., of legal scholars working in the same legal tradition) and experience the feeling of belonging, one is prone to judge situations and make decisions based on group interests rather than objective reality (Jönsson et al., 2015; Manstead, 2018; Pospíšil & Macháčková, 2021). The dynamics of making choices within a group are particularly visible in the phenomenon of groupthink. The term itself was coined in the 1970s by Irving Janis (Janis, 1982). Since then, much evidence has been amassed to identify the mechanisms responsible for making (bad) decisions by groups of people. They include information signals (refraining from expressing one’s view out of the conviction that others have better knowledge regarding the discussed issue) and reputational pressure (refraining from expressing one’s views out of fear of anger or dissatisfaction from others) (Sunstein & Hastie, 2014). This shows that a particular style of doing legal scholarship, and the kinds of questions that are addressed in the tradition, are not only outcomes of some accidental historical choices but are strengthened by the operations of fundamental mental mechanisms. As a result, a legal scholar encountering a novel socio-technological phenomenon – e.g., social media moderating content posted by politicians, consumers directly investing in cryptocurrencies, AI systems replacing white-collar workers, etc. – will often, by default, approach it from a specific perspective, typical of their community of discourse. Some, as indicated above, will write as if they were a judge who must correctly solve a hard case given the law as it is. Others, and this might be the prevailing approach in anglophone law and technology scholarship, will look at such a phenomenon from the policymaker’s perspective. The question “Should we regulate this technology, and if yes, how?” often animates law and tech papers. A typical paper of this kind would open with a description of how a given technological advance has changed social relations and practices, scrutinize the benefits and the dangers the change poses, sometimes look at whether the existing regulations might be sufficient to mitigate the harms, and propose some regulatory intervention. There is a high chance that the readers have written, or are thinking of writing, a paper following this exact structure. In some communities of discourse, this structure is a norm. Imagine, however, someone being perplexed by the idea of writing a paper like this.
One could wonder, for example, what criterion should be applied to decide what constitutes “benefits” and what “harms,” or how to balance them. For instance, generative AI systems like DALL-E 2 or ChatGPT could lead to many copywriters or illustrators losing their jobs, yet at the same time lower the costs of running countless businesses. Would such a mass job loss be a “harm”? Or should one see it simply as a change that will free up creative energy for

For a reconstruction of the debates concerning the problem, see Smits (2012).

other purposes? Can the law give an answer? And if not, how can one ensure that it is not the writer’s political convictions influencing the assessment? Interpreting laws that already embody the political choices made by the legislature – one might think – is what legal scholars can do, but what gives them the right to advocate for any choices with the label “academia” giving them an aura of expertise? Possibly, one way to move the scholarship forward is to ask a kind of question that other members of one’s community of discourse do not ask. For a black-letter, doctrinal scholar, this could mean asking how the law should regulate a certain phenomenon (given a specific normative perspective) and only then checking if the existing law is delivering optimal solutions. For a policy-oriented scholar, this could mean abandoning the quest to propose the regulatory intervention and scrutinizing how the existing law would apply to a case at hand. This could not only feel fun, challenging, and liberating; it could also lead to previously unnoticed insights. Importantly, what is a paradigm-shifting perspective in one community might be the standard approach in another, or vice versa. Crucially, however, one should be mindful of the vast array of perspectives available, especially when writing in a community used to only one, predominant approach. And of course, the two perspectives discussed above – the judge-like “How should the law be interpreted?” and the policymaker-like “Should this technology be regulated and how?” – are not the only two available ones. Law and technology, seen globally, features very diverse types of contributions. Some ask how technology can assist individuals or the government in some legal tasks, like the attempts to automate the oversight of consumer law (Lippi et al., 2019). Others ponder the role that the law has played in the emergence of a given socio-technological reality (Cohen, 2019). Yet others use technology as a lens to scrutinize the implicit assumptions made in the existing law (Mayson, 2018). All these perspectives are valid and can be valuable; all come with their limitations and traps. However, one danger common to all kinds of questions in law and technology (and beyond) concerns the validity of their assumptions. Let us now have a look at this problem.
2.2 Beware of Assumptions
Law and technology scholarship, most of the time,11 requires knowledge and expertise in areas other than law: in engineering, philosophy, sociology, etc. For example, when asking, “How should the European Union regulate the issuing of crypto assets?” the scholar needs knowledge not only of what the law already is or what regulatory options exist but also of how blockchain works and what the social practices involving this technology are.12 This would seem obvious. Yet, it is not uncommon for papers in law and technology to make questionable normative assumptions or even incorrect factual assumptions. One way to avoid making them is to examine the assumptions of one’s question.

11  There will be papers dealing solely with thought experiments, e.g., “If there existed sentient robots, should we grant them legal personality?” Such papers can provide interesting insights into legal scholarship based solely on an imagined reality; however, their authors (or readers) should be mindful not to start treating such thought experiments as if they were genuine regulatory challenges (which sometimes happens).
12  On the importance of understanding “technology” not only as the inventions and their features but also as the individual and collective practices of using them, see Balkin (2015).

Let us analyze the assumptions of the question: “How should the European Union regulate the issuing of crypto assets?” First, there are the question’s presuppositions, i.e., statements that must be true for the question to be meaningful at all, such as “crypto assets exist and can be issued.”13 Yet, while posing our question one assumes much more than simply existence: concrete features that crypto assets have, particular motives of those who issue and purchase them, etc. One also, most probably, tacitly makes predictions about the future.14 Psychologically, this is where one needs to be mindful of the need for cognitive closure. Let us imagine a scholar asking such a question. They hear, in the news, about bitcoin’s value rapidly rising or about dozens of coins being issued and invested in. Some people quickly become rich, some people lose money, and it seems like the financial system might be in danger. What is thrilling about such developments is not only the potential benefits and pitfalls that crypto assets might bring about but also their perceived novelty. Put simply: one does not fully understand what is going on.15 The natural next step is to try to understand the intricacies of the technology and its social use. People, especially scholars, do not like not understanding things. This makes us uncomfortable. Hence, our scholar will probably google “How does crypto work?” and come across several videos, blog posts, and maybe scientific articles. The last ones could include a myriad of terms one does not know and, to the horror of many readers, some mathematical equations. So, one will quickly close them, forget about them, and go to YouTube. There, in a 15-minute-long video, an excited young person will “explain blockchain” to them. Now they feel like they understand it, “know” enough to write an explanatory paragraph at the beginning of the article, and proceed to the legal analysis they feel comfortable with. Obviously, our scholar does not know or understand enough to engage in serious policy-oriented scholarship.16 But they might feel like they do. They “freeze” at the first explanation that fits within their worldview and make it their own, one very difficult to undermine or revise. This is the danger. There are works in law and technology that do not deal with reality but with one’s simplified and incorrect idea of it. While a paper based on such flawed assumptions may still theoretically make an interesting contribution, such an outcome would result from luck. Hence, one needs humility and time to properly understand the socio-technological phenomena one wants to suggest regulating. One should also remain open to revisions of one’s beliefs, even if such openness is opposed by the powerful forces of the need for cognitive closure and cognitive safety. A good way to achieve it is to engage in interdisciplinary collaboration with scholars from other disciplines – engineers, computer scientists, marketing experts, etc. – who possess a much better understanding of the analyzed phenomenon than lawyers do. Yet, such partnerships are less common than one would expect. The problem of questionable factual assumptions can be found in many areas of law and technology. Such assumptions result not only from the need for cognitive closure but also from societal and group pressures, as well as from some heuristics and biases. For example, when writing about
13  The question also presupposes the existence of the European Union.
Albeit a fascinating problem in social ontology, we leave it aside for the purposes of this chapter’s argument.
14  Most of the scholarship about law and blockchain, given that the latter is not yet widely used by anyone, is based on the assumption that soon it will transform the world.
15  The same is true of all the other technologies: How does a self-driving car know that a pedestrian has entered the road? How does ChatGPT write poems?
16  Or doctrinal scholarship, for that matter.

targeted advertising, scholars tend to assume that the algorithms for ad delivery have the power to influence the behavior of a consumer or a voter in a way that goes beyond their consciousness. Such systems, allegedly, strip individuals of their autonomy and agency, and so obviously must be regulated. This conviction animates the entire “surveillance capitalism” literature. But do they? A growing body of empirical evidence suggests that ad delivery algorithms are much less effective than this community of discourse assumes (Hwang, 2020). Interestingly, if one compares the discourse to lived experience – how many people do you know who bought stuff they saw online, only to wonder later how it happened? – one would notice the exaggeration. But, sometimes, paradoxically, we like thinking that there is this terrible danger out there that our scholarship will solve, either because that is expected of us by our group or because we rely on some heuristics (such as representativeness or accessibility heuristics) which lead to bias. Many areas of law and technology make such assumptions. We must regulate how self-driving cars choose whom to sacrifice when an accident is imminent! But are they actually being programmed to make such choices? We need to regulate the rights of sentient robots! But are there such robots out there? We need to regulate the consumer use of smart contracts! But are consumers actually using smart contracts? We need to regulate fridges ordering eggs when one runs out of them! But are there such fridges at all? Notice how many of these areas, when seen with more nuance, do pose real social problems in need of scholarly reflection. Maybe targeted ads are not magic spells that turn people into purchasing machines, but the very fact that some advertisers believe this assumption gives companies like Google or Meta an incentive to design products like YouTube or Instagram to be addictive (Pałka, 2021b; Rosenquist et al., 2021; Zakon, 2020). Maybe self-driving cars are not constantly choosing whom to kill, but the social perception of them not being accountable hinders the development of potentially safer or more environmentally friendly means of transport. To ask such questions, however, one needs to spend much more time in the uncomfortable space of “I do not yet understand the technology that fascinates me” and “maybe the problems that first come to my mind are not what society actually struggles with.” However, it is not just unwarranted factual assumptions that law and technology scholars need to be aware of. Let us return to the question: “How should the European Union regulate the issuing of crypto assets?” What else is assumed here? This question – prescriptive about the means – assumes an affirmative answer to the question prescriptive about the goals, namely that the European Union should regulate crypto assets. But should they be regulated at all? And why should the Union regulate them and not its Member States? Very importantly: we, the authors of this chapter, are not saying that technologies should not be regulated. We are not advocating for laissez-faire. What we are saying, however, is that the need for regulation is not always self-evident. It is often assumed out of habit, due to group pressures or heuristics and biases. In many communities of discourse, such answers tend to be assumed by default.
If there “is” a new technology, and it poses some potential benefits and risks and is not yet regulated, it should be regulated. We tell ourselves many stories about why this is obvious. For example: we have had a laissez-faire approach to the internet since the 1990s, and look where it got us! Or: unless we provide the legal frames for the socio-technological developments, they will not happen.17 However, for the scholarship
17  This assumption, contradictory to the former, drives the “data intermediary service” or “data altruism” ideas in the European Union’s Proposals for the Data Governance Act and Data Act.

to truly move forward, such assumptions must also be critically examined. Notice how questioning the dogma does not necessarily need to lead to its rejection. One might, after careful analysis, conclude that, yes, indeed, technology X needs to be regulated. Yet, concluding this puts a scholar in a different position than simply assuming it. Moreover, some legal scholars – especially those used to commenting on the provisions of codes and statutes – might feel much more comfortable once there is a text to analyze. In Europe, for example, various groups like to propose “model rules” or “guiding principles” for novel socio-technological phenomena.18 Such outputs, styled to look like statutes, assume not only that technology should be regulated but also what the goals of the regulation should be and how to achieve them best. Whether they advance scholarly understanding of reality or influence the policymakers remains an open question. However, they bring comfort to those legal scholars who like to comment on rules. The cognitive safety is achieved, as is the illusion of understanding. But maybe genuine insights would come from sitting with the intellectual discomfort. Finally, notice how the question “How should the European Union regulate the issuing of crypto assets?” presupposes that it should be the European Union, not the Member States, that issues the novel rules. Again, many communities of discourse simply assume this, through group pressures and complex groupthink. Some will say that small Member States cannot fight the power of Big Tech firms like Google or Meta. Others maintain that uniform law is obviously better, as it guarantees equal rights for the European Union’s residents and lowers compliance costs for businesses. And maybe these statements are true. But maybe, in some cases, they are not. The European Union’s legislative process has its drawbacks – it is painfully slow and heavily lobbied by corporations (just as it is cheaper to comply with one set of rules rather than 30, it is cheaper to lobby one legislature). Moreover, defaulting to the European Union assumes that lawmakers know what they are doing. Notice how letting the Member States figure out how to regulate social media, or AI, or anything else, on their own, also has advantages. It allows us to experiment with various approaches and compare their effects. It makes it easier to roll back regulations that seemed great but proved wrong. Again, let us be clear: we, the authors of this chapter, are not against the European Union’s regulation of technology. We merely point out that one should not automatically assume that this is the best course of action. Posing an interesting question, one embodying a genuine problem, is no easy task. One needs to know what kind of question is being put forward and what the assumptions behind it are. In erotetic logic, i.e., the logic of questions, it has been analyzed how sets of sentences (e.g., a set of one’s beliefs) evoke a question (Wiśniewski, 1995). The kinds of questions we ask are therefore logically determined (at least to an extent) by the beliefs we have, both factual (descriptive, explanatory) and normative (evaluative, action-guiding). These beliefs, in other words, form a set of assumptions behind the question we pose.
It is therefore crucially important not only to consider the question itself, but also the set of assumptions behind it, since the beliefs that form those assumptions may easily be false or unfounded, leading to asking the wrong kind of questions. This is difficult for the following reason: the inertial forces that underpin the operations of our minds have been evolutionarily shaped to select cognitive safety and in-group identity instead of novelty and innovation. However, we can counterbalance those

18  See, for example, https://www.principlesforadataeconomy.org/

How not to get bored, or some thoughts on law and technology  91 forces through understanding of how our minds work and through the resulting methodological awareness.

3. LOOKING FOR AN ANSWER
What is the purpose of legal scholarship? Why do we pay academics to study, read, and write about the law, instead of relying (solely) on other kinds of experts: practitioners, governmental agencies, consulting firms, etc.? This is a profound question that we do not purport to answer; nor does it necessarily have one answer or, for that matter, a correct answer. However, one could argue that each legal scholar tacitly assumes an answer and that this answer could be discerned from the way they write. What are the goals of the paper you are writing right now? Not the mundane goals of securing one’s promotion, obtaining or closing a grant, increasing one’s fame and recognition, etc., but the social goals? Do you want to solve a problem you identified, or propose an interpretation that a judge will adopt or the policymakers will enact? Do you want to promote the values you believe are good for the people? Or the solutions you and yours will benefit from? Do you want to change the way others think about the world or to increase the understanding of some socio-technological phenomenon? The readers might find it perplexing that we open the section on answering questions with several questions left unanswered. Yet, we believe that the first step in approaching the problem – whatever the problem is, and for whatever reason one chooses to address it – is to reflect upon the endeavor itself. To get out of the habitual ways of thinking, to notice the herd behavior and peer pressure, and to acknowledge the limitations of one’s mind.
3.1 Variation as a Method
One way to come up with a novel contribution is to play with several possible answers to a question, especially those that intuitively strike us as wrong. Consider an example. A problem widely discussed in law and technology scholarship is the liability for harms caused by AI systems (Cauffman, 2018; Erdelyi & Erdelyi, 2021; Vladeck, 2014). Who should be held accountable if a self-driving car causes a crash, or a chatbot spreads defamatory statements about some person, or the (in)famous autonomous fridge ends up ordering 1000 eggs when its owner only needs 10? Note how this problem can be approached both from the judge-like perspective of “Who will be liable for such harms, given the law as it is?” and from the policymaker-like perspective: “Who should be liable – do we need to change the law?” Due to the way our minds function (Brożek, 2019), upon seeing such a question, we already have some intuition regarding its possible or preferable answer. Intuition is a mechanism that works at the unconscious level. Through years of experience, one’s mind embodies (acceptable) reactions to typical problems one encounters. Those reactions (a solution to a problem) come “as if from nowhere”: they are not generated through painstaking reasoning with distinct argumentation steps, but are produced by one’s unconscious mind. Intuition is a heavy-duty tool: it operates constantly. Whenever one faces a problem – a practical issue that interrupts a daily routine, a moral dilemma, or a theoretical question pertaining to law and tech – one’s intuition provides an answer, e.g., that self-driving cars (as they are cars) should be governed

by the same principle of strict liability as regular cars, with mandatory insurance payable by the owner. Such intuitions are powerful tools – they are mental shortcuts allowing one to tap into one’s experience in dealing with various problems, an experience resulting from valuable prior reflection. But they should also be approached with caution. One should remember that intuition is well-suited to deal with typical problems, similar to ones already (and successfully) solved in the past. The danger is therefore that intuitions might overfocus on the similarities, making one overlook the critical differences between the past problems and the present challenge. Hence, first, it is always prudent to acknowledge one’s intuitive judgment. Second, a good tool to test one’s intuition is to examine the alternatives. Let us come back to the question about liability for AI harms. What are the possible answers, even the absurd ones? Maybe no one should be liable? If it’s a machine “taking decisions,” isn’t this a little bit like an act of God? Isn’t getting hit by a self-driving car similar to getting struck by lightning? Or, maybe, the government should cover the damages? As we all want innovation to thrive, and people to get compensated for harm, isn’t this a prime example of a public good that should be funded publicly? Or maybe the ad industry, or the science-fiction writers, should pay? They are the reasons why people have developed a preference for using such technology in the first place! There are many more options than risk-based or fault-based schemes, with various possible insurance mandates, that one could hypothetically choose from. Notice that many such answers will prove flatly unacceptable. However, the process of explaining why such answers are inappropriate can be a source of novel and valuable insights, not only about the question at hand but about the law in general. Is it morally just that people should not receive any compensation for an act of God? Is incentivizing private insurance the best way to go? Or, empirically, is it true that persons are not de facto protected from harm caused by an act of God? When there is a flood or a hurricane, the government does come in with public funds. Why should the government pay for reconstructing a house destroyed by a flood and not by a single lightning strike? Further, should the entities responsible for developing preferences bear some liability for harm caused by persons acting upon these preferences? These kinds of questions come up only when one has to answer why a solution that seems weird on its face is wrong. Notice also how playing with various answers makes explicit the implicit regulatory goals of the existing or potential legal rules. The answer will differ depending on what one sees as the regulatory objective: supporting innovation, creating an incentive structure for an efficient distribution of risk, compensating all the harmed, specifically protecting the vulnerable, etc. Of course, the law realizes several objectives simultaneously (Pałka, 2021a), though the way it balances or prioritizes them might be different and unobvious (Calabresi, 1964). Further, notice how playing with different possible solutions, bringing to the fore different possible regulatory objectives, might lead one to critically reexamine the question itself.
While thinking about AI systems, one might intuitively arrive at the question of liability: as there are new kinds of “agents” out there, acting “autonomously” beyond the direct control of any person, and private law assumes such control, we (seemingly) have a problem of disconnect. Yet, once one digs deeper, thinking about the regulatory goals of choosing one liability scheme or another, one might realize that the question is, in fact, not what one truly cares about. Imagine, for example, the following reasoning. Suppose one takes as the normative premise that the legal system should promote innovation yet make sure that people are not hurt. In that case, one quickly realizes that rules of liability are only one among many possible tools to

achieve this regulatory goal. Maybe a more effective means would be some sort of ex-ante certification scheme, or re-designing the environments in which AI systems operate. Such a mental exercise can take place regarding any question one poses. The first question one asks might only imperfectly capture the intuitions of the scholar. And it is through applying variation – playing with different, even absurd, possible answers, and spelling out why exactly they fail – that one works toward perfecting the question. Finally, there is one more methodological precept for answering questions, namely exemplification. Notice how at the beginning of this section, right after introducing the problem of liability for damages caused by an AI system, we followed it with three more specific questions: “Who should be held accountable if a self-driving car causes a crash, or a chatbot spreads defamatory statements about some person, or the (in)famous autonomous fridge ends up ordering 1000 eggs when its owner only needs 10?” If you wish, spend a while confronting the intuitions you had while reading about the possible liability schemes for AI systems (in the abstract) and the intuitions you have while reading these specific questions. Are they the same, or do you feel some kind of disconnect? Many legal scholars like general and abstract rules. They are familiar and hence cognitively safe. Yet, the world these rules govern is specific and concrete. The reality is messy and knowable only to some extent. The principle of exemplification invites a scholar to continuously switch between general statements and specific examples (Brożek, 2019) to ensure that the reasoning accounts for both simultaneously. Think about it: is it obvious that the liability rules for an AI-powered self-driving car causing a crash, an AI-powered chatbot defaming a corporation, or an AI-powered smart fridge placing an erroneous order should be the same? Is it obvious that the whole regulatory scheme for these systems should follow the same logic or structure? By no means! These examples deal with very different legally protected goods (human life, good name, household budget) and, if a human caused such harms, they would be governed by very different areas of law (tort and traffic law, freedom of expression and tort, and consumer and contract law, respectively). We do not have one liability scheme for all kinds of actions undertaken by humans; that would be absurd. So why would we have one for all AI systems? One reason why a scholar might consider such an idea proper is that, as long as one stays on the general and abstract level of “AI systems causing harm,” the examples one imagines or intuits remain simple. But the world is not simple. And one way to account for the complexity of the reality the proposed rules would have to govern is to apply not only variation but also exemplification. Again, this exercise can be undertaken regarding any question. How should contract law approach smart contracts? It depends on whether you mean a completely on-chain transaction regarding some minor coins or combined on- and off-chain operations working with conditions and effects outside the blockchain. Should AI-generated works enjoy copyright protection? It depends on whether the systems producing them were trained on specific copyright-protected sources or on the public domain, the amount of money invested in their production, etc.
In the real world, there is no such thing as “the AI system causing damage,” “the smart contract,” or “the AI-generated work.” There are “only” thousands of concrete cases that might, though do not have to, be governed by the same general rules. For all these reasons, while looking for an answer to a question, the principles of variation, exemplification, and caution toward one’s intuition are helpful. These are by no means all the methodological precepts out there – on the contrary – but in the chapter authors’ experience they tend to be underused. But what should count as a valuable answer?

3.2 Variation as an Answer

What is the comparative advantage of academia over other institutions thinking about law and technology, like private law firms, policymakers and their think tanks, or judges? Are academics smarter? Probably not. Are they less biased? Ideally so, though history teaches that many influential professors did use their position as scholarly authorities to advance their own political preferences.19 Do academics have better resources? Well, not in terms of money, but here the answer depends on what one considers a resource. Academics have the time.
19  To mention just the most famous (and dead) ones: Milton Friedman, John Rawls, or Ronald Dworkin.

Academics, by the design of the profession, can be wrong. A scholar can spend several years developing a theory only to conclude that the idea was a bad one, and this can still be a valuable lesson and time well spent. One can write an article, publish it, get criticized, and a year or two later conclude that “indeed, I was wrong.” And every participant in this activity – those who write, those who criticize and write the replies, those who read – will come out wiser. By contrast, a judge must decide the case at hand, and, if human freedom or property is at stake, they had better get it right. Policymakers must solve a problem now, and, if they get it wrong, they will be held accountable at the next election. Law firms need to represent their clients in the present, and they had better choose a winning strategy, or they will lose their position in the market and future contracts. “Practitioners,” broadly understood, have a strong incentive not to err, as the short-term stakes are very high. Academics do not. This difference, by the way, can be one of the explanations for the problem the chapter opened with, i.e., the fact that law and technology, on top of many excellent contributions, also features some works that are not great. Yet, this state of affairs is also a great opportunity. It allows academics to think about things that are risky and pursue projects that might turn out to be dead ends. Put differently: as scholars can err, they can invest time and energy into creativity, playing with ideas others would not even consider.

This leads us back to the question: what counts as a valuable output in academic writing? Some scholars like to solve the problem: to arrive at the one best solution. Case X should be decided as Y for reasons Z, or challenge A should be regulated as B for reasons C. This is one approach to take. Another, however, is not to take sides in the debate on what solution should be adopted by authoritative actors like judges or legislatures, but rather to broaden the mental horizons of what is possible and deepen the normative understanding of the respective pros and cons of different solutions. To conclude not with a selection but with a variation.

Notice that variation, as a method, can lead to various kinds of conclusions. Sometimes, upon considering six possible ways to answer the question, the correct one will emerge. Yet, it is also possible that a scholar arrives at several different possible answers where choosing between one or another is a matter of political judgment, not scholarly expertise. Consider an example. Another subject sparking a lot of discussion in law and tech is the phenomenon of targeted advertising. Companies like Meta or Google offer their services to consumers without asking for monetary compensation. Instead, they collect data about their customers and use this data to capture their attention with personalized content and serve them fine-tuned ads (Mik, 2016). This comes with risks to privacy (Cofone & Robertson, 2017), autonomy (Susser et al., 2019), and equality (Xenidis, 2020), and it creates perverse incentives for these companies to addict their users to their products. At the same time, even though it is unpopular to say so these days, this creates a pretty egalitarian market. Not everyone can afford healthy food or healthcare, but everyone with a smartphone can “afford” to use Facebook, Instagram, Google, or YouTube. As we quickly get used to the familiar (McDiarmid et al., 2019), we tend to forget how astonishing this is. Everyone with an internet-connected device can search for almost all the writing, knowledge, and ideas ever generated by humanity, connect with billions of people around the world, and watch entertainment or educational videos without having to pay.

Hence, the problem is complicated. On the one hand, we have business models posing risks to values we hold dear. On the other, these business models created one of the most egalitarian (in terms of access to the product) consumer markets in the history of humanity. Now, imagine approaching this problem as a scholar. One way to do so is to consider all the pros and cons, balance them somehow, and arrive at a conclusion: targeted ads must be banned, or these corporations must offer a choice between the existing model and an ad- and data-collection-free subscription model, or they should be left as they are. Another way is to consider all the available options, discuss their pros and cons, and spell out where the choice is a political one. Both approaches might have their merits, but we believe the latter is underappreciated. There is this drive, in many communities of discourse, to arrive at one “right” answer. That is, indeed, what judges and policymakers must do. But must academics? What do we gain, and what do we lose, when we approach all problems with this mentality?

These questions run deep into the issue of the role of academia and scholarship in society. At the beginning of this section we asked the following question: why do we pay academics to study, read, and write about the law, instead of relying on other kinds of experts? We also said that we do not purport to answer it. We were lying. Society agrees to fund universities and pay scholars for (at least sometimes) being wrong, because cultural evolution – the development of science, art, and social institutions – requires, like any evolution, both variation and selection. The primary role of academia is to provide variation. This is possible only when scholars are able to break away from the inertial thinking habits strengthened by human cognitive conservatism. To be creative and innovative – sometimes solely for creativity and innovation’s sake – is the main task of a scholar. The wonderful process of cultural evolution will take care of the rest.

4. CONCLUSION

Our goal in this chapter has not been to advocate for any specific method of “doing law and technology.” Many approaches are available, and the question of which is better is often more a question of personal preference than of methodology. Yet we believe that, whichever approach one chooses, various methodological precepts can help. When asking the question, it is often prudent to examine one’s assumptions, both factual and normative. Moreover, one can benefit from bearing in mind that the kinds of questions one intuitively thinks of asking – the judge-like “How to solve this case?” or the policymaker-like “What should be the law governing this?” – are not the only available options. We gave an overview of different kinds of questions, though many others – inevitably – escaped us.

When answering the questions, one can benefit from applying caution to one’s intuitive judgments and applying variation and exemplification to the process of providing an answer. We argued that only by considering many possible solutions to a problem – even, or especially, the seemingly absurd ones – can one notice the nuances that would otherwise escape one’s mind. Moreover, we claimed that, to count as good scholarly work, law and tech papers do not have to arrive at one answer (as a judge or a policymaker must). Instead, broadening the scope of what is imaginable, and deepening the understanding of the pros and cons of various options, is often more valuable than we tend to assume. The comparative advantage of academia is to provide variation, not necessarily selection.

BIBLIOGRAPHY
Andrews, R. (2007). Argumentation, critical thinking and the postgraduate dissertation. Educational Review, 59(1), 1–18. Retrieved from https://doi.org/10.1080/00131910600796777.
Bakos, Y., Marotta-Wurgler, F. & Trossen, D.R. (2014). Does anyone read the fine print? Consumer attention to standard-form contracts. The Journal of Legal Studies, 43(1), 1–35.
Balkin, J.M. (2015). The path of robotics law. California Law Review Circuit, 6, 45.
Ben-Shahar, O. (2019). Data pollution. Journal of Legal Analysis, 11, 104–159.
Brożek, B. (2016). Myślenie. Podręcznik użytkownika [Thinking: A User’s Manual]. Kraków: Copernicus Center Press.
Brożek, B. (2019). Legal Mind. Cambridge: Cambridge University Press.
Calabresi, G. (1964). The decision for accidents: An approach to nonfault allocation of costs. Harvard Law Review, 78(4), 713–745.
Cauffman, C. (2018). Robo-liability: The European Union in search of the best way to deal with liability for damage caused by artificial intelligence. Maastricht Journal of European and Comparative Law, 25(5), 527–532. Retrieved from https://doi.org/10.1177/1023263X18812333.
Cofone, I.N. & Robertson, A.Z. (2017). Consumer privacy in a behavioral world. Hastings Law Journal, 69(6), 1471–1508.
Cohen, J.E. (2019). Between Truth and Power. Oxford: Oxford University Press.
Davies, M. (2011). Concept mapping, mind mapping and argument mapping: What are the differences and do they matter? Higher Education, 62(3), 279–301. Retrieved from https://doi.org/10.1007/s10734-010-9387-6.
Erdelyi, O.J. & Erdelyi, G. (2021). The AI liability puzzle and a fund-based work-around. Journal of Artificial Intelligence Research, 70, 1309–1334. Retrieved from https://doi.org/10.1613/jair.1.12580.
Fairfield, J.A. (2005). Virtual property. Boston University Law Review, 85, 1047.
Hwang, T. (2020). Subprime Attention Crisis: Advertising and the Time Bomb at the Heart of the Internet. FSG Originals.
Janis, I.L. (1982). Groupthink: Psychological Studies of Policy Decisions and Fiascoes (2nd edition). Wadsworth: Cengage Learning.
Jönsson, M.L., Hahn, U. & Olsson, E.J. (2015). The kind of group you want to belong to: Effects of group structure on group accuracy. Cognition, 142, 191–204. Retrieved from https://doi.org/10.1016/j.cognition.2015.04.013.
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Penguin.
Kameda, T. & Hastie, R. (2015). Herd behavior. In Emerging Trends in the Social and Behavioral Sciences (pp. 1–14). Hoboken: John Wiley & Sons, Ltd. Retrieved from https://doi.org/10.1002/9781118900772.etrds0157.
Kennedy, D. (1985). Positive and normative elements in legal education: A response symposium: The 1984 Federalist Society national meeting. Harvard Journal of Law & Public Policy, 8(2), 263–268.
Kroll, J.A., Huey, J., Barocas, S., Felten, E.W., Reidenberg, J.R., Robinson, D.G. & Yu, H. (2017). Accountable algorithms. University of Pennsylvania Law Review, 165, 633.
Kruglanski, A.W. (1989). The psychology of being “right”: The problem of accuracy in social perception and cognition. Psychological Bulletin, 106(3), 395.
Lippi, M., Pałka, P., Contissa, G., Lagioia, F., Micklitz, H.-W., Sartor, G. & Torroni, P. (2019). CLAUDETTE: An automated detector of potentially unfair clauses in online terms of service. Artificial Intelligence and Law, 27(2), 117–139. Retrieved from https://doi.org/10.1007/s10506-019-09243-2.
Manstead, A.S.R. (2018). The psychology of social class: How socioeconomic status impacts thought, feelings, and behaviour. British Journal of Social Psychology, 57(2), 267–291. Retrieved from https://doi.org/10.1111/bjso.12251.
Mayson, S.G. (2018). Bias in, bias out. Yale Law Journal, 128(8), 2218–2301.
McDiarmid, T.A., Yu, A.J. & Rankin, C.H. (2019). Habituation is more than learning to ignore: Multiple mechanisms serve to facilitate shifts in behavioral strategy. BioEssays, 41(9), 1900077. Retrieved from https://doi.org/10.1002/bies.201900077.
Mik, E. (2016). The erosion of autonomy in online consumer transactions. Law, Innovation and Technology, 8(1), 1–38. Retrieved from https://doi.org/10.1080/17579961.2016.1161893.
Pałka, P. (2021a). Private law and cognitive science. In B. Brożek, J. Hage & N. Vincent (Eds.), Law and Mind: A Survey of Law and the Cognitive Sciences (pp. 217–248). Retrieved from https://doi.org/10.1017/9781108623056.011.
Pałka, P. (2021b). The world of fifty (interoperable) Facebooks. Seton Hall Law Review, 51(4), 1193–1239. Retrieved from https://scholarship.shu.edu/shlr/vol51/iss4/5/.
Parisi, F. (2004). Positive, normative and functional schools in law and economics. European Journal of Law and Economics, 18(3), 259–272. Retrieved from https://doi.org/10.1023/B:EJLE.0000049197.08740.e8.
Parry, S. (1998). Disciplinary discourse in doctoral theses. Higher Education, 36(3), 273–299. Retrieved from https://doi.org/10.1023/A:1003216613001.
Pistor, K. (2020). The Code of Capital: How the Law Creates Wealth and Inequality. Princeton: Princeton University Press.
Posner, R.A. (1981). The present situation in legal scholarship. The Yale Law Journal, 90(5), 1113–1130. Retrieved from https://doi.org/10.2307/795943.
Pospíšil, J. & Macháčková, P. (2021). The value of belongingness in relation to religious belief, institutionalized religion, moral judgement and solidarity. Religions, 12(12), Article 12. Retrieved from https://doi.org/10.3390/rel12121052.
Rosenquist, J.N., Morton, F.M.S. & Weinstein, S.N. (2021). Addictive technology and its implications for antitrust enforcement. North Carolina Law Review, 100, 431.
Shiller, R.J. (1995). Conversation, information, and herd behavior. The American Economic Review, 85(2), 181–185.
Smits, J.M. (2012). The Mind and Method of the Legal Academic. Cheltenham: Edward Elgar Publishing.
Solum, L.B. (1992). Legal personhood for artificial intelligences. North Carolina Law Review, 70(4), 1231.
Sunstein, C.R. & Hastie, R. (2014). Wiser: Getting Beyond Groupthink to Make Groups Smarter (First Edition). Boston: Harvard Business Review Press.
Surden, H. (2014). Machine learning and law. Washington Law Review, 89, 87.
Susser, D., Roessler, B. & Nissenbaum, H. (2019). Technology, autonomy, and manipulation. Internet Policy Review, 8(2). Retrieved from https://policyreview.info/articles/analysis/technology-autonomy-and-manipulation.
Sweeney, L. (2013). Discrimination in online ad delivery. Communications of the ACM, 56(5), 44–54.
Turner, M.E. & Pratkanis, A.R. (1998). Twenty-five years of groupthink theory and research: Lessons from the evaluation of a theory. Organizational Behavior and Human Decision Processes, 73(2), 105–115. Retrieved from https://doi.org/10.1006/obhd.1998.2756.
Van Bavel, J.J., Reinero, D.A., Harris, E., Robertson, C.E. & Pärnamets, P. (2020). Breaking groupthink: Why scientific identity and norms mitigate ideological epistemology. Psychological Inquiry, 31(1), 66–72. Retrieved from https://doi.org/10.1080/1047840X.2020.1722599.
van Gestel, R. & Micklitz, H.-W. (2014). Why methods matter in European legal scholarship. European Law Journal, 20(3), 292–316. Retrieved from https://doi.org/10.1111/eulj.12049.
Vladeck, D.C. (2014). Machines without principals: Liability rules and artificial intelligence. Washington Law Review, 89, 117.
Walton, D.N. (1990). What is reasoning? What is an argument? The Journal of Philosophy, 87(8), 399–419. Retrieved from https://doi.org/10.2307/2026735.
Wiśniewski, A. (1995). The Posing of Questions: Logical Foundations of Erotetic Inferences. Dordrecht: Springer.
Xenidis, R. (2020). Tuning EU equality law to algorithmic discrimination: Three pathways to resilience. Maastricht Journal of European and Comparative Law, 27(6), 736–758. Retrieved from https://doi.org/10.1177/1023263X20982173.
Zakon, A. (2020). Optimized for addiction: Extending product liability concepts to defectively designed social media algorithms and overcoming the Communications Decency Act. Wisconsin Law Review, 2020, 1107.

7. Grounding computational ‘law’ in legal education and professional legal training
Mireille Hildebrandt1
1  The research for this chapter was funded by the European Research Council (ERC) under the HORIZON2020 Excellence of Science program ERC-2017-ADG No 788734 for the project on ‘Counting as a Human Being in the Era of Computational Law’, see www.cohubicol.com

1. INTRODUCTION

The availability of generative natural language processing (NLP), such as ChatGPT,2 is creating turbulence in academia, both in the humanities and in the natural and life sciences. In their article in Nature, ‘ChatGPT: five priorities for research’, Van Dis et al. (2023) take the position that ‘the use of this technology is inevitable, therefore, banning it will not work. It is imperative that the research community engage in a debate about the implications of this potentially disruptive technology’. The authors outline some of the key issues and a proposal on how to address them. They demonstrate a keen awareness of the drawbacks that define the limits of large language models (LLMs) or conversational AI, for instance regarding bias and accuracy, and of the broad scope of confabulation – or, more precisely, the uninformed stochastic parroting (Bender et al., 2021) – that they entail.
2  GPT stands for ‘generative pre-trained transformer’. ChatGPT is a large language model (LLM). See the website of OpenAI: https://openai.com/blog/chatgpt/. Note that this chapter was written before the release of GPT-4, which caused further turbulence due to its uncanny fluency and allegedly enhanced credibility. The uptake of LLMs in the context of legal practice underscores the relevance of the chapter and the need to engage with legal technologies in the context of legal education and legal research; see for instance Casetext CoCounsel https://casetext.com, or Allen & Overy’s Harvey https://www.allenovery.com/en-gb/global/news-and-insights/news/ao-announces-exclusive-launch-partnership-with-harvey

Van Dis et al. (2023) also point out the lack of referencing, which implies that whoever deploys ChatGPT may inadvertently plagiarise the work of authors whose text has been recycled by the tool, while also noting that, when asked for references, ChatGPT often ‘fakes’ them. Imagine developing a legal opinion based on this kind of technology without having a clue as to the precise sources on which the model was trained, let alone a reference to the legal text from which a specific text block was ‘recycled’. Even if the training data were to be made available for inspection, it would be next to impossible to detect whether and how it is actually citing case law, doctrine or statutory law.
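To see why such ‘fakes’ are a structural feature rather than an occasional bug, consider a toy sketch of stochastic parroting (a minimal illustration only – the corpus below is invented and real LLMs are vastly larger – but the failure mode of fluent form without verified provenance is the same):

```python
# A toy 'stochastic parrot' (cf. Bender et al., 2021): a bigram model that
# emits fluent-looking citations with no grounding in any verified source.
# The corpus is invented; real LLMs are vastly larger, but the failure
# mode -- plausible form without provenance -- is structurally the same.
import random
from collections import defaultdict

corpus = (
    "the court held that the contract was void see Smith v Jones 1990 "
    "the court held that the clause was unfair see Brown v Green 2004 "
    "the tribunal found that the term was valid see Smith v Green 1987"
).split()

# Record which word follows which in the training text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

random.seed(1)
word, output = "the", ["the"]
for _ in range(15):
    options = following.get(word)
    if not options:                     # no recorded successor: stop
        break
    word = random.choice(options)       # sample a *plausible* continuation
    output.append(word)

print(" ".join(output))
# May yield e.g. '... see Smith v Green 2004': a well-formed citation
# corresponding to no case the model was ever 'shown'.
```

The model never ‘knows’ its sources; it only reproduces what tends to follow what, which is exactly why asking it for a reference invites a plausible invention.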


Van Dis et al. (2023) call on the research community to ‘hold on to human verification’ and to ‘develop rules for accountability’ that involve prioritising open-source AI, based on the idea of open innovation (which in point of fact invites ‘users’ to help train the software under the heading of providing a free service). They call on academic publishers to offer those who build these systems access to academic publications, to ensure the models are trained on the right data (though this will raise legal issues relating to the intellectual property rights of the authors and to data protection rights, see, e.g. Stephenson Harwood, 2023). Their call to action, however, is premised on a combination of technological (or economic) determinism, as evidenced by their stating that ‘banning it will not work’. Their position seems to be informed by the usual kind of ‘technological eschatology’ (Burdett, 2017), requiring us to ‘embrace the benefits of AI’, basically deploying a clever rhetorical strategy in which the benefits are taken for granted while the risks require evidence, thus quietly inverting the burden of proof. Some could, for instance, suggest that the issue of references could be ‘fixed’, thus making the model amenable to integration into legal search engines, reinforcing the idea that these technologies will be beneficial in principle, provided some hiccups are resolved. Others might claim that it does not really matter where a system like ChatGPT gets its answers from, as long as it gets them right. Bommarito II & Katz (2022) demonstrate that, when ‘GPT takes the Bar Exam’, it scores well above guessing on the multiple-choice questions, and they profess that ‘these results strongly suggest that an LLM will pass the MBE component of the Bar Exam in the near future’.

In this chapter, I will address the challenges of working with computational technologies when enacting, searching and deciding the law, without assuming that they are reliable, effective or even helpful, while also resisting the equally naïve assumption that they are unreliable, ineffective or necessarily threaten law and the rule of law. From the perspective of law, one of the key questions concerns the burden of proof: who must provide evidence of the relevance and trustworthiness of new types of technologies, and what counts as proof in this context? Or, even more to the point: who gets to decide whether, for instance, legal search engines deploying LLMs are dependable, based on what ‘evidence’?

To address the challenges posed by legal technologies in the context of legal education and legal research, I will first address the ‘traditional’ way of studying and practising law, noting how both the practice and the study of law depend on a particular technological infrastructure. To that end, I map the rise of modern positive law in relation to the proliferation of the technologies of the word, more precisely the information and communication technologies (ICT) of the printing press. It is important to acknowledge that modern law-as-we-know-it today is technologically embedded and not some free-floating essence. The relationship between law and technology, however, is not deterministic; technologies make things possible or impossible, and much depends on their design and the way they are deployed.

This chapter is not concerned with the ‘digitalisation’ of law, such as the use of a pdf or an electronic file to store and exchange text. It is also not concerned with knowledge management systems meant to store and retrieve in-house knowledge to make it findable and searchable across different actors within a legal organisation such as a law firm, court or public prosecutor’s office. And, though disclosure is a major domain of application for computational legal technologies in the context of gathering evidence, it is not part of this chapter either. Instead, the focus is on the integration of computational technologies in law itself, impacting its ‘mode of existence’ and its ability to instigate and sustain countervailing powers as required by the constitutive normative goals of the rule of law.
For this reason, I zoom in on three types of legal technologies that have a transformative impact on law and the rule of law: (1) data-driven legal search, (2) prediction of judgements based on machine learning and (3) the representation, drafting and execution of law in computer code. I investigate what these technologies are claimed to accomplish and whether, and if so how, these claims can be substantiated. The investigation does not limit itself to intellectual discourse; instead, it engages with the granular detail of the upstream design decisions that are made when developing legal technologies. Based on a philosophy of technology that rejects thinking in terms of Technology and invites keen attention to the intricacies of the design and uptake of specific technologies, I will take a deep dive into how these systems operate, what assumptions they involve and what implications they have. This should enable us lawyers to question the effectiveness and efficiency that are oftentimes taken for granted, and provide us with the conceptual and methodological tools to assess the impact on the downstream checks and balances that are key to law and the rule of law.

Finally, I propose a set of learning objectives and teaching approaches meant to address the need for students of law to engage critically with computational legal technologies. Such objectives and approaches are also highly relevant for academic legal research and legal practice. They will allow us to ground the use of these technologies in legal practice and legal research, based on a critical engagement with their actual affordances. They should also allow us to reject investment in and deployment of technologies that diminish human agency and jeopardise the checks and balances of the rule of law. By way of conclusion, I call on lawyers to engage in effective scrutiny of computational tools, noting that some of them are already widely deployed (notably in legal search). I propose a new hermeneutics of computational ‘law’ to reverse both naïve endorsement and uninformed rejection of such ‘law’. The question of whether those technologies or their output qualify as law can be answered in different ways; it depends on how one understands ‘law’ and on the extent to which legal technologies have legal effect, or effect on legal effect.3 To acknowledge this, I use inverted commas when referring to computational ‘law’, in order to clarify that I remain agnostic as to whether computational legal technologies should be qualified as ‘being’ or as merely ‘informing’ law.
3  On the issue of effect on legal effect, see, e.g. https://www.cohubicol.com/about/philosophers-seminar-2021
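As a preview of technology (3), the following deliberately simplistic sketch (my own illustration, not an example from any system discussed in this chapter; the rule and all thresholds are invented) shows what ‘representing’ a statutory-style norm in computer code can look like, and what the encoding silently decides:

```python
# Hypothetical rules-as-code rendering of a statutory-style norm, e.g.
# 'a tenant may terminate the lease if the dwelling is uninhabitable'.
# Every predicate below hard-codes an interpretation that, in text-driven
# law, would remain open-textured and contestable before a court.

def may_terminate(mould_area_m2: float, heating_works: bool,
                  notice_given_days: int) -> bool:
    # 'Uninhabitable' reduced to two measurable proxies -- an upstream
    # design decision by a developer, not a downstream one by a judge.
    uninhabitable = mould_area_m2 > 0.5 or not heating_works
    # 'May terminate' reduced to a fixed notice period.
    return uninhabitable and notice_given_days >= 14

print(may_terminate(mould_area_m2=0.6, heating_works=True,
                    notice_given_days=14))   # True -- but on whose reading?
```

The point is not that such encodings are useless, but that the interpretive choices they freeze in place are made upstream, before any dispute arises.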

2. LEGAL EDUCATION AND RESEARCH IN THE ERA OF MODERN POSITIVE LAW

To assess the impact of computational legal technologies on ‘traditional’ methodologies of teaching and researching law, we first need to understand how current law is technologically embodied. In this section, I explain the point of departure of this chapter, clarifying what kind of legal order legal education should serve. This implies a normative position, aligned with a procedural, formal and substantive conception of the rule of law. I will briefly discuss the pivotal importance of distinguishing between different legal domains and how they interrelate (private law, public law, criminal law), followed by a reflection on the nature of legal research and its relationship with legal practice.

2.1 The Point of Departure4
4  See also Hildebrandt (2020c) with references.

Modern positive law is a historical artefact that developed in contrast to customary, feudal and natural law. Though in the context of this chapter we cannot investigate the entire history of the kind of Western law that proliferated from Europe throughout the world together with the rise of the nation-state, we must nevertheless be aware that positive law was an invention, based on the idea that law can be posited, as opposed to being negotiated or merely imposed (Berman, 1983; Glenn, 2014; Hildebrandt, 2015b).

Even though positive law is imposed, it is more than an order backed by threats, as claimed by Austin (Austin & Rumble, 1995). Even though the monopoly of violence is key to the rise of the modern state, positive law differs from the mere exercise of military or economic power, and from natural law (morality), by its combination of primary and secondary norms. Primary norms are those that prescribe, prohibit or permit actions. Secondary norms are those that determine the validity of the primary norms and define who has what legal powers to impose primary norms, to sanction their violation or to void the legal effect of a juridical act (Hart, 1994a).

The importance of law’s positivity does not, however, imply either formal or sociological positivism. On the contrary, it requires an understanding of the performative effect that is inherent in the use of language and whose scope and reach were extended with the rise of the script and – later – the printing press. Legal effect is not a matter of causation or brute force but a specific type of the performative effect inherent in natural language usage (Anscombe, 1958; Taylor, 1995). This explains why positive law is closely related to the rise of written law, more specifically to the era of the printing press, which enabled more detailed written instructions to the civil servants who were bound to enforce the law (including those working in public administration, courts and, later, the police). Written law thus enabled an enormous extension of the scope of the performative effect of enacted legal norms (Hildebrandt, 2023b), both in time and space – no longer bound by face-to-face interaction. In turn, this extension triggered a new type of multi-interpretability of legal norms, because those subjected to written legal norms may live in other regions or even time periods than those who authored (and enacted) them (Glenn, 2014; Hildebrandt, 2015b; Lévy, 1990; Ricoeur, 1973). Interpretation became the hallmark of positive law, resulting in the kind of contestability that is inherent in the open texture of written text (Hart, 1994b). The ICT infrastructure of the printing press thus forms the material or technological underpinning of the rule of law, which stipulates that the last word on the meaning of the law is with the courts and that the ability to contest the application of the law before an independent tribunal is key to constitutional democracies.

In other words, modern positive law is and was contingent upon a dedicated ICT infrastructure that mediates how we perceive and cognise our world and interact with others that ‘make’ our world (Eisenstein, 2012; Goody, 1986; Ong, 1982). This technological infrastructure has also structured legal education and legal research. The study of law is the study of text, and the practice of law is a practice grounded in ‘how to do things with text’ – a modulation of Austin’s (1962) seminal work on speech act theory. Another way of framing this is to say that modern positive law is text-driven. To comprehend what lawyers do with words and how it affects our shared world therefore requires a theory of the use of language (Ricoeur, 1973; Wittgenstein & Anscombe, 2003) and of the difference between institutional and brute facts (Anscombe, 1958; MacCormick, 2007; Searle, 2011). In law, this has been discussed in terms of the open texture of legal concepts (ambiguity as a feature, not a bug) and the legal effect that is attributed to actions, events or states of affairs by positive law (Diver et al., 2023).

Decisions on legal effect are taken by legislatures and courts, within the realm of national jurisdictions. Based on international or supranational jurisdiction (EU), legal effect is also determined by contracting states, customary international law, fundamental principles or jus cogens.5

5  For instance, criminalisation of genocide is an example of jus cogens, which applies whether or not states consent. See Hildebrandt (2020b).

2.2 Legal Education and Legal Research

Legal education in the era of modern positive law is focused on the analysis of binding legal text but varies between jurisdictions. Continental European law is often taught in a more systematic and deductive way, starting from a clear hierarchy between the sources of the law: constitution, parliamentary acts, delegated regulation, case law and doctrine. Anglo-American law is more focused on case law as a starting point, for instance distinguishing between different types of torts and emphasising precedent. Though case law plays a key role in understanding legislation in continental Europe, and statutory law has become paramount in common law jurisdictions, differences remain. That is what democracy is about: a constituency decides the rule that binds it, and this ‘rule’ is not only a matter of top-down regulation but must be(come) rooted in the social fabric of those subject to law. The differences between continental European and Anglo-American law also inform the many hybrid solutions developed outside Europe and the United States, for instance depending on the role of the state in deciding the law.6

Perhaps even more relevant for the topic of this chapter is how the administration of justice, legal remedies and the competence of international or supranational courts have been institutionalised. Who gets to decide the meaning of the law at the point of drafting, enacting and adjudicating legal norms? How can the ‘making’ and the application of legal norms be contested, on what conditions and before which courts? These are issues that directly connect with the checks and balances of the rule of law, meant to ensure that nobody (neither government bodies nor powerful economic players) is above the law. The rise of authoritarian regimes and the instrumentalisation of the rule of law by powerful economic players confirm that neither the rule of law nor democracy can be taken for granted.

This chapter starts from a normative position, taking into account that, for a viable democracy and a sustainable rule of law, legal education is key. I will not advise how legal education should be organised in authoritarian contexts (Solomon, 2015; Stern & Liu, 2020), though I will point out how computational ‘law’ may facilitate law as administration or as rule by law (Diver, 2022; Lilkov, 2020; Morgus, 2019),7 as it is disentangled from the constraints of the rule of law. I will also not advise how legal education can serve as an efficient instrument to facilitate big economic players that manage to instrumentalise the law for their own good, often by ensuring that checks and balances apply to others but not to them, under the veil of a neoliberal ideology (Mirowski & Plehwe, 2009; Pistor, 2019), though I may point out how computational ‘law’ may play into that agenda (Cohen, 2019; Johns, 2022). Such authoritarian or neoliberal perspectives are also clearly normative; attempts to develop a neutral or objective perspective that includes all these normative perspectives would not be helpful, if at all possible. This does not imply confessing to the beliefs of critical legal studies, which are often heavily indebted to the work of Foucault. In other work (Hildebrandt, 2004, 2008), I have explained why and how Foucault’s understanding of the rule of law is limited.
6  These hybrid systems are often integrated with legal traditions that do not operate as modern positive law. A seminal work on how these other traditions relate to the monopolistic claims of positive law is Glenn (2014).
7  Meaning that law is seen as an instrument to achieve policy goals, easily replaced by other – more convenient or efficient – instruments. Rule by law is usually taken to refer to a regime where the government is not under but above the law. For another perspective, see Waldron (2020).

My understanding of law and the rule of law assumes a critical stance that is, however, deeply constructivist (Diver et al., 2023; Hildebrandt, 2015b, 2020d). The study of law, then, is not only about learning a trade but also about acquiring the skills that fit a profession that serves the public interest – for instance, the public interest of protecting private interests and fundamental rights and freedoms. Law is not equivalent to ‘the legal services industry’. Instead, it forms the critical infrastructure on which other critical infrastructure depends, such as public healthcare, economic markets, education, employment, public administration, cultural institutions and ‘the industry’. For example, economic markets depend on property rights and enforceable contracts, as well as on myriad other legal constraints, such as those of consumer law, competition law, environmental law and human rights law.

Law is a complex system in which (1) myriad relationships between legal subjects are defined (Achterberg, 1982) and (2) an intricate architecture of legal norms is instituted (Binding, 1890; Hart, 1994a). This relational and normative architecture depends on and feeds into the practice of law, where both legal subjects and legal norms interact, thus continuously reconfiguring the space created by, for instance, legal powers, substantive rights and default or mandatory obligations. To understand, create and interpret the law, a set of foundational legal concepts must be ingrained in the practice of law, notably those of legal norms, the rule of law and positive law, but also concepts such as legal effect, sources of law and jurisdiction, legal subject, subjective rights, legal powers and legal reasoning and interpretation. These concepts have a performative effect, which makes them constitutive of law and legal practice (Diver et al., 2023; Hildebrandt, 2016, 2018). This in turn has far-reaching political implications, as these concepts define (create and limit) the powers of all legal subjects within a jurisdiction, including those of the state. This raises the question of how these concepts, and their constitutive role in constitutional democracies, fare when computational ‘law’ becomes entangled with current – text-driven – law.

2.3 Domains of Law8
8  See also Hildebrandt (2020a).

Some might find this section cumbersome or irrelevant in relation to the advent of computational law, for instance because they assume that lawyers agree on these domains and are sufficiently aware of what they entail. Having taught law to master’s degree students of computer science has raised my awareness of the fact that many things lawyers take for granted are key to their profession in a way that requires keen attention and discussion. Also, the kind of legal protection afforded by modern positive law depends to a large extent on the legal domain that is at stake, whereas the legal technologies that are being developed are often insensitive to jurisdictional and domain-specific issues, missing out on the foundational principles that inform and sustain the rule of law. This concerns, for instance, the legality principle in public law, information and investigation duties in private law and the presumption of innocence in criminal law. Before engaging with computational law, I will therefore first unpack the architecture of modern positive law, in full awareness that other perspectives would be possible, e.g. those of formal positivism or sociological realism.

In most jurisdictions, law is understood in terms of three major legal domains: private law (contract, tort and property law, but also consumer law and competition law), public law (including both constitutional and administrative law, as well as international public law) and criminal law (noting that large swathes of criminal law have been transferred to administrative law, though in terms of human rights the presumption of innocence and the fair trial remain applicable). Situating legal technologies must be based on a proper understanding of these domains, which are not carved in stone and whose demarcation may differ across jurisdictions. The point is that legal ‘solutions’ are not universal or logical but depend on the normative choices made in the polity that grounds them. Though some lawyers may shrug at what they may feel is obvious or without relevance in the context of computational law, I believe that the universal claims inherent in the kind of logic that informs computing systems will affect the architecture of legal frameworks. This is why I devote specific attention to the structure of law in terms of the different domains that define the relationships between private parties and the relationships between the state and its subjects.

Private law concerns the legal relationships between legal subjects acting as private subjects, that is, without state authority. This concerns both natural persons and legal persons such as corporations. Even public bodies are subject to private law if they act as private subjects (for instance, when they buy supplies or own buildings). The subject matter of private law is largely defined by property law and the law of obligations, such as tort and contract. However, consumer law, competition law and labour law are also part of private law, addressing situations of inequality that must be compensated to ensure that parties can dispose of their property rights and their freedom to contract in a way that is not unduly constrained by power asymmetries. Private law is constitutive of modern economic markets, providing the kind of foreseeability and trust that enables markets to flourish. Though some authors (e.g. Lessig, 2006) take for granted that ‘market forces’ are natural or given, this is not the case. They are enabled, shaped and scoped by the performative effect of legal norms and depend on the state’s authority to effectively enforce these norms (Verstraete, 2018).

Public law concerns the legal relationships between legal subjects acting with state authority and those subject to that authority. Legal subjects with state authority can be natural persons acting as officials representing the state, or public bodies such as municipalities or other bodies with the legal power to conduct public administration. Whereas the default in private law is that legal subjects can pursue their own objectives and further their own interests, public law requires that government interventions are in the public interest. Though this may imply the need to protect the interests of the state, the latter is a derived interest, as the state’s ‘raison d’être’ is to serve the public interest (Meinecke, 1998). In a constitutional democracy, the public interest includes the interests of individual freedom and the state’s duty to treat its citizens with equal respect and concern (Dworkin, 1991). The existence and sustainability of the state is a means to an end, not a goal in itself, which also has implications for the rule of international law (Waldron, 2006). In the domain of private law, however, subjects are free to serve their own interest as long as they do not violate legal norms that restrict them.
As a consequence, public law instantiates the legality principle, meaning that interventions by the state require a legal basis and must remain within the boundaries set by the law. We could summarise this by saying that private subjects are free to do whatever they like unless prohibited by law, whereas public bodies must always act in the public interest and can only act if the power to do so has been attributed to them by positive law (Habermas, 1996). Things are more complex than this, but as a default this provides a fair understanding of the key difference between public and private law in a constitutional democracy.

Criminal law concerns the lawful exercise of the ius puniendi, that is, the right of the state to punish its subjects. In constitutional democracies this right is constrained by a series of foundational principles that aim to ensure that nobody can be punished for an offence they could not have known to be punishable at the time of ‘offending’. This is the substantive criminal law legality principle. These foundational principles also concern procedural rights, such as the right to be treated as innocent until guilt has been established (the presumption of innocence) and the right to an independent and impartial court where the criminal charge can be contested in a way that provides defendants with practical and effective rights to question, oppose and refute the state’s charge. Considering that the stakes for individual subjects are high, criminal law requires a high level of legal certainty about what kind of behaviour will make one liable to criminal intervention, compared to private law liability. This has implications for the burden of proof, which is in principle on the public prosecutor. Legal certainty also implies that the articulation of what constitutes a criminal offence must be sufficiently clear for the relevant subjects (lex certa). To summarise, criminal law takes a subset of unlawful actions and qualifies them as criminal offences. Not all unlawful actions are criminal offences. For instance, breach of contract and negligence are unlawful but not necessarily punishable. Only if and to the extent that the law has criminalised behaviour that also qualifies as breach of contract or negligence will such unlawful conduct be considered a criminal offence.

For lawyers, all this is not only entirely obvious but also open to contestation, depending on the circumstances, the applicable jurisdiction, relevant legislation and case law. However, before one can contest these distinctions and engage in doctrinal debate, these distinctions must be mapped as the point of departure, taking into account the very different interests, rights and obligations that play out in these different legal domains and the different legal powers that the same subjects may have when acting within the context of one or more of these domains, noting that they often overlap. Translating legal rules into computer code, or predicting judgements, without an active understanding of the law as the complex, dynamic and open system it is, would easily miss all the signposts of what counts as effective legal practice in the broad sense of that term (legislation, public administration, adjudication). That is why reducing legal practice to ‘the legal services industry’ (as law and economics may do) or to a closed hierarchy of legal rules (as formal positivism may do) misses out on what law is about. To serve the public interest and to protect private interests that merit such protection, those offering their services need first to understand their situated role in the architecture of the law, distinguishing rights and obligations from economic interests that may or may not warrant protection.

2.4 Legal Research and Legal Practice

The normative point of departure sketched above implies and feeds on a hermeneutic rather than a positivist understanding of law. Formal positivism requires a separation between law, morality and politics in a way that ignores the inevitable inner morality of positive law and the political role of the rule of law.
Sociological positivism mistakes regularity for obligation; while aiming to tell us what law ‘really’ does, it ignores that law is about legitimate expectations and human interaction rather than a utilitarian calculation based on reductive behaviourism. This entails that legal norms are not given but always ‘in the making’, without naively suggesting that ‘anything goes’. Despite being in opposition to formal or empirical positivism, a hermeneutic understanding of both the study and the practice of law highlights the crucial importance of positive law as the one ‘thing’ that actually protects us, both against chaos and anomie and against the authoritarian exercise of state power. From a positivist perspective, instead, the way forward would be to resolve the ambiguity that is inherent in hermeneutics, either by prioritising ex-ante certainty (formal positivism) or ex post inferences of regularities (social science positivism, e.g. certain strands of legal realism). One could say that legal informatics (Sartor, 1995) and the ‘rules as code’ movement (Waddington, 2020) thrive on the assumptions of formal positivism, whereas ‘AI and law’ in the sense of machine learning (Chalkidis et al., 2019) thrives on the assumptions of sociological positivism (now conveniently coined ‘social physics’ (Pentland, 2014) or computational social science (Helbing, 2010)).

A hermeneutic approach is built on the idea that we should not prioritise either the prescriptive tenets from the past (formal positivism) or the inferred regularities from the past (social science positivism). Instead, we should sustain the tension between, on the one hand, the need to stabilise the meaning of legal norms over the course of time to protect legitimate expectations and, on the other hand, the need to take into account changing circumstances and changes to the meaning of legal norms. This requires continuous adaptation of the meaning of legal norms in the light of changing circumstances and – even more importantly – new understandings of what relevant legal norms require from both private and public subjects. In computer science such changing circumstances are addressed under the heading of ‘concept drift’ or ‘data drift’ (Mardziel, 2021), though this is necessarily about ‘drift’ in the past – algorithms cannot be trained on future data, and those drafting legal norms in computer code cannot foresee the future (and certainly should not impose their preferred future on us); see the sketch below.

In light of the rise of computational law, one could raise the question of why the legal profession requires an academic degree. Academic legal research concerns the study of the sources of law and the way they develop. The sources of law are usually summed up with reference to art. 38(1) of the Statute of the International Court of Justice, as treaties, written or unwritten constitutions, legislation, case law, customary law, fundamental legal principles and doctrine. The nature of the sources of law, however, is not merely that of ‘information about’ legal norms. In point of fact, they constitute legal norms (Hildebrandt, 2016). This is clearly the case for legislation and case law, as both bind legal subjects, and the same can be said of international treaties, though they may only directly bind states, depending on the extent to which a state has accepted the direct effect of treaties for its citizens. Less evident, but still arguably the case, is the binding effect of the unwritten principles that inform the law and provide both legislation and case law with a backbone of more abstract norms that may be incompatible in practice but nevertheless remain valid as the ground structure of a jurisdiction (Dworkin, 1991; Hildebrandt, 2015a; Radbruch, 2014; Waldron, 2008). At the most generic level, law instantiates the principles of legal certainty (foreseeability, positiveness), justice (equal treatment, fairness) and purposiveness (instrumentality, being goal-oriented). These three principles may conflict in practice but must remain the core orientation points for ‘law’ to qualify as law (Radbruch, 2006).
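Returning to the ‘concept drift’ remark above, a minimal illustration with invented data: a threshold ‘learned’ from past cases keeps being applied after the underlying norm has changed:

```python
# Toy illustration of 'concept drift' (invented data and thresholds): a
# rule of thumb inferred from past cases quietly fails once the norm changes.

# Old regime: liability attaches above 10 hours (hypothetical threshold).
past = [(hours, hours > 10) for hours in range(21)]

# Naive 'training': pick the highest value still labelled non-liable.
threshold = max(h for h, liable in past if not liable)   # -> 10

def learned_rule(hours: int) -> bool:
    return hours > threshold   # a frozen snapshot of the past

# The world changes: say a new statute lowers the bar to 5 hours.
future = [(hours, hours > 5) for hours in range(21)]

errors = sum(learned_rule(h) != liable for h, liable in future)
print(f"errors on post-change cases: {errors} / {len(future)}")   # 5 / 21
```

Any drift a system can detect is, by construction, drift that has already happened; the hermeneutic adaptation described above is anticipatory in a way such retraining is not.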
Depending on the legal domain, subject matter, facts of the case, power relationships and choices made by the legislature, more specific principles apply, such as the legality principle in constitutional and administrative law, the presumption of innocence in criminal law, the principles of individual autonomy and trust in private law, and so on. The binding legal force of customary law rests on the dual conditions of being part of a practice (usus) and being the result of a sense of legal obligation (opinio juris). Customary law is not only relevant in the context of international law but is also key to the nature of legal concepts such as ‘reasonableness’, ‘equity’, ‘legitimate expectations’, ‘relevance’ and ‘good faith’. To interpret such notions, courts (and citizens, corporations and public administration) must take into account the valid, reasonable and legitimate expectations raised by those held to account. For the law to be practical and effective, it depends on the open texture of this type of concept, which allows both adaptiveness and reliability, linking the interpretation to past decisions while anticipating how an interpretation would impact future decisions.

Academic research is thus closely aligned with legal practice, because the development of legal norms takes place within legal practice, whether that of legislatures, courts, public administration, law firms, public prosecutors or – interestingly – those engaging in doctrinal research in academia. The relationship between the study and the practice of law is, in that sense, circular, because the practice of law will often turn to academic writings to sustain or challenge a line of argument. Taking a hermeneutic perspective, this circularity is not a bug but a feature; the circle is not a closed, vicious circle but an open, virtuous one. Positivist perspectives attempt to close the circle, for fear of violating legal certainty or introducing normativity. From a hermeneutic perspective, such closure is not only undesirable but also impossible – due to the flux of real-world circumstances and the development inherent in legal norms. Nevertheless, positivist approaches may have performative effects, and these may be highly problematic – not only because they could petrify the normative order in ways that stifle human flourishing but also because they afford hidden instrumentalisation of legal norms (Cohen, 2019; Pistor, 2019). Claiming to unearth the true and given meaning of binding legal text may hide the choices implied in the act of interpretation (Baude & Doerfler, 2017), besides ignoring the argumentative nature of legal interpretation (Waldron, 2008).

3. THE TRANSFORMATION OF LEGAL STUDY AND LEGAL PRACTICE

In this section, I map the technologies involved in computational ‘law’ by distinguishing between (1) the use of computational techniques in legal search, (2) the anticipation of the outcome of legal cases with the help of machine learning and (3) the representation, drafting and execution of legislation by way of computer code. There are many other ways of mapping computational ‘law’, for instance by recalling previous distinctions such as legal informatics (computational legal reasoning, legal expert systems), jurimetrics (the use of statistical computation in law) or, more generally, ‘AI and law’ (the former, now including the use of machine learning). My framing instead aligns with three domains that have always been key to the professional skills of lawyers, at least in the era of modern positive law: (1) searching the sources of law for relevant legal norms, based on keen attention to the way they bind legal subjects, (2) anticipating how courts will decide a particular legal issue, while tracing how this relates to the meaning of relevant legal norms, and (3) drafting legal norms to be enacted by the legislature and delegated regulatory bodies, taking into account how they will interact with the existing legal architecture, notably avoiding voidance due to incompatibility with constitutional and human rights law.

3.1 Typologizing Legal Techs9
9  This section builds on the online tool that has been developed by a team of lawyers and computer scientists in the context of the research project on ‘Counting as a Human Being in the Era of Computational Law’, see www.cohubicol.com. The typology, its objectives and methodology can be found here: https://publications.cohubicol.com/typology/. Some of the text has been taken from this or previous drafts of the tool.

Instead of succumbing to high-level discussions of legal technologies in general, I follow Don Ihde and other philosophers of technology in investigating the ‘affordances’ of specific technologies (Achterhuis, 2001; Ihde, 1993; Verbeek, 2005). Affordances refer to what these technologies make possible or impossible for specific types of agents (Gibson, 1986) – in this case, lawyers and those subject to law. To this end I will deploy a typology that was developed to enable informed fundamental research into computational ‘law’,10 based on an in-depth study of legal technologies, including applications, systems presented in academic papers and curated training datasets. To obtain an overview of the domain, the typology incorporates both seminal work from the past and state-of-the-art applications, and both proprietary and open-source systems. The technologies have been mapped in terms of their intended end-users, the functionality offered and whether or not they use machine learning. The typology does not aim to be complete or to provide the final word on any of the included technologies (an impossible and not very useful task), though as an approach or method the typology aims to be reasonably comprehensive. As a typology, rather than a taxonomy, it involves typical examples without suggesting that their categorisation is mutually exclusive. Instead, it welcomes overlap and highlights that legal technologies are a moving target; it is not meant as merely a registry, archive or repository, but as a method and a mindset.

Each technology is interrogated as to what claims are made about the service it offers (application), could offer (paper) or may facilitate (dataset), based on how it is publicly presented. This is done in terms of ‘claimed essential features’, ‘claimed rationale and benefits’ and ‘claimed design choices’. The follow-up question is how these claims are substantiated, again based on publicly available information (on the website, in academic papers and in relevant repositories such as GitHub). The question of substantiation has been answered by joint research of computer scientists and lawyers, figuring out what techniques are used, whether they have been verified and tested, and whether it is plausible or implausible that the system lives up to what it is claimed to deliver. In the case of commercial systems, the issue of trade secrets and IP rights oftentimes resulted in there being no substantiation at all, while many of the claims had to be read as PR or advertising rather than as serious assertions of functionality. Nevertheless, with the help of computer scientists, it was often possible to infer the kinds of techniques deployed and thus obtain a better picture of what they actually offer.

In the next subsections, I will draw on some of the research findings of the Typology, teasing out the difference between, on the one hand, text-driven legal search, anticipation of judgements and drafting of legislation as key to modern positive law and, on the other hand, their computational ‘counterparts’. This will inform the call to develop a new hermeneutics for legal research and education that will be explained in the next section. By visiting, investigating and assessing legal technologies, we avoid high-level discussions of ‘legal technology’ uninformed by the nitty-gritty of their actual construction and deployment.

Computational Law’, see www​.cohubicol​.com. The typology, it objectives and methodology can be found here: https://publications​.cohubicol​.com​/typology/. Some of the text has been taken from this or previous drafts of the tool. 10  L. Diver, P. McBride, M. Medvedeva, A. Banerjee, E. D’hondt, T. Duarte, D. Dushi, G. Gori, E. van den Hoven, P. Meessen, M. Hildebrandt, ‘Typology of Legal Technologies’ (COHUBICOL, 2022), available at https://publications​.cohubicol​.com​/typology.

3.2 Legal Search: Meaning and Information

3.2.1 The relevance of legal search

The proliferation of written text that is inherent in the spread of the printing press has generated a large corpus of potentially relevant legal text. As a consequence of the proliferation of legal text over the past centuries, the academic study of the law is key to attaining the level of abstraction that is required to (1) comprehend what types of legal norms are relevant in what types of cases, (2) locate and situate these norms as part of one or more legal domains and more specific subdomains, (3) explain them in alliance with prior decisions on their meaning, while (4) taking into account how the preferred interpretation will affect future decisions. Law is a complex and dynamic architecture where legal norms interact with each other in specific ways, which implies that isolating legal norms to determine their meaning is not possible. Notably, constitutional norms or legal norms derived from international human rights law or constitutional law may affect the interpretation of any legal norm in any domain of national law. Establishing the meaning of a legal norm answers the question of what legal effect is attributed under what conditions – in that sense law is a prime example of Peirce's pragmatist maxim, which basically says that the meaning of a concept is to be found in the consequences of its use.11 Deciding the meaning of the criminal offence of theft or murder is to decide under what conditions a specific – priorly determined – punishment may be imposed on the offender. Therefore, 'legal search' is not an intellectual puzzle but prepares for an act of interpretation, with potentially far-reaching impact both on an individual's rights and interests and on the way society deals with dedicated legal issues.

The latter is the consequence of the need for legal certainty and justice. Treating similar cases similarly, to the extent of their similarity, is both a matter of creating foreseeability and a matter of justice in the sense of fairness. As any lawyer knows, the decision on what counts as similar cases can be argued; it is neither given nor a matter of calculation. A jump has to be made from the rule to its application, and this jump is more a matter of experience and judgement than one of logical inference (Chirimuuta, 2023). Though the jump must be justified in the form of a syllogism, the choice of the major and the minor cannot be dictated by the rule whose interpretation is at stake. Myriad considerations will inform this jump, which can be likened to an abduction but must be distinguished from an abduction in the context of the natural sciences, insofar as a legal judgement does not concern the articulation of a law of nature. Legal search is a normative undertaking that should be informed by the underlying – often conflicting – principles of the law, as indicated above. Lawyers in search of relevant legal text must take into account many different issues, both on the side of the legal architecture that informs the meaning and scope of the relevant legal norm and on the side of the facts of the case, which are not given but must be constructed in a way that respects the evidence, the burden of proof and their framing in terms of the applicable legal norms.

Those who believe that human cognition is biased and should not be trusted (Kahneman et al., 2021) may be delighted to see computational systems take over the task of filtering the right legal norm and applying it to established facts. Those aware of the flaws of computational systems, whether related to the provenance and distribution of the training data or to the choice of machine learning research design, will warn against inflated expectations, noting that these systems have fallacies and bugs and are necessarily contingent upon past behavioural patterns (thus integrating existing human bias while adding data fallacies,12 bugs and computational artefacts). In the meantime, computer scientists and developers of NLP systems have become aware of the proliferation of legal text that can be transformed into data, allowing them to frame 'law as data' (Livermore, 2019) – transforming written legal speech acts into training data to train and test their computational models.

11 'Consider what effects, that might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of these effects is the whole of our conception of the object' (Peirce & Turrisi, 1997).
12 See https://www.geckoboard.com/best-practice/statistical-fallacies/.

3.2.2 Legal search technologies: claimed functionality and intended purpose

The typology of legal techs introduced in the previous subsection involves various legal technologies that claim to offer 'legal search' functionality, including an open standard for data formatting (e.g. Akoma Ntoso) to enable drafting or representing legislation in the form of computer code (e.g. Catala), and 'applications', such as Casetext, Westlaw Edge, Moonlit and Jus Mundi. The 'open standard' facilitates the drafting or representation of legislation in the form of computer code, which in turn makes such legislation more easily findable and searchable (see below under Section 3.4). The 'applications' use machine learning and/or logic- or knowledge-based systems to detect supposedly relevant case law or legislation. These technologies are already in use (for instance Westlaw Edge) and it is important for lawyers to become aware of the fact that these kinds of legal search engines are becoming the gatekeepers of 'legal information'. The typology allows one to filter systems that focus on case law or on legislation (noting that many of the technologies enable both).

If we take the example of Jus Mundi, the system is described as offering 'tools for international legal and arbitrator research'. It includes multiple services, such as 'a multilingual search engine, a CiteMap for international treaties, and analytics'. The claimed essential features are 'open access; gathering and structuring of legal data from around the world; providing analytics and monitoring of lawyers' and arbitrators' performance'; relevant results from multiple languages (multilingual search); pinpointing of relevant parts of decisions and treaties (CiteMap); and, finally, 'peer-reviewed information from practitioners and academics'. The claimed rationale and benefits are 'facilitating multilingual legal research; provision of authoritative summaries of key legal concepts and issues; improvement of due diligence practice', and the claimed design choices are that 'document contents are "structured and qualified" by paragraph; that complex queries are possible (exact string, NOT, AND, OR, and NEAR operators); and that identification of both direct and indirect relationships (Conflict Checker) is enabled'.

3.2.3 Legal search technologies: are these claims substantiated?

These design choices are explained in more detail in terms of the 'substantiation of claims'. This involves a detailed discussion of the different elements of Jus Mundi, notably the multilingual search engine, CiteMap, Jus Connect and Conflict Checker. To help the reader, this subsection on substantiation refers to three types of relevant technologies: named entity recognition, text classification and text retrieval.
Named entity recognition is a subtask of information extraction that seeks to locate and classify named entities mentioned in unstructured text into predefined categories such as person names, organisations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc. Named entity recognition aims to find highly variable information (such as names) that tends to occur in similar contexts. For example, the word 'Mr' is often followed by a name.

A text classification system is an algorithm that adds one or more labels from a predefined set of labels to a piece of text or a document. The most common use cases are 'thematic classification' ('this text talks about this/these topics') but it can also be used for type classification ('this document is a "court ruling" while another is a "contract"'), or prediction ('based on this description of the trial, the judgement will be X or Y'). The underlying model is trained on a manually annotated dataset of examples, i.e. texts with their associated labels. The quality of these annotations has a great impact on the performance of the trained model.

A text retrieval system is a set of algorithms used to help find relevant information in a document or a set of documents. It transforms the user's input into a machine-processable representation, which is then compared to the precomputed representations of the documents in the collection. This comparison results in a ranked and sorted list of documents that is shown to the user in the interface.
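To make the last of these techniques concrete, the sketch below shows a minimal form of text retrieval: documents and query are turned into TF-IDF vectors and ranked by cosine similarity. It is an illustration of the generic technique only, not of Jus Mundi's (proprietary) implementation; the documents and query are invented, and commercial legal search engines layer far richer representations and ranking signals on top of this basic scheme.

```python
# Minimal text-retrieval sketch: rank documents against a query by
# TF-IDF cosine similarity (illustrative only; not how any specific
# commercial legal search engine is actually built).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The court held that the seller breached the contract of sale.",
    "The tribunal dismissed the claim for lack of jurisdiction.",
    "The appellant was convicted of theft under the criminal code.",
]
query = "breach of contract by the seller"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)   # precomputed representations
query_vector = vectorizer.transform([query])        # user input, same representation

# Compare, then show a ranked and sorted list of documents.
scores = cosine_similarity(query_vector, doc_vectors)[0]
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```

Even this toy example shows where design decisions creep in: the tokenisation, the weighting scheme and the similarity measure each translate the user's question into machine-readable form in one of several possible ways, with different rankings as a result.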

These three technologies are not specific to Jus Mundi; they are also used in some of the other systems, which may in turn use yet other technologies, such as 'document generation' and 'domain-specific programming languages'.

3.2.4 Critical evaluation of legal search technologies

For lawyers, it will be important to understand what the technologies they 'buy into' and deploy actually do, fleshing out their limits, reliability, robustness and potential bias. Named entity recognition may conflate two persons or organisations with the same name, or miss data on a person or organisation whose name was misspelt. Text classification is contingent upon whoever decides on the labels and on whoever sits down to do the actual labelling; missing relevant labels will result in missing potentially crucial information, while those who label the text may have different understandings of the labels, resulting in a mixed bag that in turn results in unreliable output. Text retrieval is contingent on how the information one seeks has been translated into machine-readable code; the translation can almost always be done in different ways, with different outcomes. The key insight here is that understanding this kind of 'legal search' must be based on a proper grasp of the kinds of design decisions that have to be made, as they will define and limit the output. This allows us to remove the halo of objectivity that often attaches to whatever computers come up with, without however resorting to high-level criticism uninformed by the granular details that define the concrete impact on legal protection and fundamental rights.

When lawyers conduct legal searches, they are focused on content, based on their legal background, whether in academia or legal practice. They learn while searching; they calibrate between the legal norms and the facts of the case, which they need to qualify in terms of the applicable legal framework. This calibration will oscillate between alternative intuitions, constrained by the findings of the search, building up to a reflective equilibrium that offers a justifiable decision/advice/evaluation of the case at hand and the concomitant interpretation of the legal norm. The value of the search is not merely the outcome but rather the entire 'operation' of seeking, testing hunches, comparing alternatives, reframing the facts, moving deeper or throwing the net wider, and consulting doctrine to see how the conglomerate of legislation and case law has been mapped and structured. All this is based on knowledge of and experience with the foundational principles that inform the law in a specific domain and within a specific jurisdiction. These principles will not necessarily scale to other domains or jurisdictions, unless we move to very high-level principles such as justice, legal certainty and purposiveness. These will, however, be concretised and compatibilised differently in different legal domains and jurisdictions. Respect for both the rule of law and democracy requires that we respect the situated nature of positive law.

Computing systems can offer 'legal search' as a service. Compared to a lawyer's legal search, what they may offer is two things: faster output and potentially more relevant output. Whether the output is obtained faster depends on whether the lawyer trusts the output or wishes to compare it to other legal search engines and to their own research via generic search engines and repositories of case law and legislation. Whether the output is more relevant depends on the reliability of the design choices made, on the expertise of the lawyer compared to the service offered and on the ability of the lawyer to interact with the system in a way that is conducive to getting more relevant output. What these systems cannot offer is the learning and framing process that is inherent in a lawyer's search, because this is what they aim to replace. While these search engines are becoming the gateways of legal search, it will be increasingly difficult to compare their output to what a lawyer would have come up with on their own. We are moving from close reading of the sources of law to distant reading (Moretti, 2013), mediated by computational systems that are informed by logic and statistics rather than human experience and judgement. The proliferation of legal text corpora, however, renders a human search of all relevant text impossible. This implies that at some point the use of legal search engines will be the default, though possibly in combination with generic search engines and repositories of legislation, case law and doctrine that may offer their own services for search. Law firms may be able to afford licences for several legal search engines (e.g. Westlaw and LexisNexis in the US), allowing their lawyers to test outcomes against each other, while still conducting other types of research. This could generate a new type of learning experience, as finding the relevant legal norm, case and doctrinal reasoning will become a contest informed by the skill of interacting with legal text via the interfaces of various legal search technologies. This implies a new type of distantiation between the lawyer and the legal text they need to mine for arguments.

3.3 Predictive Technologies: The Focus on Outcome

3.3.1 The relevance of anticipation in law

Legal certainty is a key objective of law and the rule of law. For people to act, they must have a reliable understanding of the consequences of their actions. They need to have a proper idea of how other persons, corporations or government bodies will respond to their actions, precisely because in real life we can never be sure. Legal certainty should, therefore, not be understood in a naïve way, as if it were either possible or desirable to close the future by fully determining it. The role of law is to establish legitimate expectations that coordinate interaction, based on the twin facts that laws can be enforced and that one can contest such enforcement in a court of law. When people speak of predicting the legal consequences of their actions, they can mean various things.
First, they may refer to a legal analysis that establishes what legal effect is attributed based on whether specific legal conditions apply. Second, they may refer to whether public administration, a contracting party or those suffering harm due to one's negligence will choose to act upon unlawful action or not. Third, they may consider whether a court would decide for or against them, in case they or another may file suit. Legal certainty cannot be reduced to the outcome of a court case, and since legal norms cannot interpret themselves, there is an inevitable element of uncertainty that defines the need for legal certainty. This uncertainty is what prevents petrification of the law, safeguarding the plasticity of legal norms without however succumbing to 'anything goes' or 'whatever those in power decide'. Prediction of what a court will decide concerns a qualitative probability, not the kind of quantification that fits a calculable risk or a computable future.

For a lawyer, predicting the outcome of a case is actually less interesting than the reasoning provided, because in a constitutional democracy the court must provide the reasons that inform its decision, and it is therefore the reasoning that will affect future case law. Without situating the reasoning in relation to previous decisions, legislation and doctrine, the outcome of a case does not offer relevant information. However, from a computer science perspective, predicting outcomes can be seen as an interesting exercise, both because of the public availability of large legal text corpora to train NLP models and because this kind of prediction is a core feature of these models. For instance, the famous pre-trained NLP models built by OpenAI, such as GPT-3, basically perform what some have called 'next-word guessing', purely based on sophisticated statistical correlations. Also, the claimed capacity of these systems to actually predict the outcomes of court cases is what could be termed 'click bait' in the age of social media news feeds, attracting a lot of attention. The idea that a system that basically understands nothing and has no agency can nevertheless predict a court's decision speaks to the imagination and may feel like magic.

3.3.2 Technologies used to predict judgements: claimed functionality and intended purpose

The typology involves various legal technologies that claim to offer a prediction of judgements based on machine learning, notably a curated training dataset, CAIL2018 (Chinese AI and Law dataset); two applications, JuriSays and Moonlit; and two papers, 'Exploring the Use of Text Classification in the Legal Domain' by Sulea et al. (2017) and 'Predicting Brazilian Court Decisions' by Lage-Freitas et al. (2019). If we take the example of Moonlit, it is briefly described as 'an online legal research platform powered by artificial intelligence, created for legal professionals, aiming to break down language barriers and revolutionise and democratise legal research'. Moonlit is a platform developed in-house by Deloitte's Dutch Indirect Tax team. It actually offers a whole range of services in the context of legal research, not just prediction; that is why we also encountered it in the previous section. The claimed essential features are predicting case law outcomes (with a focus on Dutch case law); providing document analysis, such as finding cases and other authorities on the same facts and legal issues in the legal documents uploaded by the user; creating summaries of the case law documents; providing access to laws, case law and other documents within the ECJ, the Netherlands, Sweden and Denmark through a document search engine; and providing a community question platform.

The claimed rationale and benefits are to predict the future through case law outcome prediction; to improve understanding with feature analysis and provide a way to prioritise cases; to improve consistency in legal decisions; to provide an indication of the value of a document through case summary extraction; to use case law of other jurisdictions with the document search engine; and to improve access to justice in Europe.

Grounding computational ‘law’ in legal education and legal training  115 The claimed design choices are that the documents in the database are linked and can be translated into a chosen target language; that no distinction is made between different types of tax data for generalisation of the system; that there are Boolean functions for the search bar and the possibility to apply search filters in the document search engine and that the summarisation tool creates a summary by extracting the most important sentences in the document.

3.3.3 Technologies used to predict judgements: are these claims substantiated?

The design choices are explained in more detail in terms of the 'substantiation of the claims'. As should be clear from the claimed features, benefits and design choices, the platform offers a range of services, only one of which is the prediction of judgements. The latter is discussed in terms of the training dataset, the models deployed, the labels used and the performance. The relevant technologies are again named entity recognition, text classification and text retrieval, as described above.

Moonlit is then described in terms of the training data, the model and the labels that are said to inform the predictions. The dataset of Dutch tax law cases, on which the model was trained, was taken from the case law repository Rechtspraak.nl, and the model was tested on other tax law cases stored in the same repository. The prediction was claimed to be correct in 70% of the cases. The data were 'cleaned' to make sure that the data used to predict the decision did not contain references to the said decisions. As there is no information on how the 'cleaning' was done, there may be an issue of 'data leakage', meaning that the data actually contain information about the outcome (for instance because the court's description of the facts is geared to a certain outcome). The model is described as a text classifier that finds patterns within the phrasing of the facts by the court, distinguishing phrases that are more characteristic of one outcome than another. As the model only uses facts extracted from published judgements, this would imply data leakage, meaning that Moonlit does not really predict but rather identifies the outcome of the court. The labels used to train the model are described as content management system (CMS) text extraction methods informed by subject matter experts, which implies textual information retrieval of predefined keywords rather than case-based annotation by experts. Moonlit does not, however, specify how accurate the labels are. Finally, the performance is linked to a website that shows the performance of the model over time: https://tax-i-d.deloitte.nl/#legal-predictions.

3.3.4 Critical evaluation of this type of legal technology

The key point here is the observation that performance is based on training data that contain published judgements, which makes 'data leakage' highly likely. Data leakage means that even though the outcome of the case is removed from the training data, the remaining data probably retain hidden indications of the outcome. In other words, the outcome the model aims to predict has leaked into the data on which it is trained, which results in the model classifying the outcome rather than predicting it (a minimal sketch at the end of this subsection illustrates the mechanism). Such classification can locate relevant predictor features (e.g. specific factual constellations) but it cannot actually predict the outcome, as this was already known. The problem of data leakage has recently been highlighted as a key issue of machine learning research in Kapoor and Narayanan's (2022) paper on 'Leakage and the Reproducibility Crisis in ML-based Science'. Based on a survey of the relevant literature, the authors detected '17 fields where errors have been found, collectively affecting 329 papers and in some cases leading to wildly overoptimistic conclusions'. They also note that '[b]ased on our survey, we present a fine-grained taxonomy of eight types of leakage that range from textbook errors to open research problems'. In fact, the issue had already been described in a paper by Medvedeva et al. (2022) on 'Rethinking the field of automatic prediction of court decisions', which contains an extensive overview of the domain of prediction of judgment, concluding that most papers engage in 'outcome identification' and 'predictor identification' but not in 'outcome prediction'. The authors of the latter paper note that their own system (JuriSays), which uses training data that do not involve data leakage, has a lower performance. In a sense, I would summarise the issue by concluding that playing by the rules comes at a cost in terms of performance, whereas cheating delivers great numbers in terms of claimed performance. This is one of the major concerns for the integration of legal technologies into legal practice; 'buying into' PR can be a waste of good money, or building on incorrect information, or both. Insofar as such incorrect information is used to prioritise cases in the context of case load management or legal search, there will be an effect on legal effect. This is clearly undesirable and very problematic from the perspective of the rule of law.

On top of that, Medvedeva et al. (2020) argue that predictive technologies should not be used to make legal decisions. The reason is that they can be reverse-engineered and gamed in ways that will exacerbate current problems, while making it very hard to solve the problem of the complex dynamics that are created by the combination of prediction and reverse engineering. The authors also saliently remind us that court decisions should not be defined by past decisions; the task of the court is to decide a new case and thus to develop a new decision (at p. 4):

One could explain this in computational terms, by saying that machine learning systems cannot train on future data, whereas human beings can imagine new situations, based on the rich complexities of real-life experience. Also, machine learning systems should take into account both concept drift and data drift, though they can only do this by way of extrapolation, not by way of imagination. Meanwhile, imagination should not be confused with the kind of confabulation that NLP models are capable of, notably those involved in language generation, which is basically ‘next-word-guessing’ based on statistical ‘predictions’. On the difference between data-driven ‘experience’ and real-life experience see the work of Cantwell Smith (2019), who writes about the difference between ‘reckoning and judgment’, further developed for the context of moral and legal judgement by Chirimuuta (2023) in her article on ‘Rules, Judgment and Mechanisation’. If GPT4 is passing the Bar Exam (Katz et al., 2023), it still does not understand the law, let alone exercie judgement, though the lure of deploying it as a seemingly efficient tool may become irresistible. The impact of such usage in terms of the deskilling of legal professionals and the effects of scaling the past can hardly be overestimated, though it may indeed save work and could invite creative legal prompt engineering. The latter is key to this type of technology (reinforcement learning with human feedback or RLHF); it regards both the instructions (prompts) built into the system by the developers and the questions (prompts) asked by the user of the system. On the risks of ‘prompt injection attacks’ that can easily subvert the output of this type of system, see Claburn (2023), who warns that such risk may be inherent in the technology and not necessarily solvable.

3.4 Representing, Drafting and Executing Legislation

3.4.1 The relevance of the formulation of legal norms

Preparing and articulating legislation is a key part of constitutional democracies. It serves both democratic participation (via representation and deliberation) and the rule of law (via the need to stay within the scope of higher legal norms such as the Constitution and International Human Rights Law). Whenever the legislature assigns public administration the task of drafting legislation, a long process and a detailed procedure are initiated. This procedure instantiates a series of checks and balances that relate to the legal powers to legislate, to delegate and mandate rule-making (based on constitutional limitations), to define criminal offences (presumption of innocence), to impose legal norms (legality principle) and other legal conditions that apply to legislative Acts of Parliament. This should remind us of the fact that law is a complex, multidimensional and dynamic architecture of legal norms, with a series of interacting levels that are both hierarchical and lateral, where each change ripples through the system, reconfiguring the overall architecture in minor or major ways.

To draft legislation, many unwritten legal principles must be taken into account, at various levels of abstraction. As described in Sections 2.1 and 2.2, these principles can play out at the highest level of abstraction, in specific jurisdictions and in dedicated legal domains, but they are also at play in the foundational concepts of law, such as 'legal effect', 'legal rights' or 'positive law'. These principles bind due to their constitutive nature, not because they have been codified in a specific piece of legislation (even though they are often concretised in particular statutes and/or case law). At the highest level of abstraction, they will not only be generic but also 'essentially contested concepts' (Gallie, 1956). This is not a drawback but key to the adaptive nature of natural language (Hildebrandt, 2020e). It means that these concepts cannot be defined without reducing and freezing their constitutive impact. Formalising and disambiguating such principles is not only impossible; any attempt would also denaturalise their formative role and reduce the protection afforded by modern positive law. Obviously, this does not imply that anything goes, but rather that their meaning depends to a large extent on tacit knowledge that must be understood in relation to the checks and balances of the rule of law.

The consequence of the complex, multidimensional and interactive nature of the legal system is that a particular legal norm can be expressed in natural language though it should nevertheless not be identified with its written expression. This is the case because all other relevant legal norms that apply will limit, extend or alter the scope and meaning of the norm, while the application of legal norms similarly requires an interpretation that may slightly reconfigure, extend, limit or even transform its meaning. Natural language allows for this, because it affords both the stabilisation and the adaptation of the meaning of the norm in the light of new circumstances (Hildebrandt, 2020e).
This, in turn, marks the hazardous nature of attempts to articulate legal norms in computer code, as this would require a specification not only of each legal condition that is explicitly formulated in the written rule, but also of each legal condition that follows from other legal norms in the same piece of legislation or in adjacent legislation within the same legal domain, plus relevant legal norms of constitutional and human rights law, including unwritten fundamental principles of law. All this goes both for drafting (new) legal norms in computer code and for representing (already enacted) legal norms in computer code. The goal of the latter may be to enable more efficient ways of 'searching' legislation (as already hinted at under legal search above) but also to build applications that execute the code to enable automated decision-making (ADM).

The latter concerns executable code, allowing for the automation of the law itself, which raises flags about, for instance, whether automated execution is equivalent to the making of legal decisions or merely a tool for making legal decisions, and whether such execution can be disobeyed. As Brownsword (2016) argues, this type of ADM is more about technological management than law, because a 'law' that cannot be disobeyed does not count as law. In other words, it is not a matter of law but one of administration. As with all acts of administration, these 'digital acts' must be performed within the bounds of the law, based on the legality principle. This also means they must be contestable in a court of law.

3.4.2 Articulating legal norms in computer code: claimed functionality and intended purpose

The Typology of legal techs involves various legal technologies that are claimed to offer tools for representing existing legal norms in the form of a markup language or a domain-specific programming language; tools for drafting legislation in a way that affords easy transposition into computer code; and tools that allow existing or new legislation to be translated into executable code. Examples of the first are Akoma Ntoso and Catala, examples of the second are Akoma Ntoso and LEOS, and examples of the third are Catala, Blawx and the use of Prolog, as presented in the seminal paper on 'the British Nationality Act as a Logic Program' by Sergot et al. (1986). I will discuss a legal technology that allows for both the representation of existing legal norms and the drafting of new legal norms, namely Akoma Ntoso (AKN), which is briefly described as:

an open standard markup language that defines a method for 'tagging' the structure and content of legal documents such that they are machine-readable. The goal is to enable other systems to process such documents in more sophisticated ways than is possible with standard word processor files.

The claimed essential features are that it 'makes the structure and meaning of legal documents machine-readable; facilitates interchange of documents across institutions and jurisdictions; allows precise citation and cross-referencing of documents; and allows for identification of the content of the law at a given point in time'. The claimed rationale and benefits are that AKN 'enables greater access to legal information; provides a common model for parliamentary, legislative and judiciary documents; and that it facilitates time-tracking and archival of documents'. The claimed design choices are that AKN is 'an open standard, usable by anyone; that it combines the flexibility of natural language drafting with strict requirements for tagging documents; that AKN's core ontology (the "General Schema") includes elements of most kinds of legal documents out-of-the-box; that the General Schema is applied to all documents and can be extended with "custom schemas" that include new concepts and metadata; and that it has been designed to be interoperable across jurisdictions'.

3.4.3 Articulating legal norms in computer code: are these claims substantiated?

Under the heading of the potential substantiation of these claims, we find an extensive listing of the OASIS standard documents, papers and chapters and a link to the code repository. The Typology notes that:

Grounding computational ‘law’ in legal education and legal training  119 says it will do is a separate question. The ‘proof’ will come from what is done with AKN: the tools that are built with it, and the subsequent affordances that the AKN representations provide.

The Typology reminds us that AKN is ‘an open standard for data formats’, which is distinct from the tools used to create documents that meet that standard. Such ‘editors’ will use AKN as a central component of their functionality, the editor being itself a Standalone system or Application (a prime example being LEOS), or perhaps another component (plugin) used in an application such as Word …

The Typology also notes that 'the affordances of these applications differ from AKN itself; for example, a "converter" tool will automatically infer the structure of a standard legislative document before converting it to AKN'. AKN was first developed in the context of the UN Department of Economic and Social Affairs, intended for African Parliaments, and is now further developed at the University of Bologna. It is key to, for instance, the 'legislation open editing software' LEOS, which is being developed by the European Commission to make EU legislation machine-readable in a way that enables those drafting legislation to easily collaborate, to speedily check across different legislative domains and to prevent inconsistencies within and between EU legislative instruments.

3.4.4 Critical evaluation of technologies to represent, draft or execute law in computer code

AKN affords what is now advocated under the heading of 'rules as code' (RaC), a method for drafting legislation simultaneously in both natural language and computer code. This is not merely a matter of translating natural language into code (via a domain-specific language such as Catala), but rather aims to force drafters to consider unintended ambiguities and contradictions that will surface in the process, and to help drafters resolve such issues while drafting the natural language version of the law (Waddington, 2020). It is important that lawyers who work with these types of legal technologies, or who are faced with legislation and regulations drafted with the help of these tools, understand the process of 'making' such law, the design decisions involved and the effect that these decisions may have on the 'mode of existence' of law. This regards the type of normativity that is inherent in modern positive law (contingent upon the affordances of natural language and printed text) as compared to the affordances of 'law as code'. What if the meaning of legal norms articulated in natural language differs from the more restricted meaning encoded in software? What if the interpretation chosen by the coders on behalf of the legislature differs from the interpretation chosen by the court, in the case of contestation? Even raising these kinds of questions assumes a sufficient understanding of how the tools for the computational representation, drafting and execution of the law actually function.
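A toy rendering of the same provision as executable code shows why these questions matter. It is loosely in the spirit of Sergot et al.'s Prolog program, but every type, name and simplification below is my own illustration rather than their actual encoding.

```python
# Toy 'rules as code' sketch of s.1(1) British Nationality Act 1981.
# The drastic simplification is the point: every boolean below freezes
# an interpretive choice (e.g. what 'settled' means) that a court could
# decide differently in a concrete case.
from dataclasses import dataclass
from datetime import date

COMMENCEMENT = date(1983, 1, 1)  # commencement of the 1981 Act

@dataclass
class Person:
    born_in_uk: bool
    date_of_birth: date
    parent_is_citizen: bool
    parent_is_settled: bool

def acquires_citizenship_s1_1(p: Person) -> bool:
    """Born in the UK after commencement to a parent who is a British
    citizen or settled in the UK (all terms radically simplified)."""
    return (p.born_in_uk
            and p.date_of_birth >= COMMENCEMENT
            and (p.parent_is_citizen or p.parent_is_settled))

print(acquires_citizenship_s1_1(
    Person(born_in_uk=True, date_of_birth=date(1990, 5, 1),
           parent_is_citizen=False, parent_is_settled=True)))  # True
```

Encoding 'settled' as a boolean flattens a contested legal qualification into a data entry decision, shifting interpretive power from courts to coders; this is precisely the loss of adaptive meaning that the questions above are meant to surface.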

4. INTEGRATING COMPUTATIONAL 'LAW' INTO LEGAL CURRICULA AND PROFESSIONAL LEGAL TRAINING

In this chapter, I have traced the affordances of natural language and text as key to the rise of modern positive law and the rule of law, especially with regard to the contestability of legal decision-making. The proliferation of binding legal text has triggered a complex legal architecture that demands careful scrutiny, in-depth study and a continuous process of reconstruction. The academic study of the law is geared towards this, developing argumentative skills that integrate respect for the democratic legislature with due regard to the thread of legal interpretation that is woven by the courts. This involves a practice of close reading of legal text and dedicated and complex interpretation and argumentation. It also involves an iterative process of translating legal complexity into the kind of language that legal subjects can understand, live up to and challenge. Next to paying keen attention to the affordances of the technologies of the word (Ong, 1982), notably the printing press (Eisenstein, 2005), I have provided the reader with a taste of a Typology that maps and compares different types of computational technologies, while researching their assumptions and implications for law and the rule of law.

In this concluding section, I start from the position that lawyers should not allow other disciplines to colonise law's methodological integrity and foundational assumptions. This implies keen attention to legal hermeneutics as key to the legal method. A legal education should offer training in typically legal skills, such as adversarial thinking and the ability to argue for a dedicated position while anticipating potential objections, and an in-depth study of the complexity of legal frameworks in keeping with democratic and rule of law requirements. Based on these imperatives, I propose a set of learning objectives regarding the integration of legal technologies in the curriculum of law schools and a dedicated set of teaching practices to achieve a critical and balanced approach to computational technologies, both in legal practice and legal research.

4.1 Learning Objectives

In her seminal The Language of Law School: Learning to 'Think Like a Lawyer', Mertz (2007) studied the epistemology embedded in legal language, prevalent in US law schools. Based on a linguistic anthropology that combines qualitative and quantitative research methods, she traced the way that law students are primed, highlighting that this is not a matter of learning to implement a neutral and objective method but rather of developing a particular mindset that reorients the way students approach written text. She writes (at 6):

The legal abstractions that Metz refers to may be situated in the foundational legal concepts that are constitutive of modern positive law, as discussed above in Section 2.2, notably the concepts of legal subjectivity, legal power, individual rights, legal norms, legal obligation and jurisdiction. As the work was written in 2007, it does not confront the impact of computational methods in legal curricula, but it seems that the ‘double edge’ that she describes is pertinent in the context of a shift from text- to code- and data-driven methodologies in law. In the United States, the introduction of machine learning to predict legal judgements is often framed as a welcome

Grounding computational ‘law’ in legal education and legal training  121 contribution to the efficiency of the ‘legal services market’ (Katz, 2012, 2014). The latter is a clear indication that law is seen as a commodity to be bought and sold in an economic market, rather than the foundational framework that enables and shapes the flourishing of economic markets. The ‘capitalist’ underpinnings of such framing are conducive to taking a calculative approach to legal services and legal education, which is in turn conducive to a computational approach. Such capitalist underpinnings may erase the ideological assumptions of the legal abstractions that Mertz referred to, and allow their instrumentalisation by powerful players to escape jurisdiction, liability, taxes and more generally the force of law. Taking the point of departure of Section 2 and the detailed analysis of three main types of legal technologies in Section 3, I propose a set of dedicated learning objectives for the introduction of computational methods in law schools and training in professional legal skills. A proper introduction to computational methodologies will, however, depend on a proper introduction to ‘traditional’ legal methodology. Therefore, I propose two sets of learning objectives: one set focused on text-driven methods and one focused on data- and code-driven methods. As to the study of modern positive law, the curriculum should ensure that students are capable of (1) conducting in-depth legal research, based on appropriate knowledge of statutory law, case law, doctrine, fundamental principles law and international law, involving (2) the close reading of binding legal text, notably applicable legal norms enacted by the legislature, courts and international bodies, (3) integrating such close reading with relevant doctrinal writings, taking note of the argumentative nature of the study and the practice of law, (4) tracing the relationship between rule of law requirements and democratic participation on the one hand, and the interpretation and application of legal norms on the other, (5) resolving new cases by taking a well-argued position, based on detailed knowledge of the relevant sources of the law, anticipating counter-arguments and potential disagreement, (6) considering the theoretical assumptions of hermeneutic approaches to law, notably concerning concepts such as ‘the open texture of legal concepts’ and ‘the role of discretion in legal decision-making’ and (7) explaining how a justification of legal decisions differs from causal, moral or psychological explanations. When students or practitioners of law have mastered the traditional legal methodology, the following learning objectives are key to a critical understanding of computational methodology. We must ensure that students are capable of (1) distinguishing between different types of legal technologies, based on their claimed functionality and claimed intended purposes, notably those of ‘legal search’, ‘prediction of judgment’ and ‘the representation, drafting and/ or execution of legal norms by computer code’; (2) distinguishing between different types of computational technologies and their relationship with branches of computer science and software engineering (e.g. 
being able to discriminate between ‘domain-specific languages’, ‘markup language’, ‘declarative and logic programming’ in the case of code-driven technologies, and between different types of ‘machine learning’, the use of ‘features’, ‘support vector machines’, ‘naïve Bayes’, ‘transformers’ and ‘elastic search’ in the case of data-driven technologies), (3) understanding what claims are made on behalf of the functionality or intended purpose of different types of legal technologies, while detecting how such claims relate to currently available computational technologies, having a proper grasp of both the capacity and the limitations of these technologies, identifying advertising, marketing or PR as such, (4) raising a number of detailed questions as to the extent to which claims have been or can be substantiated, e.g. by way of ‘verification’, ‘empirical testing’, ‘certification’ and/or ‘codes of conduct’, (5) knowing where to find answers to these questions, what kind of expertise to

122  Research handbook on law and technology involve and how to prevent capture by those who stand to gain from belief in the trustworthiness, efficiency and desirability of a specific legal technology, or type of technology, (6) developing a reliable and argued assessment of specific legal technologies and specific types of legal technologies and, finally, (7) explaining what contribution different types of legal technologies could make to the study and the practice of law, in the context of constitutional democracies. 4.2 Teaching Approaches and Professional Training The study of law has been the study of legal text. This goes for academic education as well as professional training. To develop the academic and professional skills of a lawyer, students need to practice their ability to read, analyse and argue, to situate legislation and case law in relation to the case at hand, to foresee relevant counter-arguments and to trace the implications of different interpretations of the same sources of law. The advent of GPT4 will require new approaches to teaching the law. The study of law is itself a practice. It can be done in a class setting that is adversarial (for instance, deploying the Socratic method used in US law schools), priming for an inductive approach, and learning to infer arguments on a case-by-case basis (Gersen, 2016). It can also be done in a lecture-hall setting that prioritises professorial monologues (for instance, deploying the subsumptive approach used in German law schools), priming for a deductive approach, learning to apply statutory legal rules to a constellation of facts (Schultz, 2011). One could guess that those prioritising case law in legal education may focus on legal practice, producing legal technicians, whereas those prioritising legislation in legal education may focus on the position of the court, with more of an interest in theoretical issues. Many hybrids exist, challenging students to combine deductive reasoning with inductive argumentation, while anticipating the counter-arguments that may be leveraged against one’s position, thereby creating the habit of working with a rich diversity of potential interpretations and reasonings (Jakab, 2007). GPT4 can provide eerily fluent output because of its seemingly correct ‘next-word-guessing’, but even if such output were right in some sense, taking note of the output cannot replace engagement in law as an argumentative practice. Teaching computer science or software engineering requires an entirely different approach, closer to that of teaching mathematics and engineering. Within those domains, one may be tempted to assume that knowledge is universal, neutral and objective (in the context of computer science) or to the idea that one should start from concrete problems without the illusion of finding the rule that defines them (in the context of software engineering). Even if computer scientists and software engineers do not give in to such temptations, lawyers may want to believe that they apply, because it allows them to integrate the findings of these disciplines without much ado. Such blind trust in the homogenous character of another discipline should, however, be rejected as unscientific and not conducive to a proper understanding of the utility and the risks of integrating findings based on assumptions very different from one’s own. 
As seen in the articulation of learning objectives in the previous section, lawyers must learn to raise types of questions other than those inherent in the process of developing, maintaining, verifying or testing software. Lawyers' questions sit at a meta-level, soliciting dedicated expertise in computer science and software engineering with the aim of sorting out if and how a particular (type of) system affects law and the rule of law, where it could contribute and how it might threaten transparency, contestability and due process.

Teaching and professional training approaches should avoid merely inviting 'a computer scientist' or 'a software engineer' to give a course on computer science, programming or 'AI', let alone inviting developers with a commercial interest in the tools they explain. Teaching and training in computational law should avoid uncritically embracing the findings of non-legal experts advocating technical solutions for legal problems. Above all, they should reject the suggestion that the deployment of computational technologies necessarily saves time and effort.

Instead, teaching approaches could entail the development of a basic course that provides an overview of relevant types of legal technologies for legal search, prediction of judgement and legal argumentation, and for drafting, representing or executing legal norms, adding a hands-on discussion of the relevant computational techniques deployed, such as domain-specific languages, logic programming, syntax and semantics, compilation and interpretation, verification and testing, and machine learning, neural networks, performance metrics, features and attention.13 The basic course should be followed up with an advanced course that involves critical interaction with these systems, in the form of case studies involving concrete legal technologies that require students to identify claims about functionality and intended purpose, asking them to assess the extent to which these claims can be substantiated. Such a course could involve an exam in the form of a paper that evaluates the potential adverse impact of a concrete legal technology on contestability, legality and fundamental rights. To make this real, such courses should involve computer scientists and software engineers capable of developing an internal critique of computer science and engineering, willing to explain what specific computational techniques can and cannot offer, thus helping lawyers cut through the clutter, discriminating between PR and reliable functionality.

One of the most important contributions of these types of courses would be to raise awareness that other disciplines – just like the study of law – host controversial or even contradictory findings, methods and assumptions. This is the first thing that lawyers must learn: legal technologies will not render law more objective, effective or acceptable. In line with that, legal education should pay keen attention to question zero, which is whether a specific type of legal technology should be introduced or even developed in the first place. What problem does it solve? What problem(s) will not be solved and what new or other problems could it create?

4.3 Grounding Computational 'Law' in Legal Education and Legal Practice

The learning objectives and teaching approaches proposed above are based on new ways of developing the study and the practice of law. They should ground the use of legal technologies, such as those described in Section 3, in the constructive normative commitment of law and the rule of law, as described in Section 2. The introduction of the Typology of legal technologies in Section 3, and more precisely the investigations under the headings of 'claimed functionality' and 'substantiation of claims', offer a first impression of a new hermeneutics for computational 'law'.
In upcoming work, we will discuss the philosophical underpinnings of such a hermeneutics, based on the work of Don Ihde and others, and we will further explore the notion of proxies in the context of computational technologies, in part building on my investigation of the role of proxies as mediation between human understanding and machine interpretation (Hildebrandt, 2021, 2022, 2023a).

In the era of modern positive law, lawyers are the guardians of the legal architecture of human societies, enabling practical and effective legal protection of individual persons in the context of a political system of checks and balances that institutes countervailing powers against dominant economic, military or political power. We cannot take for granted that computational legal technologies have the affordances offered by written law in the era of the printing press. This chapter invites and urges lawyers to explore and develop an understanding of what computational legal technologies can and cannot do, by critically assessing them in sufficient detail. Taking technologies seriously, as neither good nor bad but never neutral (Kranzberg, 1986), requires keen attention to the effects they may have on the protection that law and the rule of law offer.

13 During the Spring of 2022, Laurence Diver and I developed and taught a course called 'Critical Perspectives on Computational Law', see https://www.ims.tau.ac.il/tal/syllabus/Syllabus_L.aspx?course=1411763050&year=2021. The content of the syllabus can be found here: https://publications.cohubicol.com/typology/teaching-and-training/master-course-syllabus.

REFERENCES

Achterberg, N. (1982). Die Rechtsordnung als Rechtsverhältnisordnung: Grundlegung der Rechtsverhältnistheorie. Berlin: Duncker & Humblot.
Achterhuis, H. (2001). American Philosophy of Technology: The Empirical Turn. Indianapolis: Indiana University Press.
Anscombe, G.E.M. (1958). On Brute Facts. Analysis, 18(3), 69–72. Retrieved from https://doi.org/10.2307/3326788.
Austin, J.L. (1962). How to Do Things with Words. Oxford: Oxford University Press. Retrieved from http://doi.wiley.com/10.1111/j.1468-0149.1963.tb00768.x.
Austin, J. & Rumble, W. (eds.) (1995). The Province of Jurisprudence Determined. Cambridge: Cambridge University Press.
Baude, W. & Doerfler, R.D. (2017). The (Not So) Plain Meaning Rule. The University of Chicago Law Review, 84(2), 539–566. Retrieved from https://lawreview.uchicago.edu/publication/not-so-plain-meaning-rule.
Bender, E.M., Gebru, T., McMillan-Major, A. & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623). Retrieved from https://doi.org/10.1145/3442188.3445922.
Berman, H. (1983). Law and Revolution: The Formation of the Western Legal Tradition. Cambridge: Harvard University Press.
Binding, K. (1890). Die Normen und ihre Übertretung: Eine Untersuchung über die rechtmässige Handlung und die Arten des Delikts. Erster Band, Normen und Strafgesetze. Leipzig: Verlag von Wilhelm Engelmann.
Bommarito II, M. & Katz, D.M. (2022). GPT Takes the Bar Exam. arXiv:2212.14402. Retrieved from https://doi.org/10.48550/arXiv.2212.14402.
Brownsword, R. (2016). Technological Management and the Rule of Law. Law, Innovation and Technology, 8(1), 100–140. Retrieved from https://doi.org/10.1080/17579961.2016.1161891.
Burdett, M.S. (2017). Eschatology and the Technological Future (1st ed.). London: Routledge.
Chalkidis, I., Androutsopoulos, I. & Aletras, N. (2019). Neural Legal Judgment Prediction in English. arXiv:1906.02059 [cs]. Retrieved from http://arxiv.org/abs/1906.02059.
Chirimuuta, M. (2023). Rules, Judgment and Mechanisation. Journal of Cross-Disciplinary Research in Computational Law, 1(3), 1–25. Retrieved from https://journalcrcl.org/crcl/article/view/22.
Claburn, T. (2023, April 26). Why It's Hard to Defend Against AI Prompt Injection Attacks. The Register. Retrieved from https://www.theregister.com/2023/04/26/simon_willison_prompt_injection/.
Cohen, J.E. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. New York: Oxford University Press.
Diver, L. (2022). Digisprudence: Code as Law Rebooted. Edinburgh: Edinburgh University Press.
Diver, L., Duarte, T., Gori, G., van den Hoven, E. & Hildebrandt, M. (2021). Text-driven Normativity. Retrieved from https://publications.cohubicol.com/working-papers/text-driven-normativity/.
Dworkin, R. (1991). Law's Empire. London: Fontana.
Eisenstein, E. (2005). The Printing Revolution in Early Modern Europe. Cambridge: Cambridge University Press.
Eisenstein, E.L. (2012). The Printing Revolution in Early Modern Europe (2nd ed.). Cambridge: Cambridge University Press. Retrieved from https://doi.org/10.1017/CBO9781139197038.
Gallie, W.B. (1956). Essentially Contested Concepts. Proceedings of the Aristotelian Society, 56, 167–198.
Gersen, J.S. (2016). The Socratic Method in the Age of Trauma. Harvard Law Review, 130, 2320. Retrieved from https://heinonline.org/HOL/Page?handle=hein.journals/hlr130&id=2356&div=&collection=.
Gibson, J. (1986). The Ecological Approach to Visual Perception. Hillsdale: Lawrence Erlbaum Associates.
Glenn, P. (2014). Legal Traditions of the World: Sustainable Diversity in Law. Oxford: Oxford University Press.
Goody, J. (1986). The Logic of Writing and the Organization of Society. Cambridge: Cambridge University Press.
Habermas, J. (1996). Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy. Cambridge: MIT Press.
Hart, H.L.A. (1994a). Law as the Union of Primary and Secondary Rules. In The Concept of Law (2nd ed.). Oxford: Clarendon Press.
Hart, H.L.A. (1994b). The Concept of Law. Oxford: Clarendon Press.
Helbing, D. (2010). Quantitative Sociodynamics: Stochastic Methods and Models of Social Interaction Processes (2nd ed.). Berlin: Springer.
Hildebrandt, M. (2004). The Trial of the Expert: Épreuve and preuve. New Criminal Law Review, 10(1), 78–102.
Hildebrandt, M. (2008). Governance, Governmentality, Police and Justice: A New Science of Police. Buffalo Law Review, 56(2), 557–598.
Hildebrandt, M. (2015a). Radbruch's Rechtsstaat and Schmitt's Legal Order: Legalism, Legality, and the Institution of Law. Critical Analysis of Law, 2(1), 42–63.
Hildebrandt, M. (2015b). Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology. Cheltenham: Edward Elgar.
Hildebrandt, M. (2016). Law as Information in the Era of Data-Driven Agency. The Modern Law Review, 79(1), 1–30. Retrieved from https://doi.org/10.1111/1468-2230.12165.
Hildebrandt, M. (2018). Law as Computation in the Era of Artificial Legal Intelligence: Speaking Law to the Power of Statistics. University of Toronto Law Journal. Retrieved from https://doi.org/10.3138/utlj.2017-0044.
Hildebrandt, M. (2020a). Domains of Law: Private, Public, and Criminal Law. In Law for Computer Scientists and Other Folk. Oxford: Oxford University Press. Retrieved from https://doi.org/10.1093/oso/9780198860877.003.0003.
Hildebrandt, M. (2020b). International and Supranational Law. In Law for Computer Scientists and Other Folk. Oxford: Oxford University Press. Retrieved from https://doi.org/10.1093/oso/9780198860877.003.0004.
Hildebrandt, M. (2020c). Law, Democracy, and the Rule of Law. In Law for Computer Scientists and Other Folk. Oxford: Oxford University Press. Retrieved from https://doi.org/10.1093/oso/9780198860877.003.0002.
Hildebrandt, M. (2020d). Law for Computer Scientists and Other Folk. Oxford: Oxford University Press. Retrieved from https://global.oup.com/academic/product/law-for-computer-scientists-and-other-folk-9780198860884?cc=be&lang=en&.
Hildebrandt, M. (2020e). The Adaptive Nature of Text-Driven Law. Journal of Cross-Disciplinary Research in Computational Law. Retrieved from https://journalcrcl.org/crcl/article/view/2.
Hildebrandt, M. (2021). The Issue of Proxies, and Why EU Law Matters for Recommender Systems. SocArXiv. Retrieved from https://doi.org/10.31235/osf.io/45x67.
Hildebrandt, M. (2023a). Ground Truthing in the European Health Data Space. Proceedings of the 16th International Joint Conference on Biomedical Engineering Systems and Technologies, BIOSTEC 2023. Retrieved from https://doi.org/10.5220/0000171500003414.
https://global​ .oup​ .com​ /academic​ / product​ / law​ -for​ - computer​ - scientists​ - and​ - other​ -folk​ -9780198860884​?cc​=be​&lang​=en&. Hildebrandt, M. (2020e). The Adaptive Nature of Text-Driven Law. Journal of Cross-Disciplinary Research in Computational Law. Retrieved from https://journalcrcl​.org​/crcl​/article​/view​/2. Hildebrandt, M. (2021). The Issue of Proxies, and Why EU Law Matters for Recommender Systems. SocArXiv. Retrieved from https://doi​.org​/10​.31235​/osf​.io​/45x67. Hildebrandt, M. (2023a). Ground Truthing in the European Health Data Space. Proceedings of the 16th International Joint Conference on Biomedical Engineering Systems and Technologies. BIOSTEC 2023. Retrieved from https://doi​.org​/10​.5220​/0000171500003414.

126  Research handbook on law and technology Hildebrandt, M. (2022). Qualification and Quantification. From Explanation to Explication. Sociologica, 16(3), 37–49. Retrieved from https://sociologica​.unibo​.it​/article​/view​/15845. Hildebrandt, M. (2023b). Text-Driven Jurisdiction in Cyberspace. In M. Ó Floinn, L. Farmer, J. Hörnle & D. Ormerod QC (Eds.), Transformations in Criminal Jurisdiction. Oxford: Hart Publishing. Retrieved from https://doi​.org​/10​.31219​/osf​.io​/jgs9n. Ihde, D. (1993). Philosophy of Technology: An Introduction (Vol. 1st). New York: Paragon House. Jakab, A. (2007). Dilemmas of Legal Education: A Comparative Overview. Journal of Legal Education, 57(2), 253–265. Retrieved from https://www​.jstor​.org​/stable​/42894024. Johns, F. (2022). International Law and the Provocations of the Digital: The 2021 Annual Kirby Lecture in International Law (SSRN Scholarly Paper No. 4131816). https://papers​.ssrn​.com​/abstract​= 4131816. Kahneman, D., Sibony, O. & Sunstein, C.R. (2021). Noise: A Flaw in Human Judgment. London: William Collins. Kapoor, S. & Narayanan, A. (2022). Leakage and the Reproducibility Crisis in ML-based Science. arXiv:2207.07048. Retrieved from https://doi​.org​/10​.48550​/arXiv​.2207​.07048. Katz, D.M. (2012). Quantitative Legal Prediction-Or-How I Learned to Stop Worrying and Start Preparing for the Data-Driven Future of the Legal Services Industry the 2012 Randolph W. Thrower Symposium: Innovation for the Modern Era: Law, Policy, and Legal Practice in a Changing World. Emory Law Journal, 62(4), 909–966. Retrieved from https://heinonline​.org​/ HOL​/ P​?h​=hein​.journals​ /emlj62​&i​=923. Katz, D.M. (2014). The MIT School of Law? A Perspective on Legal Education in the 21st Century. University of Illinois Law Review, 5, 1432–1472. Retrieved from https://illinoislawreview​.org​/print​ /volume​-2014 ​-issue​-5​/the​-mit​-school​-of​-law​-a​-perspective​-on​-legal​-education​-in​-the​-21st​-century​/ http:/​/papers​.ssrn​.com ​/abstract​=2513397. Katz, D. M., Bommarito, M. J., Gao, S., & Arredondo, P. (2023). GPT-4 Passes the Bar Exam (SSRN Scholarly Paper No. 4389233). Retrieved from https://doi​.org​/10​.2139​/ssrn​.4389233. Kranzberg, M. (1986). Technology and History: ‘Kranzberg’s Laws’. Technology and Culture, 27, 544–560. Lage-Freitas, A., Allende-Cid, H., Santana, O., & Oliveira-Lage, L. (2022). Predicting Brazilian court decisions. arXiv. Retrived from: https://doi​.org​/10​.48550​/arXiv​.1905​.10348. Lessig, L. (2006). Code Version 2.0. New York: Basic Books. Lévy, P. (1990). Les technologies de l’intelligence. L’avenir de la pensée à l’ère informatique. Paris: La Découverte. Lilkov, D. (2020). Made in China: Tackling Digital Authoritarianism. European View, 19(1), 110–110. Retrieved from https://doi​.org​/10​.1177​/1781685820920121. Livermore, M.A. (2019). Law as Data: Computation, Text, and the Future of Legal Analysis (D.N. Rockmore, Ed.). Santa Fe: SFI Press. MacCormick, N. (2007). Institutions of Law: An Essay in Legal Theory. Oxford: Oxford University Press. Mardziel, P. (2021). Drift in Machine Learning. Medium. Retrieved from https://towardsdatascience​ .com​/drift​-in​-machine​-learning​-e49df46803a. Medvedeva, M., Wieling, M. & Vols, M. (2020). The Danger of Reverse-Engineering of Automated Judicial Decision-Making Systems. ArXiv:2012.10301 [Cs]. Retrieved from http://arxiv​.org​/abs​/2012​ .10301. Medvedeva, M., Wieling, M. & Vols, M. (2022). Rethinking the Field of Automatic Prediction of Court Decisions. 
Artificial Intelligence and Law. Retrieved from https://doi​.org​/10​.1007​/s10506​-021​-09306​-3. Meinecke, F. (1998). Machiavellism: The Doctrine of Raison d’État and Its Place in Modern History. New Brunswick: Transaction Publishers. Mertz, E. (2007). The Language of Law School: Learning to ‘Think Like a Lawyer’. Oxford: Oxford University Press. Mirowski, P. & Plehwe, D. (Eds.). (2009). The Road from Mont Pelerin: The Making of the Neoliberal Thought Collective (1st edition). Cambridge: Harvard University Press. Moretti, F. (2013). Distant Reading (1 edition). London: Verso. Morgus, R. (2019). The Spread of Russia’s Digital Authoritarianism In R. Morgus, Artificial Intelligence, China, Russia, and the Global Order, pp. 89–97). Alabama: Air University Press. Retrieved from https://www​.jstor​.org​/stable​/resrep19585​.17.

Grounding computational ‘law’ in legal education and legal training  127 Ong, W. (1982). Orality and Literacy: The Technologizing of the Word. London: Methuen. Peirce, C.S. & Turrisi, P.A. (1997). Pragmatism as a Principle and Method of Right Thinking: The 1903 Harvard Lectures on Pragmatism. Albany: State University of New York Press. Pentland, A. (2014). Social Physics: How Good Ideas Spread—The Lessons from a New Science. New York: Penguin Press HC. Pistor, K. (2019). The Code of Capital: How the Law Creates Wealth and Inequality. Princeton: Princeton University Press. Radbruch, G. (2006). Five Minutes of Legal Philosophy (1945). Oxford Journal of Legal Studies, 26(1), 13–15. Retrieved from https://doi​.org​/10​.1093​/ojls​/gqi042. Radbruch, G. (2014). Legal Philosophy. In The Legal Philosophies of Lask, Radbruch and Dabin. Translated by Kurt Wilk. Introduction by Edwin W. Patterson (pp. 44–224). Cambridge: Harvard University Press. Ricoeur, P. (1973). The Model of the Text: Meaningful Action Considered as a Text. New Literary History, 5(1), 91–117. Sartor, G. (1995). Defeasibility in Legal Reasoning. In Z. Bankowski, I. White & U. Hahn (Eds.), Informatics and the Foundations of Legal Reasoning (pp. 119–157). Dordrecht: Springer Netherlands. Retrieved from https://doi​.org​/10​.1007​/978​-94​- 015​-8531​-6. Schultz, U. (2011). Legal Education in Germany – An Ever (Never?) Ending Story of Resistance to Change. Revista de Educación y Derecho, 4, Article 04. Retrieved from https://doi​.org​/10​.1344​/re​ &d​.v0i04​.2212. Searle, J.R. (2011). Speech Acts: An Essay in the Philosophy of Language (34th. print). Cambridge: Cambridge University Press. Sergot, M. J., Sadri, F., Kowalski, R. A., Kriwaczek, F., Hammond, P., & Cory, H. T. (1986). The British Nationality Act as a logic program. Communications of the ACM, 29(5), 370-386. Smith, B.C. (2019). The Promise of Artificial Intelligence: Reckoning and Judgment. Cambridge: The MIT Press. Solomon, P.H. (2015). Law and Courts in Authoritarian States. In J.D. Wright (Ed.), International Encyclopedia of the Social & Behavioral Sciences (2nd Edition, pp. 427–434). Amsterdam: Elsevier. Retrieved from https://doi​.org​/10​.1016​/ B978​- 0​- 08​- 097086​-8​.86160​-7. Stephensen Harwood. (2023). ChatGPT: Will It Pass Its Probation? [Law Firm]. Insights. Retrieved from https://www​.shlegal​.com​/insights​/chatgpt​-will​-it​-pass​-its​-probation​?19022023140044. Stern, R.E. & Liu, L.J. (2020). The Good Lawyer: State-Led Professional Socialization in Contemporary China. Law & Social Inquiry, 45(1), 226–248. Retrieved from https://doi​.org​/10​.1017​/ lsi​.2019​.55. Sulea, O. M., Zampieri, M., Malmasi, S., Vela, M., Dinu, L. P., & Van Genabith, J. (2017). Exploring the use of text classification in the legal domain. arXiv preprint. Retrieved from: https://arxiv​.org​/ abs​/1710​.09306. Taylor, C. (1995). To Follow a Rule. In C. Taylor (Ed.), Philosophical Arguments (pp. 165–181). Cambridge: Harvard University Press. van Dis, E.A.M., Bollen, J., Zuidema, W., van Rooij, R. & Bockting, C.L. (2023). ChatGPT: Five Priorities for Research. Nature, 614(7947), 224–226. Retrieved from https://doi​.org​/10​.1038​/d41586​-023​-00288​-7. Verbeek, P.-P. (2005). What Things Do. Philosophical Reflections on Technology, Agency and Design. University Park: Pennsylvania State University Press. Verstraete, M. (2018). The Stakes of Smart Contracts. Loyola University Chicago Law Journal, 50, 743. Waddington, M. (2020). Rules as Code. Law in Context. 
A Socio-Legal Journal, 37(1), 179–186. Retrieved from https://doi​.org​/10​.26826​/ law​-in​-context​.v37i1​.134. Waldron, J. (2006). The Rule of International Law. Harvard Journal of Law & Public Policy, 30(1), 15–30. Retrieved from http://webcache​.googleusercontent​.com​/search​?q​=cache​:MRyY7fx6JYQJ​:www​ .law​.harvard​.edu​/students​/orgs​/jlpp​/Vol30​_No1​_Waldrononline​.pdf+​&cd​=1​&hl​=nl​&ct​=clnk​&gl​=nl. Waldron, J. (2008). Concept and the Rule of Law, The. Georgia Law Review, 43(1), 1 Retrieved from http://heinonline​.org​/ HOL​/ Page​?handle​=hein​.journals​/geolr43​&id​=7​&div=​&collection=. Waldron, J. (2020). The Rule of Law. In E.N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2020). Stanford: Metaphysics Research Lab, Stanford University. Retrieved from https:// plato​.stanford​.edu​/archives​/sum2020​/entries​/rule​-of​-law/. Wittgenstein, L. & Anscombe, G.E.M. (2003). Philosophical investigations: The German text, with a revised English translation (Vol. 3rd). Oxford: Blackwell Pub.

8. Hype and cultural imaginary in law and technology
Lachlan Robb and Kieran Tranter1

1. INTRODUCTION
In November 1900, The Strand Magazine in London printed a remarkable story of 'An Electric Man' (Northrop, 1900). Infused with a Twain-ian sense of amazement at practical Yankee ingenuity, it reported a demonstration by inventor Mr Perew of a 7ft 'automaton' pulling a small cart around an arena, avoiding obstacles while its moving eyes seemed to scan the way ahead (Northrop, 1900, p. 589). It tells of how Perew had established the 'United States Automaton Company', that a cross-American promotional tour was planned, and that preparations were underway for mass production and global export (Northrop, 1900). The possibility of deploying the 'man of Titan' in the military and for transport was discussed (Northrop, 1900, p. 587). Referring to Nikola Tesla, who in 1898 had demonstrated his 'telautomaton' (Gray, 2018, p. 955), the story suggested that Perew had 'out Teslaed Tesla' (Northrop, 1900, p. 586). It was, of course, a sham. The 'man' was a puppet pushed by an engine in the cart and controlled by a hidden operator (Hoggett, 2009). It was one of several attempts to make a quick profit from the turn of the century's fascination with mechanisation and electricity (Abnet, 2020, p. 57): a manifestation of a familiar story of nurturing and exploiting the hype of 'new' technologies for personal gain.

Technology, Bernard Stiegler (1998) reminds us, is intimate with being human. Homo sapiens evolved through the interplay of tool use and cultural complexity. Most formal accounts of technology, however, locate it within a more limited human frame, as involving rational and applied knowledge in the making of systems and objects to meet human needs.2 Most critical accounts also attempt to render technological change rational: material technologies are consistently being revolutionised by restless capital.3 Ethnographic studies of technologists and studies of 'technological innovation' tell more expansive accounts of humans and technology (Burmaoglu, Sartenaer & Porter, 2019). The making and use of technology involves creativity and storytelling. Making and using technology is not purely rational but highly emotive. The locus where technologies are crafted, distributed, and used is the human imagination. There seems a disjunction, indeed possibly a rupture, between the formal conceptualisation of technology as rational and physical and the material and affective experience of living, buying, selling, and making with technologies.

Law—and particularly 'law and technology' or 'technology law'—has shared some of this disruption.4 Generally, when law engages with technology, the dominant approach is positivist and physical. Technology is a thing affecting humans. Law, in the sense of legislation, cases, state-centric regulation and standards, is instrumental. The discourse is inherently, and ironically, technical. It is about how to build human futures through the intertwining and interrelations of law and technology (Tranter, 2011, 2018). However, with this focus, what is often missed is the necessity of creativity and imagination to the dreaming of technological futures.

Robert Cover (1983) presented a fundamental challenge to positivist conceptions of law. For him, law ultimately created a nomos, a narrational universe that connected human communities and time (Peters, 2022). For law and technology, this is revealed in the storytelling about technological futures—an imagining of good (and bad) futures orientated on a speculative assessment of how a technology could change humans, human society, and the planet. There is a cultural imaginary of technological futures, of dreams and fears of technology, that law and technology scholars participate in and contribute to, particularly when, on the surface, what appears to be written are rational and structural analyses of rules, laws, and principles. This means that imagination and emotion are inherent and essential to technological innovation and development, and to the legal projects that try to manage them. High-end and expensive technological products—the Apple iPhone or the Tesla vehicle—are desirable not entirely for rational reasons (Vivi & Hermans, 2022). There are other devices and vehicles that can facilitate human doing in the world in much the same manner, and possibly at less cost, than those products. Yet these are desirable, and users are prepared to pay a premium, due to an ineffable aura that has been nurtured and exploited by the corporations behind these items. They embody a form of technological 'hype' that exceeds rational assessment.

This chapter examines hype and the cultural imaginary in law and technology. It begins by considering the idea of 'hype', and particularly the phenomenon of hype in the funding of technological innovation. The second section identifies how hype has infused the cultural imaginary of law and technology. Like the technology entrepreneur who hypes the market with the possibilities and payoffs of their innovating, the law and technology scholar hypes the legal profession, policymakers, and the public to sell the need for 'law to catch up with technology'. The third section addresses what follows from revealing that law and technology is a hyped discourse, suggesting that this does not disclose a need to double down on rationality and engagement with the real. Rather, it urges law and technology scholars to take the speculative and emotional effects of the cultural imaginary seriously, so as to better legislate for desirable human technological futures.

1  We would like to thank Przemysław Pałka and Hans-Wolfgang Micklitz for comments on an earlier draft of this chapter. Naturally, errors, omissions and flights of fancy are solely our responsibility. We would also like to thank Hannah Murphy for her editorial assistance on the chapter, and the Centre for Justice at the Queensland University of Technology for funding this assistance.
2  See the discussion of 'Technology' in STEM education in Ellis et al. (2020).
3  See, for example, political economy critiques of the digital such as Fuchs (2009) and Birch et al. (2020).
4  On 'Technology Law' as a name for the discipline of law and technology, see Guihot (2019).

2. HYPE AND THE TECHNOLOGY ENTREPRENEUR
The ubiquity of technological change is capitalised upon by the technology entrepreneur and the media. Sensationalist claims about technological futures are often made and circulated. Public awareness of a technology or technological change often emerges from corporate marketing, entrepreneurs spruiking their genius, and the media deploying established frames to report on technological changes and innovation (Marks, Kalaitzandonakes, Wilkins & Zakharova, 2007). The term 'hype' often circulates within these discourses. Within this deployment the semantics of hype tends to be negative. Hype is '"self-inflating" … sensation-seeking or fear-mongering' (Vasterman, 2005, p. 511). Hype is often connected to the 19th-century imagery of the 'snake-oil' salesman and the false and dangerous claims of miraculous medical cures (Anderson, 2015). Hype readily comes to mind when reading The Strand Magazine's reporting of Mr Perew's electric man today (Northrop, 1900). Hype is generally seen as disingenuous and manipulative. It is a pejorative. It is usually considered false, as something in opposition to truth or the real.

However, hype has been understudied. This is not to say that 'hype' is absent from scholarly discussions, particularly in relation to technology. For example, there are many articles about blockchain that have 'hype' in the title (Carson, Romanelli, Walsh & Zhumaev, 2018; El-Gazzar & Stendal, 2020; Labazova, Dehling & Sunyaev, 2019; Michelman, 2017; Perera, Nanayakkara, Rodrigo, Senaratne & Weinand, 2020; Pisa & Juden, 2017). However, in this literature hype as a social or discursive phenomenon is not analysed. Rather, it is acknowledged that there is public hype about blockchain, which the text then tries to dispel by explaining the technologies of blockchain and their possible and plausible deployments. The hype is being challenged by rational examination.

There are two strands of thought that take hype seriously. The first strand encompasses macro-studies of economic booms and busts, beginning with the 'Tulip Mania' of the 1630s and continuing with the Great Depression, the dotcom bubble, and the Global Financial Crisis. The boom and bubble are seen as generated by hype and 'irrational exuberance' (Shiller, 2015, p. 1). During a boom, there is a self-fulfilling prophecy: an increase in price, success, or interest leads to further increases in price, success, and interest. This can grow exponentially as new interest stems not from the original product or concept but from the surrounding hype. Each increase in interest seems to provide more evidence that the market and future interest will continue to rise (Shiller, 2015). Herd behaviour sets in. This is fed by the media (both traditional and new) via 'social amplification' (Vasterman, 2005), and as interest rises there is an economic incentive to continue to spruik the emerging product or concept. This leads to expansive secondary discourses discussing and examining why the hype is growing—which reflexively grows the hype even more. During a bust, the same happens, but in reverse. The falling narrative that emerges continues to push the fall, as each decline feeds off itself as a form of negative hype. The key observation is the irrationality of hype; it is regarded as a discourse disconnected from the empirical realm, a damaging and dangerous collective fantasy that leads to market failure and macro-economic disruption. With the dotcom bubble, the nurturing, encouragement, and deployment of hype in relation to digital innovation became obvious (Funk, 2019). This can be seen again with the more recent cryptocurrency and non-fungible token (NFT) crashes (Flick, 2022; Uddin, Ali & Masih, 2020).
The natural habitat for the start-up unicorns of Silicon Valley is a mediascape of hyperbole and exaggerated promises (Curran, 2010, pp. 24–25). The relationship between hype generation by technology entrepreneurs, its take-up and perpetuation within the media, subsequent investment, and then decline has been formalised in the Gartner Hype Cycle. This is the second strand of rationalism trying to take hype seriously. Broadly, this cycle has been described as a:

… general path a technology takes over time, in terms of expectations or visibility of the value of the technology. The model proposes that technologies progress through successive stages that are pronounced by a peak, followed by disappointment, and later a recovery of expectations. (Dedehayir & Steinert, 2016, p. 28)

Developed by the information technology firm Gartner to graphically present information about the adoption of technologies, it has, like 'Moore's law', become one of the pseudo-natural laws of the digital, attempting to describe complex, changing socio-technical and cultural doing in the world according to seemingly universal objective rules.5

Formally, the Gartner Hype Cycle describes five stages of a technology's maturation. First is the innovation trigger, where there is a proof-of-concept or initial idea that begins the process. Second is the 'peak of inflated expectations', in which initial publicity brings early success and a peak of interest. Third, the 'trough of disillusionment' is when problems begin to appear and interest decreases. Fourth is the 'slope of enlightenment', where momentum builds again and new versions begin to show success, and fifth is the 'plateau of productivity', which is characterised by mainstream adoption of a viable product. The 'objectivity' and certainty of the cycle in presenting a taxonomy of the emergence of an economically viable product allow for description and prediction. For example, in 2022 Gartner indicated that emerging technologies such as customer digital twins were past the innovation trigger, web3 was reaching the peak of inflated expectations, and NFTs were beginning a steady decline into the trough of disillusionment (Perri, 2022).

While the hype cycle is an interesting descriptive model that represents and connects hype with technological innovation and eventual commercial success, it rationalises hype. Yet there is a sense of hype built into it. Simply put, as a descriptive model it is teleological; it is based on examination of what can be seen as successful technologies. There is a sense of inevitability—indeed, connecting the Weberian notion of the protestant work ethic to digital entrepreneurialism (Jarrett, 2022)—that early hyped expectation gives way to persistence against doubters, which inevitably leads to eventual rewards. This is because technology failures are often less visible to the public (Golder & Tellis, 1993): 'We often don't see its failures … because those either never make it to market or fail to become a part of our daily lives' (White, 2015, para. 4). Using the hype cycle, for example, it is difficult to map the life and death of the Parisian Aramis system that Latour (1996) explores, or the global hard stop enacted in response to possible human reproductive cloning.6

Nevertheless, macro-economic studies of booms and bubbles and the hype cycle reveal how technology captures public and commercial attention and affects flows of money: 'many organizations seem to rush lemming-like to an innovation, only to abandon it when it falls short of initial expectations' (Fenn & Raskino, 2008, p. 7). Both are attempts to rationalise the irrational, to render hype comprehensible. However, the baseline negative connotations of hype remain dominant. Generally, hype is seen as wasting resources and distracting from more effective and plausible but less hype-centric options (Funk, 2019); media hype can damage and skew public perceptions (Vasterman, 2005), leading to hysteria and suboptimal policy responses (Bagus, Peña-Ramos & Sánchez-Bayón, 2021); and hype can create complacency, because it is assumed that a fabulous future technological fix is just around the corner, delaying the need for harder choices in the present (Funk, 2019). There remains a sense of technology entrepreneurs and media players nurturing the hype phenomenon for self-benefit; a sense of selling snake oil, electric men, or NFTs to naïve and emotional audiences who readily invest, pay, and consume. There is a narrative about agency—about explicit exploitation of underlying cultural dreams and anxieties—by specific actors for gain. Something similar can be identified in law and technology scholarship.

5  This complexity, and basic historical studies showing that the path from innovation to adoption is neither universal nor structured, is an obvious criticism of the hype cycle. O'Leary (2008) observes that the path from invention to product is rarely linear, and successful products might involve a combination of innovations, each with their own cycle.
6  Although there has been one, with perplexing conclusions (Caulfield et al., 2015).

3. HYPE AND THE CULTURAL IMAGINARY OF LAW AND TECHNOLOGY
Law and technology scholarship is a hyped discourse. It is based on a 'crisis event' located within broader, often science-fiction-sourced narratives of technological futures, revealing legal vacuums that call for law. Like the technology entrepreneur who hypes the media and the market, law and technology scholars engage with hype to sell the need for 'law to catch up with technology' (Bennett Moses, 2003). The format for this intertwining of hype and law and technology scholarship has a long pedigree.

While 'electric men' like Perew's have become an obscured footnote in the cultural history of robotics, other emerging technologies from the 1900s have endured. The motor vehicle went on to change human life and the planet (Jones & McCreary, 2022; Urry, 2004). In 1905, Xenophon P. Huddy wrote in the Yale Law Journal that this new machine required a legal reckoning. Huddy's brief paper is illuminating. He opens with an intriguing cluster of statements:

A new vehicle has appeared on the highways and streets, with which it may safely be predicted, the courts and legislatures will have much to do in the future. Already it is making a great demand on their time. One cannot pick up a newspaper which does not contain an account of the payment of a price in a court of justice, for driving a motor car at an excessive speed, or of the happening of some sad accident which could have been easily avoided by the exercise of the slightest care. (Huddy, 1905, p. 83)

This opening is significant. It binds together four core components that can be seen as reflected and reiterated throughout law and technology scholarship for the following 118 years. First, there is a sense of public concern. Huddy notes that the media was awash with reports of motors, motorists, and their public impact. Indeed, this is not an overstatement; historical analysis of the period 1900–1918, the pioneer period of motoring, does identify intense media scrutiny of vehicles and drivers causing social harm (Tranter, 2005). Second, there is an immediate entanglement with law. There are reports of court cases arising from the proto-policing of driving and litigation from accidents. Legal institutions are reacting to social harm. But that is not all. Third, the vehicle is heralded as new, but new in such a way that suggests the need for future-proofing through new law-making; legal institutions will 'have much to do' with the 'new vehicle' in the 'future' (Huddy, 1905, p. 83). Fourth, this speculation is reasonable; it can 'safely be predicted' (Huddy, 1905, p. 83).

To contemporary sensibilities, Huddy appears prescient. Not only did the car go on to conquer the planet and human lives, but his articulation of the issues for legal institutions to address—clarifying that the motor vehicle is subject to existing road rules and the need for a state-sanctioned licensing regime for drivers—came to pass in the United Kingdom and most US and Australian jurisdictions before 1918 (Knott, 1994; Plowden, 1973; Seiler, 2008). However, Huddy's paper is significantly more than just a founding example of enduring legal scholarship that engages with the social and ecological harms of the motor vehicle (Tranter, 2021, 2023). The four elements evident in the paper—public concern about a technological change, immediate entanglement with law, the need for reform to future-proof the law, and awareness of the speculative grounding of this call for law—remain significant for law and technology scholarship. These focuses can be identified as condensing into a template underpinning law and technology scholarship (Tranter, 2011). This structure has four parts. The first is what we identify as the crisis event. The second is the cultural imaginary that infuses the crisis event and projects from it the promises and fears of technological futures. The third is a concern with a present legal vacuum—the law is currently inadequate to deal with this anticipated future. The fourth is the call for law, the positing of a reform agenda in the present to safeguard the perceived benefits and mitigate the perceived threats.

The crisis event is exactly what Huddy captured in relation to the motor vehicle generating media reports and public consternation in the 1900s. The crisis event is triggered by a technology that taps into or generates social anxiety. Not all technological changes, even ones that could be identified as historically significant, generate a crisis event. But rarely does law and technology scholarship focus on unsensational technological changes. There is no law and technology scholarship dedicated to nylon faucet washers, notwithstanding the importance of this technology across the globe for leak-free potable water systems. A crisis event for law and technology scholarship has three features. The first is that it is public and mainstream. Huddy's newspapers are full of reports of motor vehicles causing hurt and harm; or, in recent years, the news streams show images of the latest death from an automated vehicle, or a sleepy US public in 1957 awakes to a Soviet satellite overhead, or in 1997 there is the announcement of a cloned sheep. There is a sense of the spectacular and futuristic. Crisis events are public and framed by media practices (Tranter, 2010). Crisis events are staged and packaged. There is hype.

The substantial material used in the staging and packaging of crisis events, the deep resources that turn a factoid about technology into news, is the cultural imaginary that prefigures and anticipates that technology as spectacular and futuristic. The cultural imaginary is a concept that has emerged across a range of disciplines (Brosch, 2021; Cronin, 2019; Strauss, 2006). It is a concept that captures how broader social assumptions and considerations are 'given a concrete form in the sphere of cultural production and communication, manifest for example in discourses of the arts, literature, film, journalism' (Yar, 2014, pp. 3–4). 'Cultural imaginary' captures how social anxieties and dreams become crystallised into commonly circulated images, narratives, and memes, and how those images, narratives, and memes become signifiers for the underlying social anxieties and dreams. Tomislav Z. Longinović (2011) summarises the cultural imaginary as 'a realm of phantasms tied to one's collective being' (p. 48). The cultural imaginary is the fertile field from whence hype grows.

An excellent example of the cultural imaginary is Frankenstein. Frankenstein represents more than Mary Shelley's (1818) novel. The novel is the fountainhead for an archive of images, narratives, and memes relating to creations, creators, monsters, horror, science, technology, and humanity (Glut, 2002). It went on to inspire generations of imitators in print, film, and on various screens (Picart, 2003). Further, it became a signifier with a core meaning (notwithstanding slippage in the Frankenstein name from Victor, the creator, to the monster) relating to the dangers of unbounded science and technology, of creating without limits and oversight (Campbell, 2003; Conley, 2018; Sabl, 2001). As such, Frankenstein circulates in public and legal discourse as a pejorative, a warning, and a call for action on fears of scientific and technological overreach (Romanyshyn, 2019). A clear example is the 'Frankenfood' nomenclature in the 1990s–2000s legal discourse on genetic modification in agriculture (Franken, 2000; McGarity, 2007).

While Frankenstein can be seen as the Ur-text for the cultural imaginary that infuses law and technology (Tranter, 2018), specific law and technology scholarships have a more focused imaginary. The crisis event is conceived as a crisis because it has been anticipated—dreamed but more likely feared—within science fiction. Dolly the cloned sheep was a crisis event in the late 1990s because of the clone archive in science fiction anticipating reproductive cloning and telling dystopian stories about clones since at least Aldous Huxley's (1932) Brave New World (see Tranter, 2018). Most law and technology scholarship exploits science-fiction-informed cultural imaginaries. In current literature considering law and regulation in response to robots, Asimov's 'Laws of Robotics' and the Terminator (Cameron, 1984) franchise loom large,7 the latter particularly when lethal autonomous weapons systems are considered (Meier, 2016). The imagined benefits and possible problems of automated vehicles have been imaged and watched from Disney's Herbie to Knight Rider to Stephen King's Christine (Herrmann, Brenner & Stadler, 2018; Pembecioğlu & Gündüz, 2020). Certain technologies, particularly their positive and negative manifestations, are already circulating within the public consciousness, providing a cultural context and framework for 'when science fiction becomes science fact'.

Law and technology scholars participate in the cultural imaginary of technological change. There is speculation about possible benefits, but often horrors—cornucopian and dystopian speculations (Tranter, 2002)—emanate within the crisis event (indeed, framing the crisis event), which are drawn upon by law and technology scholars. In this, law and technology scholarship can be seen to participate in the hype that surrounds the cultural reception of certain technologies. This is not accidental but purposeful. The cultural imaginary creates anxieties and urgencies that law and technology scholars channel towards law.

But to do this channelling, there is usually an intermediate stage: the identification of legal vacuums. This provocative phrase was coined by Beebe (1999) in his examination of first-generation international space law literature. He identified that linking the launch of Sputnik in 1957 and the establishment of the Outer Space Treaty in 1967 was sustained activity by space law scholars who argued that space was a legal vacuum, which threatened law's claims to universality and needed to be filled (Beebe, 1999, pp. 1757–1758). Sputnik and the rise of the engineers and technocrats of the Rocket State threatened to exceed the law's normative universe. In response, first-generation space law scholars engaged, through law review articles and congressional hearings, in a sustained enterprise of cultural work transmuting space into jurisprudential discourse (Beebe, 1999, p. 1763). They made space amenable to law, leading to the Outer Space Treaty. This is replicated throughout law and technology scholarship. The technological futures imagined from the crisis event are seen as raising a challenge to law's normative capacity to describe, regulate, and control. In more recent law and technology scholarship, the noun disruption and its derivatives are often deployed (Tranter, 2021). Once conceived as a vacuum or as disrupted, a sense of momentum is established; there is a need for law.

7  See, for example, the contributions in Corrales et al. (2018).

This call for law is the final feature. It is the call for law to catch up with the imagined technological future that is heralded by the crisis event. Law and technology scholars identify that legal change is needed either to secure some of the promised benefits of the technology or to prohibit, nudge, or mitigate some of the negative concerns. This is where the cultural imaginary is transmuted into a legal imaginary (Travis, 2011). Technological futures become described, anticipated, and rendered as law reform. The language is normative. Courts and lawmakers should apply principles in a certain way, develop regulations and standards, enact new laws, repeal old laws, and develop sandboxes or new institutions. There is an urgency. Action is needed in the present to prevent killer robots, Frankenfoods, human reproductive cloning, or orbital nuclear missiles. Or to ensure the rollout of green technologies, safer automated transport systems, dynamic fintech innovations, or space activities to the benefit of all humans. The present needs to legislate for the future.

Focusing on these four features of law and technology scholarship reveals it as a hyped project. Notwithstanding the genre expectations of legal analysis, beyond the orthodox conventions of analytical writing about law, law and technology scholarship is creative, emotional, and speculative. It is deeply entwined with cultural imaginaries, exploiting a crisis event and then transmuting 'a realm of phantasms' into legal analysis. Thus, participating in hype has a strategic purpose. It is to sell law, or at least versions of normative ordering, in the present to secure desirable futures. There is an entrepreneurial characteristic to law and technology scholarship: a creating of need, through finding problems, vacuums, and disruptions, to justify the solution. It is a hyped enterprise.

4. ARE WE HYPING ABOUT HYPE?
There are two responses to the revelation that law and technology scholarship is a hyped enterprise. The first is to take hype at its first-level denotation as an irrational negative. This could suggest a self-defeating cancer at the core of law and technology. Its reasoning and reasonability about law, technology, and futures are illusory, tainted by a science-fictional imaginary that is disconnected from the real world and the real lives of people.

There is a valid critique within this. The technologies of law and technology scholarship are those that circulate in the cultural imaginary and are generative of crisis events: the satellite, genetic modification, cloning, robots, artificial intelligence, social media algorithms, self-driving cars, and nanotechnology. They tend to be high-cost, high-tech technologies emerging from the techno-industrial centres of the Global North. While there are always framing claims about how these technologies might address global realities of climate change, basic health care, inequality of wealth, violence against women and children, and war, the general focus remains in the realm of high speculation. Rarely does law and technology focus on technologies that might directly address the urgent global needs of the human majority on the planet, such as low-cost, low-tech innovation emerging from the Global South. Further, once a technology and its systems become socially entrenched, law and technology scholarship transitions to the next hyped thing that generates the next crisis event. How technologies, humans, culture, and law interact in the complex, changing present is generally outside of law and technology. Even further removed are historical studies on the pathways of change, of how technology, law, and culture interacted to create certain modes of human life in the past. It is rare for law and technology forums to publish historical work that examines in detail the archive of a past moment of legal change in response to technology (Cockfield & Pridmore, 2007). Law and technology scholarship remains focused on the future; its science-fictional cultural imaginary runs deep.

This critique, which undermines law and technology by challenging its hyped foundation, does have some merit. Calling for law to regulate self-driving cars, killer robots, or social media algorithms does seem slightly indulgent in the context of famine, pandemics, climate change, and global inequality. Further, the theoretical and methodological challenge of understanding law, technology, and humans in the material and performative actuality of the present (or generated from the archives of the past) presents the possibility of radically different forms of law and technology knowledge. This could be considered a de-hyped law and technology. The speculative future is replaced with a detailed examination and analysis of the past and present of law and technology, to develop hopefully more nuanced, less hyped models of how law and technology interrelate.

However, this critique, with its emphasis on the actual, material, and real of the present and past, and not the cultural imaginary of technological futures, is not the whole story. To only follow this response, law and technology would lose its connection to the future and to popular culture, particularly the cultural imperatives that it transmutes into law. Technologies are developed, marketed, and consumed in highly irrational ways. Hype can be seen as a catchword for the irrational, emotional affect of technologies. It is dreaming of a future that drives technological innovation and consumer purchases. Humans are creatures of reason and unreason, of rationality and irrationality. The head justifies what the heart does. Hype is not bad from this perspective; rather, it is a window into the narrational, creative, and emotional aspect of humanity that the moderns, with their science and reason, tried to suppress (Latour, 1993). Without the cultural imaginary, without the hype leading to media frenzies, booms, and busts, there would not be technological, social, and legal change—rather, a static world of the ever-present. The cultural imaginary might be full of dystopian articulations of human death and misery from out-of-control technology, but in its connection of the present to the future, there is also hope. Even within the law and technology project of trying to legislate to prohibit the coming to pass of dystopian technology-infused nightmares, there is a fundamental affirmation that human agency in the present can master the future. This is possibly the ultimate gift of modernity (Tranter, 2022). The future is not merely the medieval cycle of sameness until the Almighty calls the end of days; it will be and can be different from the present. This is the actual embodied wisdom in the Gartner Hype Cycle. Ignoring its salvation story for the technological entrepreneurial savant, the hype cycle embodies this modern sense of change over time; that action in the present can lead to a changed future. For the entrepreneur, this is a future of riding the hype and investment waves to eventual product and profit; for the law and technology scholar, law work in the present can achieve desirable technological futures.

This suggests that law and technology needs hype. It should continue to engage with the cultural imaginary, to speculate, dream, and fear about technological futures so as to contribute to a better future (Travis, 2022). It suggests more openness and critical engagement with the cultural imaginary (Neuwirth, 2022): a taking seriously of the science-fiction-infused maelstrom that dreams technologies and technological societies into reality (Csicsery-Ronay, 2008). Whereas the de-hyped critique of law and technology heads towards the empirical, towards what can be known of the past and present to better understand law and technology without the irrationality of hype, the alternative perspective encourages law and technology to embrace its imaginative and future-focused orientation—to possibly dream of better normative universes (Green, Travis & Tranter, 2022). There is a possibility that law and technology could look beyond the 'male-stream' of the mainstream (and mostly 20th-century) science fictions that are alluded to and referenced, and embrace the creative diversity of feminist, Afrofuturist, Asian, eco-utopian, First Nations, and queer science fictions that are flowering in the third decade of the 21st century.

Ultimately, examining hype and law and technology ends with an affirmation of both responses. The de-hyped empirical reaction to identifying the cultural imaginary within law and technology creates an agenda for more grounded studies of how law and technology have interrelated to create the present that is imagining and producing technologies that generate legal discourses about catching up. Law and technology are interrelated normative influences on human life. Better understandings and models of these interactions will deepen law and technology's capacity to build better futures. This highlights that the two responses are not mutually exclusive. Better knowledge of the past and present of law and technology is connected to the cultural imaginary of technological futures. The capacity to imagine the shape of that future, its cornucopian and dystopian potentials, is speculative, emotional and, for want of a better word, connected to hype. Mirroring how technologies are developed through a combination of speculative imagination and rational technicity (the meme credited to Edison of '1% inspiration, 99% perspiration'; Cropley, 2006, p. 395), law and technology scholarship needs to connect its analysis and models of law and technology to the cultural imaginary of technological futures. It is a hyped discourse. Notwithstanding the negative connotations of hype, with snake-oil salesmen selling giant, cart-pulling electric men or NFTs (Sarlin, 2022), engagement with the speculation, emotion, and imagery of technological futures is essential to law and technology as the rational analysis of the adequacies of laws and the strengths and weaknesses of normative forms.

5. CONCLUSION
This chapter considered hype and law and technology. It observed that hype tends to have a negative connotation, particularly in response to unsubstantiated claims about technologies. Hype drives economic booms and busts. Hype is often generated with the ulterior purpose of extracting a sale or venture capital; of securing investment in the manufacture of giant electric men. This chapter has argued that dismissing hype, particularly in relation to law and technology scholarship, misconceives law and technology as a discourse concerned with human technological futures. Law and technology, in its foundational engagement with the science-fictional cultural imaginary of technological futures, is a hyped enterprise. The structure of law and technology scholarship involves a crisis event emerging from a cultural imaginary that prefigures the technological change and speculates on how that change will lead to positive and negative futures. Speculating on these futures reveals law as inadequate to respond to the potentials encoded in the crisis event. The crisis event highlights legal vacuums and legal disruption. This inherently grounds the call for law—for law work in the present to build normative regimes that can safeguard the future, prohibit dystopian fears, and facilitate cornucopian promises.

Awareness of the hyped foundation of law and technology leads to two responses. The first is an empirical project of de-hyping—of focusing on the real and material of past and present interactions of law and technology. However, this would not be law and technology scholarship. It would forgo its grounding commitment to the future. The second response is to take the cultural imaginary of technological futures more seriously: to deeply engage with the cultural imaginary so as to be better at legislating and providing the normative framework for desirable human technological futures.

REFERENCES
Abnet, D. A. (2020). The American Robot: A Cultural History. Chicago: University of Chicago Press.
Anderson, A. (2015). Snake Oil, Hustlers and Hambones: The American Medicine Show. Jefferson: McFarland.
Bagus, P., Peña-Ramos, J. A. & Sánchez-Bayón, A. (2021). COVID-19 and the political economy of mass hysteria. International Journal of Environmental Research and Public Health, 18(4), 1376. doi: 10.3390/ijerph18041376.
Beebe, B. (1999). Law's empire and the final frontier: Legalizing the future in the early corpus Juris Spatialis. Yale Law Journal, 108(7), 1737–1773.
Bennett Moses, L. (2003). Adapting the law to technological change: A comparison of common law and legislation. University of New South Wales Law Journal, 26(2), 394–419.
Birch, K., Chiappetta, M. & Artyushina, A. (2020). The problem of innovation in technoscientific capitalism: Data rentiership and the policy implications of turning personal digital data into a private asset. Policy Studies, 41(5), 468–487. doi: 10.1080/01442872.2020.1748264.
Brosch, R. (2021). Filmic representations of Eddie Mabo in a changing cultural imaginary. In G. Rodoreda & E. Bischoff (Eds.), Mabo's Cultural Legacy: History, Literature, Film and Cultural Practice in Contemporary Australia (pp. 93–102). New York: Anthem Press.
Burmaoglu, S., Sartenaer, O. & Porter, A. (2019). Conceptual definition of technology emergence: A long journey from philosophy of science to science policy. Technology in Society, 59, 101126.
Cameron, J. (1984). The Terminator [Film]. G. A. Hurd: Orion Pictures.
Campbell, C. S. (2003). Biotechnology and the fear of Frankenstein. Cambridge Quarterly of Healthcare Ethics, 12, 342–352.
Carson, B., Romanelli, G., Walsh, P. & Zhumaev, A. (2018). Blockchain Beyond the Hype: What is the Strategic Business Value. McKinsey Digital. Retrieved from https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/blockchain-beyond-the-hype-what-is-the-strategic-business-value.
Caulfield, T., Kamenova, K., Ogbogu, U., Zarzeczny, A., Baltz, J., Benjaminy, S., Cassar, P. A., Clark, M., Isasi, R. & Knoppers, B. (2015). Research ethics and stem cells: Is it time to re-think current approaches to oversight? EMBO Reports, 16(1), 2–6.
Cockfield, A. J. & Pridmore, J. (2007). A synthetic theory of law and technology. Minnesota Journal of Law, Science and Technology, 8(2), 475–514.
Conley, S. N. (2018). An age of Frankenstein: Monstrous motifs, imaginative capacities, and assisted reproductive technologies. Science Fiction Studies, 45(2), 244–259.
Corrales, M., Fenwick, M. & Forgó, N. (Eds.). (2018). Robotics, AI and the Future of Law. Singapore: Springer.
Cover, R. M. (1983). Nomos and narrative. Harvard Law Review, 97(1), 4–68.
Cronin, J. (2019). The Making of... Adaptation and the Cultural Imaginary. Cham: Palgrave Macmillan.
Cropley, A. (2006). In praise of convergent thinking. Creativity Research Journal, 18(3), 391–404.
Csicsery-Ronay, I. (2008). The Seven Beauties of Science Fiction. Middletown: Wesleyan University Press.
Curran, J. (2010). Technology foretold. In N. Fenton (Ed.), New Media, Old News: Journalism and Democracy in the Digital Age (pp. 19–34). London: Sage.
Dedehayir, O. & Steinert, M. (2016). The hype cycle model: A review and future directions. Technological Forecasting and Social Change, 108, 28–41.
El-Gazzar, R. & Stendal, K. (2020). Blockchain in health care: Hope or hype? Journal of Medical Internet Research, 22(7), e17199.
Ellis, J., Wieselmann, J., Sivaraj, R., Roehrig, G., Dare, E. & Ring-Whalen, E. (2020). Toward a productive definition of technology in science and STEM education. Contemporary Issues in Technology and Teacher Education, 20(3), 472–496.
Fenn, J. & Raskino, M. (2008). Mastering the Hype Cycle: How to Choose the Right Innovation at the Right Time. Cambridge: Harvard Business Press.
Flick, C. (2022). A critical professional ethical analysis of Non-Fungible Tokens (NFTs). Journal of Responsible Technology, 12, 100054.
Franken, M. (2000). Fear of Frankenfoods: A better labeling standard for genetically modified foods. Minnesota Intellectual Property Review, 1(1), 153–181.
Fuchs, C. (2009). Information and communication technologies and society: A contribution to the critique of the political economy of the Internet. European Journal of Communication, 24(1), 69–87.
Funk, J. (2019). What's behind technological hype? Issues in Science and Technology, 36(1), 36–42.
Glut, D. F. (2002). The Frankenstein Archive: Essays on the Monster, The Myth, The Movies and More. Jefferson: McFarland.
Golder, P. N. & Tellis, G. J. (1993). Pioneer advantage: Marketing logic or marketing legend? Journal of Marketing Research, 30(2), 158–170.
Gray, C. H. (2018). Drones, war, and technological seduction. Technology and Culture, 59(4), 954–962.
Green, A., Travis, M. & Tranter, K. (2022). Jurisprudence of the future. Law, Technology and Humans, 4(2), 1–4.
Guihot, M. (2019). Coherence in technology law. Law, Innovation and Technology, 11(2), 311–342.
Herrmann, A., Brenner, W. & Stadler, R. (2018). Autonomous Driving: How the Driverless Revolution Will Change the World. Bingley: Emerald Group Publishing.
Hoggett, R. (2009). 1894–1914 – Electric Man – Perew – (American). Retrieved 2 November from https://cyberneticzoo.com/walking-machines/1894-1914-electric-man-perew-american/.
Huddy, X. P. (1905). The motor car's status. Yale Law Journal, 15(2), 83–86.
Huxley, A. (1977). Brave New World. London: Grafton Books (Original work published 1932).
Jarrett, K. (2022). Digital Labor. Cambridge: Polity.
Jones, C. & McCreary, T. (2022). Zombie automobility. Mobilities, 17(1), 19–36.
Knott, J. W. (1994). Speed, modernity and the motor car: The making of the 1909 Motor Traffic Act in New South Wales. Australian Historical Studies, 26(113), 221–234.
Labazova, O., Dehling, T. & Sunyaev, A. (2019). From hype to reality: A taxonomy of blockchain applications. In Proceedings of the 52nd Hawaii International Conference on System Sciences (HICSS 2019).
Latour, B. (1993). We Have Never Been Modern. Cambridge: Harvard University Press.
Latour, B. (1996). Aramis, or the Love of Technology. Cambridge: Harvard University Press.
Longinović, T. Z. (2011). Vampire Nation: Violence as Cultural Imaginary. Durham: Duke University Press.
Marks, L. A., Kalaitzandonakes, N., Wilkins, L. & Zakharova, L. (2007). Mass media framing of biotechnology news. Public Understanding of Science, 16(2), 183–203.
McGarity, T. O. (2007). Frankenfood free: Consumer sovereignty, federal regulation, and industry controls in marketing and choosing food in the United States. In P. Weirich (Ed.), Labeling Genetically Modified Food: The Philosophical and Legal Debate (pp. 128–151). Oxford: Oxford University Press.
Meier, M. W. (2016). Lethal Autonomous Weapons Systems (LAWS): Conducting a comprehensive weapons review. Temple International and Comparative Law Journal, 30(1), 119–132.
Michelman, P. (2017). Seeing beyond the blockchain hype. MIT Sloan Management Review, 58(4), 17.
Neuwirth, R. J. (2022). Future law, the power of prediction, and the disappearance of time. Law, Technology and Humans, 4(2), 38–59.
Northrop, W. B. (1900). An electric man. The Strand Magazine, 586–590.
O'Leary, D. E. (2008). Gartner's hype cycle and information system research issues. International Journal of Accounting Information Systems, 9(4), 240–252.
Pembecioğlu, N. & Gündüz, U. (2020). Cars as heroes: Transforming to the next step reality via autonomous cars. European Journal of Education Studies, 7(12). doi: 10.46827/ejes.v7i12.3399.
Perera, S., Nanayakkara, S., Rodrigo, M., Senaratne, S. & Weinand, R. (2020). Blockchain technology: Is it hype or real in the construction industry? Journal of Industrial Information Integration, 17, 100125.
Perri, L. (2022). What's New in the 2022 Gartner Hype Cycle for Emerging Technologies. Retrieved 19 December from https://www.gartner.com/en/articles/what-s-new-in-the-2022-gartner-hype-cycle-for-emerging-technologies.
Peters, T. D. (2022). A Theological Jurisprudence of Speculative Cinema: Superheroes, Science Fictions and Fantasies of Modern Law. Edinburgh: Edinburgh University Press.
Picart, C. J. S. (2003). Remaking the Frankenstein Myth on Film. New York: State University of New York Press.
Pisa, M. & Juden, M. (2017). Blockchain and economic development: Hype vs. reality. Center for Global Development Policy Paper, 107, 150. Retrieved 19 December from https://www.cgdev.org/publication/blockchain-and-economic-development-hype-vs-reality.
Plowden, W. (1973). The Motor Car and Politics in Britain 1896–1970. Harmondsworth: Penguin Books.
Romanyshyn, R. D. (2019). Victor Frankenstein, the Monster and the Shadows of Technology: The Frankenstein Prophecies. London: Routledge.
Sabl, A. (2001). False Frankensteins: The costs and illusions of computer mastery. Techné, 5(3), 62–81.
Sarlin, J. (2022). Donald Trump's NFT Superhero Trading Cards Timed the Market All Wrong. Retrieved 19 December from https://edition.cnn.com/2022/12/16/investing/donald-trump-nft-trading/index.html.
Seiler, C. (2008). Republic of Drivers: A Cultural History of Automobility in America. Chicago: Chicago University Press.
Shelley, M. (1965). Frankenstein: Or, the Modern Prometheus. New York: Signet (Original work published 1818).
Shiller, R. J. (2015). Irrational Exuberance (3rd ed.). Princeton: Princeton University Press.
Stiegler, B. (1998). Technics and Time, 1: The Fault of Epimetheus. Stanford: Stanford University Press.
Strauss, C. (2006). The imaginary. Anthropological Theory, 6(3), 322–344.
Tranter, K. (2002). Terror in the texts: Technology – law – future. Law and Critique, 13(1), 75–99.
Tranter, K. (2005). 'The history of the haste-wagons': The Motor Car Act 1909 (Vic), emergent technology and the call for law. Melbourne University Law Review, 29(3), 843–877.
Tranter, K. (2010). Biotechnology, media and law-making: Lessons from the cloning and stem cell controversy in Australia 1997–2002. Law, Innovation and Technology, 2(1), 51–93.
Tranter, K. (2011). The law and technology enterprise: Uncovering the template to legal scholarship on technology. Law, Innovation and Technology, 3(1), 31–83.
Tranter, K. (2018). Living in Technical Legality: Science Fiction and Law as Technology. Edinburgh: Edinburgh University Press.
Tranter, K. (2021). Disrupting technology disrupting law. Law, Culture and the Humanities, 17(2), 158–171.
Tranter, K. (2022). Sisyphus and the present: Time in modern and digital legalities. International Journal for the Semiotics of Law, 36(2), 373–384.
Tranter, K. (2023). Law and automobility. In R. Spoo & S. Stern (Eds.), Elgar Concise Encyclopedias in Law and Literature (forthcoming). Cheltenham: Edward Elgar.
Travis, M. (2011). Making space: Law and science fiction. Law and Literature, 23(2), 241–261.
Travis, M. (2022). Jurisprudence, temporality and science fiction. Law, Technology and Humans, 4(2), 5–23. Retrieved from https://doi.org/10.5204/lthj.2485.
Uddin, M. A., Ali, M. H. & Masih, M. (2020). Bitcoin—A hype or digital gold? Global evidence. Australian Economic Papers, 59(3), 215–231.
Urry, J. (2004). The 'system' of automobility. Theory, Culture and Society, 21(4/5), 25–39.
Vasterman, P. L. (2005). Media-hype: Self-reinforcing news waves, journalistic standards and the construction of social problems. European Journal of Communication, 20(4), 508–530.
Vivi, M. & Hermans, A.-M. (2022).
“Zero emission, zero compromises”: An intersectional, qualitative exploration of masculinities in Tesla’s consumer stories. Men and Masculinities, 25(4), 622–644. White, M. C. (2015). This is Why Tech Bubbles Actually Happen. Time. Retrieved 18 December from https://time​.com​/3859396​/tech​-bubble/. Yar, M. (2014). The Cultural Imaginary of the Internet: Virtual Utopias and Dystopias. Basingstoke: Palgrave Macmillan.

PART II: BRANCHES

9. Technology, monopoly, and antitrust from a historical perspective
Ramsi A. Woodcock1

1. INTRODUCTION

The relationship between antitrust and technology is fundamental.2 Technological advance reduces the value of the things that the individual can create independently for himself, leading to dependence for the individual and monopoly power for the producers who make things for him. Antitrust laws attack this power by promoting competition between producers, typically by prohibiting collusion between firms and anticompetitive conduct by powerful individual firms.

Antitrust cannot, however, limit the power of the most profound and enduring monopoly created by technology: the state (Woodcock, 2020a). The state monopolizes the production of physical security by controlling the technology of arms that underpins security. Taxes represent the payments the public makes in exchange for access to the state’s security product. Those who refuse to pay are jailed or killed—they are denied the product. Antitrust cannot touch the state’s security monopoly because antitrust’s remedy—competition—leads to war when applied to the provision of security, and war makes people less secure. In other words, in the case of the state, the competition remedy destroys the product. Society’s answer to state power has therefore been to make the state into a consumer cooperative—we call it democracy—rather than to use antitrust to promote competition between states. The public, which consumes the state’s security product, elects the state’s managers, ensuring that the state does not abuse its power. Because competition is no remedy for state power, antitrust laws focus on smiting private monopolies in goods or services as opposed to the state’s monopoly in the provision of physical security.

The state’s monopoly on security is so powerful that it was only after the state started to take on the consumer cooperative form that the private monopolies that antitrust regulates started to come into being. States had historically been organized as for-profit businesses. The king was the controlling shareholder, and the king regulated all forms of business activity to ensure that the activity would be undertaken primarily for his benefit. The shift to the consumer cooperative form of organization of the state over the course of the 19th and 20th centuries in the West had two important consequences that are relevant here (Caenegem, 2000).

First, the shift to the consumer cooperative form led to economic freedom (Mokyr, 2016). Consumers forced the state to stop using its monopoly on security to control all other lines of business. Production fell into private hands. In antitrust terms, the state started to supply security to private businesses on fair and nondiscriminatory terms. The state became, in effect, a security platform.

Second, the newly liberated private businesses used new technologies to create the first genuinely private monopolies. These first concentrations of private economic power inspired terror. This was especially true in America, where state power had been so effectively checked that its government amounted to rule by courts (Skowronek, 1982). Private power therefore loomed particularly large. The terror it inspired birthed the world’s first antitrust laws.

At first, there appeared to be a fundamental problem lying at the heart of the antitrust project. If technology creates power, then to eliminate power one must eliminate technology. But eliminating technology is a recipe for national decline in the modern age. Fortunately, as the 20th century wore on, it became clear that technological advance tempers its own excesses. Each new wave of technology displaces firms that achieved power by supplying the previous wave (Schumpeter, 1994). This gave antitrust a role to play, albeit a profoundly chastened one. In the United States today, rather than attack private power wherever it can be found, antitrust attacks only attempts by powerful firms to retard the process of technological advance that will one day overthrow them. Antitrust does this by condemning monopolies that degrade competitors’ products instead of improving their own (Woodcock, 2018, 2021a).

Antitrust’s orientation toward condemning attempts to degrade competing products makes antitrust a kind of economy-wide product designer, approving (through non-action) designs that make products better and condemning those that, because they are aimed at degrading the quality of competing products, make products worse (Woodcock, 2021a). Antitrust enforcers take no action against Apple’s iPhone, even though it gives Apple a monopoly on high-end phones, because the iPhone is better than other phones. But antitrust enforcers condemned Microsoft’s tie of its Internet Explorer browser to its Windows operating system because other browsers, such as Netscape Navigator, could work better with Windows.3 This rule against product degradation, which is implicit in the law, is very likely to be the standard by which antitrust cases against the giants of the information age will be measured in the United States.4 European antitrust enforcement, which started in earnest after World War Two, mixes this approach with the older view that monopoly must be punished no matter the technological cost (Gerber, 2010). Enforcement against the tech giants in Europe will, accordingly, be harsher.

1  The author thanks Przemysław Pałka, two anonymous reviewers, and participants in the online symposium associated with this Research Handbook for their comments. The author also thanks the University of Kentucky J. David Rosenberg College of Law for supporting this research with a summer research grant.
2  This chapter is based on prior work by the author, and one article in particular (Woodcock, 2020a). It restates some of the arguments made in that body of work.

3  See United States v. Microsoft Corp., 253 F. 3d 34 (D.C. Cir. 2001). https://scholar.google.com/scholar_case?case=17987618389090921096.
4  See, e.g., FTC v. Meta Platforms, Inc., No. 1:20-cv-03590 (D.D.C. January 13, 2021). https://www.courtlistener.com/docket/18735353/federal-trade-commission-v-facebook-inc/; United States v. Google LLC, No. 1:20-cv-03010 (D.D.C. October 20, 2020). https://www.courtlistener.com/docket/18552824/united-states-of-america-v-google-llc/.

2. ANTITRUST AND POWER

The Sherman Act, which was passed in 1890 in the United States, is said to be the first substantial antitrust law (Gerber, 2012).5 A few false starts aside, it remained the only antitrust law until after World War II, when antitrust spread to Western Europe and became part of the treaty that gave rise to the European Union (Gerber, 2010). Like the Sherman Act, the Treaty on the Functioning of the European Union prohibits both collusion by groups of firms and anticompetitive or abusive conduct by powerful firms.6 Depending on the jurisdiction, remedies range from the breakup of cartels or monopolies, the prohibition of mergers, and the forced sale or licensing of assets, to fines. In the 1990s, antitrust was globalized, and today nearly every country has an antitrust law of some form (Hawk, 2020).

Many jurisdictions define power in a highly technical fashion that leaves the impression that the monopoly power that interests antitrust is different in kind from other forms of power. In fact, it is not. Monopoly power in the antitrust sense is just social power applied in the commercial context in which antitrust laws are usually enforced. All social power, including monopoly power, operates through the channel of input denial (Woodcock, 2021a). You are presented with a choice. Do what you want to do and lose access to something you believe that you need. Or do what the powerful want you to do and gain access to that thing as a reward.

The definition of monopoly power usually employed by antitrust scholars in the United States—that monopoly power is the power profitably to raise price (Kirkwood, 2018)—is an application of the definition of social power to commerce (Woodcock, 2020a). Firms sell inputs in the sense that their customers always need their products to carry out some activity. When a firm raises its prices, the firm implicitly issues a threat. The firm says, in effect: do what you want to do (i.e., do not pay the higher price) and you will lose access to the firm’s product. Or do what the firm wants you to do (i.e., pay the higher price) and gain access to the firm’s product as a reward. The requirement that the price increase be profitable amounts to the requirement that consumers comply in the face of the threat. If consumers refuse to pay the higher price, the price increase will not be profitable.

Monopoly power, like all social power, is pervasive. We all depend on others. The people who can exact a price from us—whether in dollars or friendship—in exchange for granting access to something we need, are legion. Only in a race of hermits does no one have power over anyone else. But those who lack alternatives have less power than others. The fewer alternative sources of a particular input available to you, the more you will be willing to pay to gain access to the input. The price may be high, but you have nowhere else to go. Antitrust seeks to prevent excessive concentrations of power by giving people more alternatives. That is what it means for antitrust to use competition to limit power. Antitrust must, then, decide what constitutes too much power. The placement of the line between “enough” and “too much” is necessarily arbitrary. In the United States, enforcers treat the power profitably to raise the price by 5 percent above cost as the threshold (Hovenkamp, 2020). But they could have chosen 1 percent, 10 percent, or an entirely different metric.
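To make the profitability requirement concrete, here is a minimal arithmetic sketch. It is offered only as an illustration: the function, the prices, and the demand responses are all invented for this example and come from neither the chapter nor any enforcement guideline, and real enforcers estimate how many consumers would defect from market evidence rather than assume it.

```python
# Hypothetical illustration of the "profitably raise price" test described
# above. All numbers are invented; real enforcers estimate demand responses
# from market data rather than assume them.

def price_rise_is_profitable(price, cost, units_sold, units_after_rise,
                             rise=0.05):
    """Return True if raising price by `rise` (e.g., 5 percent) increases
    profit, given how many units consumers would still buy afterward."""
    profit_before = (price - cost) * units_sold
    profit_after = (price * (1 + rise) - cost) * units_after_rise
    return profit_after > profit_before

# A firm sells 1,000 units at $10 against a cost of $8. If a 5 percent rise
# loses only 100 customers, the rise pays: evidence of power in the sense
# used in this chapter.
print(price_rise_is_profitable(10.0, 8.0, 1000, 900))  # True: consumers comply
# If consumers defect en masse, the same rise is unprofitable: no power.
print(price_rise_is_profitable(10.0, 8.0, 1000, 600))  # False: the threat fails
```

In the first call, enough consumers comply with the implicit threat for the price rise to pay, which is the sense in which the firm holds power; in the second, mass defection makes the rise unprofitable.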

5  15 U.S.C. § 1 (2018). https://www.law.cornell.edu/uscode/text/15/1; 15 U.S.C. § 2 (2018). https://www.law.cornell.edu/uscode/text/15/2. An 1889 Canadian law is sometimes named as the first antitrust law, but, according to Cheffins (1989), “the fact that the legislation was, in all likelihood, unenforceable has led some observers to conclude that the Act was primarily a sham” (p. 454).
6  Treaty on the Functioning of the European Union Art. 101 (March 25, 1957). https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:12008E101:EN:HTML; Treaty on the Functioning of the European Union Art. 102 (March 25, 1957). https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:12008E102:EN:HTML.


3. TECHNOLOGY AND POWER

The two great sources of social power are belief and control over resources. The first makes people want what you have. The second ensures that you have what they want. Firms use advertising to manipulate beliefs (Sunstein, 2016). Technology, such as the technology of targeted advertising, facilitates the manipulation. In the United States, antitrust enforcers once sought to limit power achieved through the manipulation of beliefs by attacking firms that enjoyed advertising advantages (Mensch & Freeman, 1990). But today enforcers rarely act against advertising. The focus here will therefore be on power associated with control over resources.

Technology acts constantly to centralize control over resources in the hands of a few. In a modern world characterized by rapid technological advance, that makes technology the single most important creator of imbalances of power in the economy. (Fortunately, as we shall see in Section 7, technology also imposes important limits on the imbalances that it creates.) Technology centralizes control over resources because, to the extent that technology is economically meaningful, it creates labor savings. The labor of the machines is substituted for the labor of the individual. Thus a productive activity that once depended on the many—or which could be carried out independently by many producers—comes to depend only on those who control the machines.

The labor savings created by technology also cause the many to become dependent on the few. Those who fail to adopt a technology lose out in competition to those who do (Hamilton, 1941). They become poor and insecure. For example, artificial intelligence will be a useful technology because once a few people have built an effective artificial intelligence, the intelligence will be able to liberate the many from the drudgery of intellectual labor. But artificial intelligence will also make people dependent on those who create it. Running a business or leading a professional career without using artificial intelligence will become impossible. Everyone will become dependent upon the creators of artificial intelligences.

If more people can be found to make a particular technology or variations on it, then the power imbalance created by the technology lessens. More alternatives become available. No one controller of the technology can fully deny it to those who do not wish to obey. The purpose of antitrust laws is to promote such a proliferation of alternatives. But the imbalance of power between producers of technology and consumers of technology cannot be eliminated. So long as people rely on the production of others they must pay some price for access, even if, thanks to the availability of alternatives, the price is relatively low. Ultimately, you are always dependent on a group of firms, if not an individual firm, for your technology. To eliminate power imbalances entirely would require that everyone make his own machines. But that would defeat the purpose of technology, which is to find ways of enabling others to do for you what you once had to do for yourself.

In any event, it is often the case that power cannot be reduced, much less eliminated, because alternatives cannot be created. More people cannot be found to make the machines because the machines are very hard to make (Arthur, 2011). Antitrust can slash through legal restrictions on the transfer of technology from one maker to another, break up firms, force asset sales, and the like.
It follows that antitrust can sometimes temper imbalances of power created by technology. But if the technology by its nature cannot be profitably duplicated or improved upon, then antitrust cannot create alternatives.

Technology creates dependence, but people often experience technology as empowering. That experience is an illusion. Consider one of the great technological liberators of the 20th century: the AK-47 assault rifle. The AK-47 appears at first glance to have empowered the North Vietnamese during the Vietnam War. However, the North Vietnamese only needed it because the Americans had the M-14 assault rifle. The advance of rifle technology therefore rendered the North Vietnamese dependent on foreign powers that controlled AK-47 technology, such as Russia and China (Farley & Isaacs, 2020). If the Americans had arrived with spears, by contrast, the North Vietnamese could have manufactured their own spears at home, avoiding dependence on foreign countries for armaments altogether. The advance of technology from spear to assault rifle did not liberate the North Vietnamese. It disempowered them.

Technology feels liberating because it frees us from restrictions imposed by nature. The AK-47 lets a soldier inflict destruction that he could never inflict with his bare hands or a spear. But the price of liberation from nature is dependence on other people.

4. STATE POWER AND TECHNOLOGY

One way in which a technology cannot profitably be duplicated arises when duplication makes the technology undesirable. Antitrust will, again, not be able to limit the power of the firm that controls the technology. This is so for the state. The state monopolizes security. It sells security both from itself and from outside actors. The state’s subjects are the consumers of the security that the state provides. They pay for security with taxes and obedience. Those who do not comply are jailed or killed, which is to say that they are denied access to the product (Woodcock, 2020a).

Antitrust cannot restrain the power of the state because competition destroys security. In a competitive market for security, the dynamics of the protection racket apply. Each gang strives to prove to residents that it alone can guarantee their security—so long as residents obey the gang. Each gang therefore competes to treat with greater savagery than other gangs those residents who do not obey. The worse the consequences of disobeying a particular gang, the more valuable the security the gang can provide in return for residents’ obedience. When the product is security, competition in product quality is competition in savagery. That makes people less safe. Only once one competitor triumphs over the others, both by achieving the allegiance of residents through fear and by defeating the other gangs by force of arms, does violence come to an end. A monopoly supplier of security is essential to the enjoyment of security.

The state is the most powerful monopoly because the input that the state monopolizes is the input to all other inputs (Woodcock, 2020a). Without security, nothing can be produced and what is produced is stolen. Without security, one cannot eat, cannot sleep; indeed, one cannot live. The state can therefore demand everything of others. It can micromanage the behavior of its subjects. It can dictate the ways in which they produce. It can extract all of the wealth they create. This is not to say that security itself does not require inputs. Without weapons, the state cannot produce security. But this is to say that security is the ultimate bottleneck. All production must pass through it. If the security monopolist corks it, production cannot take place.

The foundation of the state’s security monopoly is technological advance. The first states arose thanks to the arrival of the technologies of plant and animal domestication, which created the labor savings needed to raise armies (Morris, 2010). States are security monopolies within the geographic territories they control. But they compete to wrest territory from each other.

States are able to persist and thrive only by continually improving their technologies in order to defend their territories. This is true not only with respect to military technologies, but also with respect to technologies implicating all modes of production. For military struggles are always ultimately economic struggles. Thus, states have played host to innovation across the full spectrum of production, and not just in the weapons that most directly underpin state power.

Unlike security, some technologies can be supplied to a given consumer by more than one producer. One might therefore have expected antitrust to be of as ancient a lineage as the state. Antitrust might have been used to break up some of the earliest monopolies created by technological advance, if not the security monopoly itself. But there are no restrictions on the behavior of large firms to be found in the laws of ancient Egypt, Rome, or China (see Hawk, 2018). The reason is that, until very recently, the state exercised despotic control over all other monopolies within its borders. All other monopolies existed only at the pleasure of the sovereign and so were indistinguishable from it. Breaking them up would have meant going against the will of the sovereign and so would in effect have required the breakup of the state—which is, as we have seen, not desirable. So long as the state dominated all other forms of production, there was no room for antitrust. Or, equivalently, there was only one monopoly from which all other monopolies flowed (the state), and antitrust could not touch it. The power of the state had to be restrained by some other means in order for other, genuinely private monopolies to appear and become susceptible to a competition remedy.

5. DEMOCRACY AND TECHNOLOGY

The public started to limit the power of the state about 250 years ago with the growth and spread of democracy in the West (Caenegem, 2000). Until then, states were run as for-profit businesses. Their leaders were the shareholders. Like any firm enjoying a monopoly position, these leaders sought to extract the maximum profit from their customers. Of course, many states were weak, large parts of the earth had no state at all, and in these places competition to supply security could be fierce (Scott, 2017). Under the feudalism that prevailed in the West for centuries after the collapse of the Roman Empire, for example, kings outsourced security to lords who sometimes fought each other or turned on the king (Caenegem, 2000). Kings and lords nevertheless sought to exploit their power wherever they enjoyed a monopoly over security, whether over a large domain or a small one.

The American and French Revolutions marked the beginning of the reorganization of states in the West into consumer cooperatives. The consumers of security who paid the state’s taxes gained a say in management. Over time, that matured into a right in most of the population to vote (that is, to elect government representatives). In this way, the power of the state was brought under control despite the inability of antitrust to bring competition to the state. Consumers used their control over management to ensure that the state would enrich them, not a separate class of shareholders (Hansmann, 2014). The consumers of security in effect became the owners of the security monopoly. They abolished monarchy and aristocracy. They insisted that the state spend all tax revenues on improving the quality of the services that the state rendered to consumers—or not collect taxes at all (Woodcock, 2020a).

Consumers also gave themselves economic freedom. In the United States, this was the “property” in the “life, liberty, or property” that the country’s constitution promised not to deny.7 Economic freedom meant that the state would become what antitrust would today call a “platform”. The state would supply security to private firms on fair, reasonable, and nondiscriminatory terms. The state would physically protect businesses and charge them for this service according to a uniform tax schedule at rates that would allow businesses to retain substantial profits. The state might occasionally create monopolies of its own, but only in select industries and only when doing so would benefit the public. Outside of that limited context, monopolies would no longer bear the implicit or explicit imprimatur of the state. The salt monopoly in Ancient China could only exist at the pleasure of the emperor (Huan & Gale, 1931). But a salt monopoly in America was a private phenomenon. It was neither created by the state nor specifically acquiesced in by the state. The door was now open for antitrust to step in and bring competition to monopolies in the production of things other than security.

Thanks to technological advance, there were a lot of monopolies for antitrust to attack. The rise of economic freedom coincided with an acceleration in the rate of technological change. Whether economic freedom or the enlightenment culture of innovation that preceded economic freedom unleashed innovation is a matter of debate (Mokyr, 2016). Regardless, technological advance, combined with economic freedom, drove an acceleration in the concentration of economic power across a range of industries in the West in the late 19th and early 20th centuries (Chandler, 1990). In the United States, this was the era of great industrial monopolies such as US Steel and Standard Oil (Scherer, 1996).

6. ANTITRUST AS A RESPONSE TO PRIVATE POWER

The rise of these monopolies inspired terror (Hofstadter, 1955). Their power was reminiscent of the state’s, but they were not democratic institutions. They were for-profit businesses, not consumer cooperatives, and remain so today (Hansmann, 1996). There was, therefore, no evident check on their ability to wield their power to exploit their customers, and some feared that they would exploit their customers in ways that would hearken back to the worst excesses of the pre-democratic, for-profit state. Hence the famous Standard Oil octopus extending its tentacles across a nation.8 It threatened violence.

Americans feared private monopoly more than did Europeans because Americans had more effectively restrained the power of the state. In the late 19th century, nearly all European countries were constitutional monarchies (Caenegem, 2000). In Europe, the problem of private monopoly was partially submerged beneath the continuing problem of restraining the security monopoly that is the state. The state in America was, by contrast, so restrained that it appeared to Americans to be just another organization, on a par in terms of significance with the corner grocery or the neighborhood civic association (Skocpol & Finegold, 1982). It did not appear to Americans to be the permanent, all-encompassing power that it in fact is (see Moss, 2002). A big firm loomed much larger against this anti-statist backdrop than it did in a Europe in which kings still commanded armies (Dinardo & Hughes, 2018).

7  U.S. Const. amend. V. https://www.law.cornell.edu/constitution/fifth_amendment; U.S. Const. amend. XIV. https://www.law.cornell.edu/constitution/amendmentxiv.
8  Illustration. (1904, September 7). Puck, 56(1436). https://commons.wikimedia.org/wiki/File:Standard_oil_octopus_loc_color.jpg.

Americans’ fear of private monopoly was excessive. The state had never relinquished its power over monopolies. It had merely chosen not to exercise it, acting at the behest of democratic majorities who demanded economic freedom. All private monopolies require security to function. The state remained their sole source of that input. No private monopoly provided a complete security package for itself. If one of these monopolies had, and the state had been unable to put a stop to it, then the monopoly would have been a state. The problem would not have been one of monopoly per se but rather one of dictatorship and the need for democratic governance of the monopoly. However, these private monopolies were not states. So long as the states within which they operated were democracies, consumers ultimately controlled private monopolies. Consumers controlled them through consumers’ control over the state. Private monopolies could never grow powerful enough to challenge the state without becoming states themselves. But they could still exercise their lesser power to oppress consumers, albeit not so completely as the state would be able to do.

Thanks to the doctrine of economic freedom, the state had adopted the basic rule that the state should not discriminate in its supply of the security input to businesses, because consumers had thought that would be good for them. In the late 19th and early 20th centuries, it became clear that, in order to protect themselves, consumers would need to revise that rule to permit the state to discriminate against private monopolies that sought to oppress consumers. The question was how.

Antitrust seemed like the natural approach. The United States Congress passed the Sherman Act. But antitrust immediately ran up against the problem of technology. Antitrust reduces power by facilitating the creation of alternative technologies. But, as we have already observed in the context of the state’s monopoly on security, some technologies cannot be duplicated without making them less effective. As a result, an antitrust law that indiscriminately forces duplication creates a risk of technological regression. The more vigorous the enforcement, the greater the risk. If we follow Arthur (2011) in thinking of technology broadly to include not just a mechanism in a box or factory building but any means to a purpose, then it becomes clear that the problem is potentially widespread, implicating every technology that an economist might characterize as exhibiting large economies of scale. If a single implementation of the technology is capable of serving an entire market, then any duplication will lead to technological regression in the sense that it will cause more workers to be required to achieve the same level of output. Duplication will increase overall industry costs of production, forcing prices up and reducing demand.

Consider, for example, a great steel foundry able to serve the demand of an entire nation. If antitrust enforcers were able to facilitate the entry of a second foundry into the market, competition between the two foundries would limit the power of each. But the arrival of the second foundry would also double the cost of producing steel, for now the same level of demand would be served by two foundries where only one was needed. More labor effort would be required to achieve the same level of output. The imbalance of power created by steel production could be eliminated entirely by going further and asking each person to make steel on his own, in which case all dependence on an external producer would disappear. But only small amounts of steel could be made that way, at great time and expense per unit. It would be a technological advance, from the perspective of the market, to go back to producing the entire country’s needs in a single foundry.

Consumers do not need antitrust to limit the power of private monopolies. They can instead leverage state power directly to regulate monopolies’ conduct. Consumers took a similar approach against the state itself. They converted the state into a consumer cooperative and then told the state how to act. Now consumers could leverage the state’s control over security to dictate to private monopolies how to act. Theodore Roosevelt was a leading proponent of this approach (Hofstadter, 1955; Sklar, 1988). In 1908, as US President, he nearly succeeded at giving a Federal Bureau of Corporations the power to revoke the charters of corporations that did not act in the public interest (Sklar, 1988).

But Americans feared private monopoly too much. Roosevelt was defeated in the 1912 presidential election by Woodrow Wilson, who advocated more antitrust (Sklar, 1988). During the Wilson Administration, Congress created the Federal Trade Commission (FTC), gave private plaintiffs the right to sue for antitrust violations, and made clear that antitrust could be used to block or unwind mergers (Sklar, 1988). The great statue of a man straining to control a horse that eventually appeared at the front of the new FTC building in Washington, DC during the New Deal would have better adorned Roosevelt’s Bureau of Corporations. To control a horse is to alter its conduct. After 1912, the antitrust urge to butcher the horse was instead ascendant.

But still no one could bear to do the butchering. While antitrust won out over direct regulation as a matter of the law on the books in the United States, antitrust enforcement was anemic until the late 1930s. Americans did not know how to reconcile their fear of monopoly with their fear that a faithful application of the antitrust laws would take a toll on technological progress (Hovenkamp, 2009a).

7. MONOPOLISTIC COMPETITION AS SAVIOR

The monopolistic competition revolution in economics solved the problem (Hovenkamp, 2009a). Economists noticed that every product is at least a little bit different from every other with respect to design, branding, mechanism, time or place of sale, or some other factor (Chamberlin, 1956). It followed, first, that duplication of a technology was never truly duplicative. The new technology would produce a product differing from the original, even if only in time and place of sale. Second, because different consumers prefer different varieties, the introduction of a new variety of a product into a market could make some consumers better off.

But if the alternative technologies that antitrust might help into a market were not strictly duplicative, and could make at least some consumers better off, then technological regression was a less likely outcome of vigorous antitrust enforcement than it had at first appeared to be. When technologies are identical, the introduction of an alternative technology wastes labor because demand could have been satisfied without it. But when the alternative is even slightly different from the original, the extra work needed to introduce it may no longer count as wasteful duplication of effort. The more consumers prefer the new technology, the less wasteful it becomes. The introduction of a new steel foundry adds a second variety of steel to the market. If even one consumer prefers the new variety, then it is no longer meaningful to say that the new foundry has doubled the cost of steel. There are now two kinds of steel, each serving different needs. The more consumers value the differences between the varieties, the more the introduction of the second foundry will come to constitute a technological innovation rather than a regression.

Another implication of the monopolistic competition revolution was that technology can temper the worst excesses of technology. Technological advance makes consumers dependent on new technology. And if technological advance were to stop there, oppression might follow.

But technological advance need not stop there. It can continue on, creating variations and improvements on the original. Each might appeal enough to consumers to give them alternatives to the original technology, limiting the power of any one owner to coerce behavior and forcing each owner to compete to some extent on price. To be sure, because every technology is different, a consumer who prefers one loses something in switching to another. But if the owner of the consumer’s preferred technology is too coercive—the owner sets the price too high—the consumer will find it worthwhile to switch. Markets are monopolistic because each firm has a mini-monopoly in its own unique technology. But markets are competitive because new technologies are constantly drawing consumers away from old ones. One technology means power. But a multiplicity of technologies means freedom. Albeit within limits. Consumers cannot become self-sufficient. They still need to buy from someone. And some technologies are more popular than others, giving their owners extra sway over markets.

Conservatives and progressives drew different lessons from the theory of monopolistic competition. It had once been thought that technology makes antitrust impossible. Joseph Schumpeter now argued that technology makes antitrust unnecessary (Schumpeter, 1994). No technology endures forever and so no monopoly endures forever. A revolutionary innovation will eventually appear. Consumers will flock to it. A big firm will bite the dust. Kodak will fall to the digital camera. Nokia will fall to the smartphone. Some reigning tech giant of today may fall to artificial general intelligence. Schumpeter called it “creative destruction” (Schumpeter, 1994).

The progressives who coined the “monopolistic competition” moniker were less optimistic (Chamberlin, 1956; see also Robinson, 1969). In their view, technology creates monopolies. It also creates so many that the monopolies might compete. But there is nothing to stop the monopolies from saving themselves from competition by interfering with the technological advance that threatens to displace them. Progressives thought that firms might slow technological advance in two ways, corresponding to the two sources of social power identified earlier in this essay: belief and control over resources.

First, progressives thought that firms might make their products appear to be different and better when in fact they are not (Chamberlin, 1956). Through the use of advertising and branding, firms might differentiate their products in consumers’ minds rather than in reality, allowing them to negate the competitive advantage of firms that have innovated and improved their products (Woodcock, 2018). For example, branding enables makers of luxury handbags to charge much higher prices than mid-market handbag makers while delivering goods that many believe are of poorer workmanship.

Second, firms might slow technological advance by denying competitors the inputs they need to innovate or to bring their innovations to market. In other words, they might prevent competitors from differentiating their products in fact. To point to a relatively recent example, Microsoft might restrict Netscape’s access to the Windows operating system in order to prevent a competing maker of web browsers from bringing the product to consumers.9 Progressives called either approach the erection of a “barrier to entry” into markets (Bain, 1956).
Barriers could be erected in the mind or in the world.

9  See United States v. Microsoft Corp., 253 F. 3d 34 (D.C. Cir. 2001). https://scholar.google.com/scholar_case?case=17987618389090921096.


8. ANTITRUST AS DEFENDER OF INNOVATION

The possibility that firms would try to destroy the innovative competitors that threatened to displace them gave antitrust something to do. Antitrust could not go about indiscriminately breaking up firms or otherwise promoting the duplication of existing technologies. That would make technology less useful. But antitrust could go about preventing firms from erecting barriers to entry. That would encourage the proliferation of technologies, which would in turn limit monopoly power.

The theory of monopolistic competition also suggests that antitrust enforcers should vigorously oppose mergers or collusion unless the merger or collusion has a technological justification (or, more broadly, an efficiency justification) (Hovenkamp, 2020). Competition gives rise to innovation, as firms try to differentiate their products to attract and hold more customers. When competitors collude or merge, they cease to compete. Innovation suffers. It followed that collusion and merger tend to prevent technological advance from overtaking the colluding or merging firms and ought to be prohibited on that basis. Collusion and merger have the same effect as a barrier to entry. But only if the collusion is not in aid of a joint research and development project or other joint productive undertaking. If firms need to work together to improve the quality of the products they sell, then prohibiting collusion or merger would be tantamount to breaking up an innovative firm. It followed that collusion that amounts to naked price fixing, or mergers that function only to raise prices, should be stopped. But productive unions should be permitted.

The vision of antitrust’s role as that of knocking down firm-created barriers to entry was a far cry from the role of scourge of monopoly originally envisioned for antitrust (Hofstadter, 1955). Under the new vision, a monopoly that acquired power by developing a superior technology and then maintained that power by continuing to innovate could avoid antitrust liability so long as the firm did not use its power to degrade the technology of competitors, including by allying with other firms (Woodcock, 2018). Far from smashing every firm it could find, America was willing to accept monopoly power in exchange for technological advance. Antitrust now had only a modest role to play in policing industrial competition to ensure that it would unfold through a technological race to the top rather than a destructive race to the bottom (Woodcock, 2021a).

In principle, the standard that antitrust applied required consideration of the net effect of a firm’s conduct. Antitrust’s role was to ensure that firms could become powerful only by doing more good than harm. A monopolist that had innovated but degraded competitors’ technology as a collateral matter would need to be protected if the innovation were more valuable than the harm to competing technologies caused by the innovation. In practice, however, comparing benefits and harms is difficult. American courts would eventually come to protect any monopoly that improved its product, even if the improvement also incidentally degraded competing products to a greater extent, creating a net loss for consumers, so long as there were no less harmful alternative means by which the firm could have made the improvement (Hemphill, 2016; Woodcock, 2018). That is the standard followed by the courts today (Woodcock, 2018).
If the restrictions on Windows access placed by Microsoft on Netscape had done anything to improve Windows, Microsoft would have escaped liability when American enforcers brought a case against the company in the 1990s.

This standard made antitrust a sort of light-touch economy-wide product design supervisor (Woodcock, 2021a). We can think of a competitive market as one in which the consumer has the ultimate power to design the product by choosing between the competing technologies on offer in the market. When a monopoly denies inputs to competing products, driving them from the market, the monopoly in effect takes ultimate design authority away from consumers and places it in-house. The monopoly always argues that it must use those inputs to improve its own product. The question antitrust courts and enforcers must therefore always answer is whether the monopoly’s in-house design is better for consumers than the one that consumers would have chosen in a competitive market. In most cases, enforcers and courts rely on economists to supply the answer using data on past consumer purchase decisions, which they believe reflect consumer preferences. But there is only so much that data will reveal. Ultimately, courts and enforcers must rely on their own intuitions regarding which approach to design consumers would prefer in any particular case.

In the United States, the monopolistic competition revolution provided the intellectual basis for sustained enforcement of the antitrust laws starting in the second half of the New Deal and running through the first half of the 1970s (Vaheesan, 2014). The Supreme Court held that in order to violate the antitrust laws a firm must engage in conduct that harms competitors—what the courts came to call “exclusionary conduct”. Simply being a monopoly would not be sufficient for a violation to exist.10 The Court also declared that it would take competition from firms selling differentiated products into account in deciding whether a firm has monopoly power.11

But the new monopolistic competition framework did not immediately replace the original absolutist conception of antitrust as indiscriminate scourge of monopolies. If anything, the spur to action that the theory of monopolistic competition provided to antitrust operated to reinvigorate the older approach as well. Over the same Postwar period in which the Court embraced the monopolistic competition framework, the Court also incongruously declared that the purpose of the antitrust laws is to protect small businesses regardless of the consequences for efficiency and presumably also for technological advance.12 And the courts repeatedly condemned large firms based on implausible allegations of bad conduct, suggesting that the courts wished to condemn monopoly per se.13 From the 1950s through the early 1970s, important members of the antitrust establishment supported this tendency by calling on the US Congress to enact a “no-fault” monopolization law (Hovenkamp, 2009b; Kaysen & Turner, 1959; “The Industrial Reorganization Act”, 1973).

10  United States v. Aluminum Co. of America, 148 F. 2d 416 (2d Cir. 1945) (sitting as the Supreme Court). https://scholar.google.com/scholar_case?case=15519300671749179648.
11  United States v. E. I. du Pont de Nemours & Co., 351 U.S. 377 (1956). https://scholar.google.com/scholar_case?case=11618050296866736407.
12  Brown Shoe Co. v. United States, 370 U.S. 294 (1962). https://scholar.google.com/scholar_case?case=9571017711360259745.
13  United States v. Aluminum Co. of America, 148 F. 2d 416 (2d Cir. 1945). https://scholar.google.com/scholar_case?case=15519300671749179648; United States v. United Shoe Machinery Corp., 110 F. Supp. 295 (D. Mass. 1953). https://scholar.google.com/scholar_case?case=10023696601997213088.

Meanwhile, many Western European countries, some of which had experimented with antitrust to a limited extent during the Interwar period, embraced antitrust in the Postwar period (Gerber, 2010). The defeat of the Nazis had made clear that henceforth the security monopoly in Western Europe was to be governed by consumers. There could be no substitute for democracy. As it had in the United States, the subordination of the state to the people caused the problem of private monopoly to come fully into focus. At the transnational level, the nascent European Union absorbed the mix of monopolistic competition and absolutist approaches then circulating in the United States (Gerber, 2010). But there was a twist.

As we have seen, in the United States, Theodore Roosevelt had once hoped to create a general regulator of business conduct, including the conduct of monopolies. Such a regulator would have been able directly to order monopolies to charge low prices or otherwise engage in consumer-friendly behavior. The US Congress had instead embraced antitrust—using the protection of competition indirectly to discipline firms—and chosen to regulate business behavior directly only in a handful of industries. This was the American regime of industry-specific rate regulation (Fried, 1998). Europe initially seemed to side with Roosevelt. The bloc included a general prohibition on “abusive” conduct in its antitrust rules (Ackermann, 2012). But thanks perhaps to the gravitational pull of American antitrust policy, European courts and enforcers have treated this provision primarily as an antitrust law mandating the protection of competition in the monopolistic competition or absolutist senses. But they do occasionally apply it directly against oppressive conduct, such as excessive pricing (Ackermann, 2012).

In the United States, the lingering strand of antitrust absolutism triggered a conservative backlash that made itself felt in the mid-1970s. It was led by scholars belonging to the Chicago School (Goldschmid et al., 1974). They professed to reject the monopolistic competition concept (Stigler, 1957). In fact, they rejected only progressive framings of it, including the “monopolistic competition” and “barriers to entry” monikers (Demsetz, 1982; Stigler, 1957). But they accepted the substance. The Chicago School’s real target was not monopolistic competition. It was the lingering view that monopoly should be condemned even at the cost of technological advance. In this vein, the Chicago School argued that too little antitrust enforcement is better than too much (Easterbrook, 1984). No firm can maintain a monopoly forever. Superior technologies eventually find a way around the most entrenched positions. But government can chill innovative conduct permanently by condemning firms that achieve power through innovation (Manne & Wright, 2010). The implication was that antitrust should focus only on conduct that it can be quite certain prevents superior products from coming to market.

This “error costs” argument was a call to antitrust to take the implications of the monopolistic competition revolution more seriously than it had in the past. If differentiation was a source of value and a limit on power, antitrust enforcers needed to be very careful indeed about chilling it. But the Chicago School couched this argument in the language of Schumpeterian creative destruction and epochal technological change rather than the language of product differentiation that was more familiar to progressives. The Chicago School did not depart from the monopolistic competition paradigm. But the Chicago School still managed to alter antitrust enforcement in three ways.

First, the Chicago School succeeded at squelching the absolutist strand in antitrust in the United States, at least temporarily. American courts today rely exclusively on the monopolistic competition framework in deciding cases. This has greatly reduced the number of firms subject to liability.

Second, in advancing the error cost argument, the Chicago School made enforcers wary of bringing cases. Afraid of mistakenly chilling innovation, enforcers started to wonder whether any act that at first appeared only to degrade competitors’ products might also improve the defendant’s product and so actually count as innovative conduct (Woodcock, 2021b).

Third, the Chicago School put an end to antitrust enforcement against advertising, as noted in Section 3. The Chicago School accepted the implication of the monopolistic competition paradigm that monopolies might seek to preserve their power by degrading competitors’ products instead of improving their own. However, the movement rejected the view that monopolies might use advertising and branding to degrade competitors’ products in the minds of consumers rather than in fact (Bork, 1978). This was because the Chicago School wished to maintain that the market was democratic. In the Chicago School’s view, the market enabled consumers to vote, through their purchase decisions, on which products the economy should produce. If advertising or branding could manipulate consumers into buying products that they did not really prefer, then consumers were not actually in control. Monopolies were. Markets were therefore not democratic, and government intervention in markets on a grand scale would be justified. Rather than entertain this possibility, the Chicago School denied the manipulative character of advertising. The movement argued that advertising was merely informative (Nelson, 1974). Advertising helped consumers find the products that they preferred. Advertising was a good, not a bad.

American courts rarely apply the current approach to antitrust law using the terms that I have used here. They do not usually speak of a contrast between monopolies that obtain power by innovating and those that obtain it by degrading competitors’ products. They do not usually speak of control over inputs as the ultimate source of the power wielded by monopolies. Instead, they speak of exclusionary conduct and consumer harm.14 They profess to condemn firms that have power if the firms engage in exclusionary conduct that harms consumers. This is true also with respect to the antitrust offense of collusion—a corner of the law not otherwise addressed in this chapter. Apart from naked price fixing, which the courts categorically condemn,15 courts evaluate collusion under a “rule of reason”.16 According to this meta-rule, if the collusion makes consumers better off, the courts permit it. If the collusion does not, and the colluders collectively have power in the sense of the ability profitably to raise price above some threshold amount, the courts condemn it (see the sketch below).

When activists call for a return to antitrust absolutism, they, too, employ consumer welfare language rather than the language used in this chapter. They say that they oppose the “consumer welfare standard” (Vaheesan, 2018). They do not say that they oppose a system that gives license to innovative monopolies. But consumer welfare and technological advance are very nearly the same thing. Technology advances whenever firms do things in new ways and consumers prefer the results to the results under the old ways. Monopolies that exclude competitors from markets by advancing their own technology tend to avoid liability under the consumer welfare standard because the monopolies maintain their power by improving their products, making consumers better off (Woodcock, 2018). And firms that collude to advance their technology also tend to avoid liability because they, too, improve their products, making consumers better off (Hovenkamp, 2020).
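The rule-of-reason meta-rule described above lends itself to a schematic statement. The sketch below is an illustrative reading only: the boolean and percentage inputs are invented stand-ins for what courts establish through evidence and expert testimony, and the 5 percent default borrows the threshold mentioned in Section 2 rather than anything prescribed by the case law.

```python
# Schematic reading of the rule of reason described above. Courts weigh
# evidence, not booleans; these inputs are illustrative stand-ins.
# Note: naked price fixing is condemned categorically, before this
# analysis is ever reached.

def rule_of_reason(makes_consumers_better_off: bool,
                   joint_power_to_raise_price_pct: float,
                   power_threshold_pct: float = 5.0) -> str:
    """Permit collusion that benefits consumers; condemn it when it does
    not and the colluders jointly hold power above the threshold."""
    if makes_consumers_better_off:
        return "permit"
    if joint_power_to_raise_price_pct > power_threshold_pct:
        return "condemn"
    return "permit"  # no consumer benefit shown, but no power either

# A joint R&D venture that improves products is permitted; coordination by
# firms able to raise price 10 percent above cost, with no benefit, is not.
print(rule_of_reason(True, 10.0))   # permit
print(rule_of_reason(False, 10.0))  # condemn
```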

14  See United States v. Microsoft Corp., 253 F.3d 34, 58–59 (D.C. Cir. 2001). https://scholar.google.com/scholar_case?case=17987618389090921096.
15  Arizona v. Maricopa County Medical Soc., 457 U.S. 332 (1982). https://scholar.google.com/scholar_case?case=4200092683765231599.
16  See, e.g., Broadcast Music, Inc. v. Columbia Broadcasting System, Inc., 441 U.S. 1 (1979). https://scholar.google.com/scholar_case?case=9239998327680075982.


9. ANTITRUST AND TECHNOLOGY TODAY

The Information Revolution has followed the same basic pattern as the Industrial Revolution that birthed antitrust a century ago. Technological advance has created powerful firms. That has inspired fear that the powerful firms will take on state-like characteristics despite their lack of control over the sine qua non of state power: security. Fear has led to calls for the indiscriminate breakup of large firms. During the Industrial Revolution, the technological advances that led to calls for an absolutist antitrust had to do with the production of things (Gordon, 2017). This time around, the advances have been in communications and data analysis. They have enabled Google and Facebook to amass vast amounts of user attention and data, process it into targeted advertising distribution services, and sell those services to advertisers. Legacy advertising distributors, especially newspapers, have not been able to generate enough attention to keep up (Woodcock, 2022). Many have closed in the United States and, increasingly, in Europe as well (Le Gall, 2021). The result has been the centralization of control over advertising distribution in the hands of a few powerful firms. This has led antimonopoly activists to call for the indiscriminate condemnation of monopolies (Teachout, 2020). These activists scored a success when the Biden Administration appointed some of them to head the two main antitrust enforcers in the United States, the Antitrust Division of the Department of Justice and the Federal Trade Commission. As enforcers, however, they have been at pains to abide by antitrust’s monopolistic competition framework in the cases they have filed against Google and Facebook. In both cases, enforcers have argued that a monopoly sought to postpone the gale of creative destruction by harming competitors whose products advertisers might have preferred. In the Google case, the Department of Justice has alleged that Google prevented better search engines from coming to the attention of users. According to the Department of Justice, Google did this by arranging to have Google’s search engine appear by default on web browsers and mobile phones.17 In the Facebook case, the FTC has alleged that Facebook made it difficult for users to switch to better social media apps. According to the FTC, Facebook did this by preventing new social media apps from offering users the option to invite their Facebook friends to join them on the new app.18 In both cases, the conduct would have prevented competitors from amassing more user attention and data. That would have made competitors less appealing to advertisers. The competitors might have been able to offer advertisers more sophisticated advertising distribution. But without users, they would have had nothing to sell. Both cases raise the question whether, in driving users to their own platforms and away from competitors’ platforms, Google and Facebook were improving their own products. If they were improving their own products, then their conduct would count as desirable innovation and receive protection, at least in the United States, even if the conduct also degraded competitors’ products (Gifford & Kudrle, 2015). It does appear that Google and Facebook’s conduct improved the services they offered to advertisers (Woodcock, 2020b).

17  Complaint, p. 55, United States v. Google LLC, No. 1:20-cv-03010 (D.D.C. October 20, 2020). https://storage.courtlistener.com/recap/gov.uscourts.dcd.223205/gov.uscourts.dcd.223205.1.0_6.pdf.
18  Substitute Amended Complaint for Injunctive and other Equitable Relief, p. 51, FTC v. Meta Platforms, Inc., No. 1:20-cv-03590 (D.D.C. September 8, 2021). https://storage.courtlistener.com/recap/gov.uscourts.dcd.224921/gov.uscourts.dcd.224921.82.0_2.pdf.

The more users an app has, the more attention and user data the app commands. The more attention and user data an app commands, the better the app targets ads. The better the app targets ads, the better the quality of the advertising distribution product that the app offers advertisers. If that is the case, then Google and Facebook might both argue that they maintained their power by making their products better (Woodcock, 2020b). This is not to say, however, that permanent monopoly will be the order of the day in the information age. The economic laws of monopolistic competition remain in effect, and the incumbents will eventually be displaced, as may already be happening to Facebook (Vanian, 2022). If they are not, antimonopoly activists would be wise to revive Theodore Roosevelt’s proposal to create a Federal Bureau of Corporations and regulate business conduct directly. That would solve the problem of the innovative monopoly. Under current law, the innovative monopoly does not run afoul of the antitrust laws because it does not degrade competitors’ products to get ahead. Its ability to field superior products is the source of its power. The innovative monopoly is free to abuse its power, so long as it does not gratuitously degrade competing products—until such time as competing technologies overtake it (Gifford & Kudrle, 2015). Under the absolutist approach, the innovative monopoly would be duplicated or broken up, leading to technological regression. But under direct regulation, the innovative monopoly would be preserved until competing technologies overtake it. Until then, however, it would be forced to charge a reasonable price. So far, there has been little interest in this approach in the United States. The story in Europe is different. There, it is occasionally still possible to condemn excessive pricing by an otherwise virtuous firm (Ackermann, 2012).

10. CONCLUSION

The Information Revolution will not be the last technological revolution in history. More will come, whether in quantum computing, biotechnology, artificial intelligence, or other areas. This will lead to further centralization of control over inputs in the hands of private monopolies, to panic, and to calls for antitrust to respond. If the current approach to antitrust in the United States continues, that response will be judicious and focused on distinguishing genuine technological advance from actions that slow the advance of competitors. Perhaps it will one day also include price regulation for the innovative monopolies that antitrust cannot touch. European enforcers may embrace this approach as well.

REFERENCES

Ackermann, T. (2012). Excessive Pricing and the Goals of Competition Law. In D. Zimmer (Ed.), The Goals of Competition Law (p. 349). Cheltenham, UK: Edward Elgar.
Arthur, W.B. (2011). The Nature of Technology. New York: Free Press.
Bain, J.S. (1956). Barriers to New Competition: Their Character and Consequences in Manufacturing Industries. Fairfield, NJ: A.M. Kelley.
Bork, R.H. (1978). The Antitrust Paradox: A Policy at War with Itself. New York: Basic Books.
Caenegem, R.C. van. (2000). An Historical Introduction to Western Constitutional Law. Cambridge, England: Cambridge University Press.
Chamberlin, E.H. (1956). The Theory of Monopolistic Competition: A Re-Orientation of the Theory of Value (7th ed.). Cambridge, MA: Harvard University Press.
Chandler, Jr., A.D. (1990). Scale and Scope: The Dynamics of Industrial Capitalism. Cambridge, MA: Harvard University Press.
Cheffins, B. (1989). The Development of Competition Policy, 1890–1940: A Re-Evaluation of a Canadian and American Tradition. Osgoode Hall Law Journal, 27(3), 449–490.
Demsetz, H. (1973). Industry Structure, Market Rivalry, and Public Policy. Journal of Law and Economics, 16(1), 1–9. Retrieved from https://doi.org/10.2307/724822.
Demsetz, H. (1982). Barriers to Entry. The American Economic Review, 72(1), 47–57.
Dinardo, R.L. & Hughes, D.J. (2018). Imperial Germany and War, 1871–1918. Lawrence, KS: University Press of Kansas.
Easterbrook, F.H. (1984). The Limits of Antitrust. Texas Law Review, 63, 1–40.
Farley, R.M. & Isaacs, D.H. (2020). Patents for Power: Intellectual Property Law and the Diffusion of Military Technology. Chicago: University of Chicago Press.
Fried, B.H. (1998). The Progressive Assault on Laissez Faire: Robert Hale and the First Law and Economics Movement. Cambridge, MA: Harvard University Press.
Gerber, D. (2010). Law and Competition in Twentieth Century Europe: Protecting Prometheus. Oxford: Clarendon Press.
Gerber, D. (2012). Global Competition: Law, Markets, and Globalization. Oxford: Oxford University Press.
Gifford, D.J. & Kudrle, R.T. (2015). The Atlantic Divide in Antitrust: An Examination of US and EU Competition Policy. Chicago: University of Chicago Press.
Goldschmid, H.J., Mann, H.M. & Weston, J.F. (Eds.). (1974). Industrial Concentration: The New Learning. Boston: Little, Brown.
Gordon, R.J. (2017). The Rise and Fall of American Growth: The U.S. Standard of Living since the Civil War. Princeton, NJ: Princeton University Press.
Hamilton, W. (1941). Temporary National Economic Committee Investigation of Concentration of Economic Power: Monograph No. 31: Patents and Free Enterprise. Washington, DC: Government Printing Office.
Hansmann, H. (1996). The Ownership of Enterprise. Cambridge, MA: The Belknap Press of Harvard University Press.
Hansmann, H. (2014). All Firms are Cooperatives – And So are Governments. Journal of Entrepreneurial and Organizational Diversity, 2(2), 1–10. Retrieved from https://doi.org/10.5947/jeod.2013.007.
Hawk, B.E. (2018). Antitrust in History. The Antitrust Bulletin, 63(3), 275–282. Retrieved from https://doi.org/10.1177/0003603X18783124.
Hawk, B.E. (2020). Antitrust and Competition Laws. Huntington, NY: Juris Publishing.
Hemphill, C.S. (2016). Less Restrictive Alternatives in Antitrust Law. Columbia Law Review, 116(4), 927–990.
Hofstadter, R. (1955). The Age of Reform: From Bryan to F.D.R. New York: Vintage.
Hovenkamp, H. (2009a). United States Competition Policy in Crisis: 1890–1955. Minnesota Law Review, 94, 311–367.
Hovenkamp, H. (2009b). The Neal Report and the Crisis in Antitrust. Retrieved from https://papers.ssrn.com/abstract=1348707.
Hovenkamp, H. (2020). Federal Antitrust Policy, the Law of Competition and Its Practice (6th ed.). St. Paul, MN: West Academic Publishing.
Huan, K. & Gale, E.M. (1931). Discourses on Salt and Iron: A Debate on State Control of Commerce and Industry in Ancient China, Chapters I–XIX. Leyden: E.J. Brill Ltd.
Kaysen, C. & Turner, D.F. (1959). Antitrust Policy: An Economic and Legal Analysis. Cambridge, MA: Harvard University Press.
Kirkwood, J.B. (2018). Market Power and Antitrust Enforcement. Boston University Law Review, 98, 1169–1227.
Le Gall, A. (2021). European Parliament Committee on Culture and Education: Europe’s Media in the Digital Decade. Retrieved from https://www.europarl.europa.eu/RegData/etudes/STUD/2021/690873/IPOL_STU(2021)690873_EN.pdf.
Manne, G.A. & Wright, J.D. (2010). Innovation and the Limits of Antitrust. Journal of Competition Law and Economics, 6(1), 153–202.
Mensch, E. & Freeman, A. (1990). Efficiency and Image: Advertising as an Antitrust Issue. Duke Law Journal, 1990(2), 321–373.
Mokyr, J. (2016). A Culture of Growth: The Origins of the Modern Economy. Princeton, NJ: Princeton University Press.
Morris, I. (2010). Why the West Rules—For Now. New York: Farrar, Straus and Giroux.
Moss, D.A. (2002). When All Else Fails: Government as the Ultimate Risk Manager. Cambridge, MA: Harvard University Press.
Nelson, P. (1974). Advertising as Information. Journal of Political Economy, 82(4), 729–754.
Robinson, J. (1969). The Economics of Imperfect Competition (2nd ed.). New York: St. Martin’s Press.
Scherer, F.M. (1996). Industry Structure, Strategy and Public Policy. New York: HarperCollins College Publishers.
Schumpeter, J.A. (1994). Capitalism, Socialism and Democracy. London: Routledge.
Scott, J.C. (2017). Against the Grain: A Deep History of the Earliest States. New Haven, CT: Yale University Press.
Sklar, M.J. (1988). The Corporate Reconstruction of American Capitalism, 1890–1916: The Market, the Law, and Politics. Cambridge, England: Cambridge University Press.
Skocpol, T. & Finegold, K. (1982). State Capacity and Economic Intervention in the Early New Deal. Political Science Quarterly, 97(2), 255–278.
Skowronek, S.L. (1982). Building a New American State: The Expansion of National Administrative Capacities, 1877–1920. Cambridge, England: Cambridge University Press.
Stigler, G.J. (1957). Perfect Competition, Historically Contemplated. Journal of Political Economy, 65(1), 1–17.
Sunstein, C.R. (2016). Fifty Shades of Manipulation. Journal of Marketing Behavior, 1(3–4), 213–244. Retrieved from https://doi.org/10.1561/107.00000014.
Teachout, Z. (2020). Break ’Em Up: Recovering Our Freedom from Big Ag, Big Tech, and Big Money. New York: All Points Books.
The Industrial Reorganization Act: An Antitrust Proposal to Restructure the American Economy. (1973). Columbia Law Review, 73, 635–676.
Vaheesan, S. (2014). The Evolving Populisms of Antitrust. Nebraska Law Review, 93, 370–428.
Vaheesan, S. (2018). The Twilight of the Technocrats’ Monopoly on Antitrust? Collection: Responses to Unlocking Antitrust Enforcement. Yale Law Journal Forum, 127, 980–995.
Vanian, J. (2022). Facebook Scrambles to Escape Stock’s Death Spiral as Users Flee, Sales Drop. CNBC. Retrieved from https://www.cnbc.com/2022/09/30/facebook-scrambles-to-escape-death-spiral-as-users-flee-sales-drop.html.
Woodcock, R.A. (2018). The Obsolescence of Advertising in the Information Age. Yale Law Journal, 127, 2270–2341.
Woodcock, R.A. (2020a). Law as an Antimonopoly Project. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3704450.
Woodcock, R.A. (2020b). Google and Shifting Conceptions of What It Means to Improve a Product. Truth on the Market. Retrieved from https://truthonthemarket.com/2020/12/16/google-and-shifting-conceptions-of-what-it-means-to-improve-a-product/.
Woodcock, R.A. (2021a). Outline of a General Theory of Antitrust. Retrieved from https://papers.ssrn.com/abstract=3794816.
Woodcock, R.A. (2021b). The Hidden Rules of a Modest Antitrust. Minnesota Law Review, 105, 2095–2174.
Woodcock, R.A. (2022). The Fourth Estate’s Estate. University of Pennsylvania Journal of Business Law, 26 (forthcoming). Retrieved from https://papers.ssrn.com/abstract=3930679.

10. When worlds collide: copyright law, technology, and legislative drama

Ewa Laskowska-Litak1

1  This research was funded by the National Science Centre in Poland, grant number: 2021/43/B/HS5/01156.

1. INTRODUCTION

It is almost a truism to say that the relationship between copyright law and technology is important and that the latter has enormously shaped the normative framework. It was the invention of printing techniques, with the easy, economically efficient reprinting and cost-effective dissemination channel they created, that sparked the normative evolution of copyright law (Kretschmer et al., 2010; Woodmansee, 1984). The development of the pianola led to protection over musical works (Colston & Galloway, 2006; Goldstein, 2003). The introduction of the first personal computers sparked debate about their place within intellectual property law. Most recently, with the rise of artificial intelligence, the basic paradigm of authorship in copyright law faces deconstructive challenges (Guadamuz, 2017; Hilty et al., 2021; Ihalainen, 2018). On the other hand, the tokenisation of works, the artifactualisation of authorship, and, in the background, the shift of economic value towards data rather than the results of creativity lead to the conclusion that it is not only authors who matter, but also users, whose behavioural activities, consumer interests, and purchases are of great relevance. The system of using intellectual goods is beginning to approach the service model – it is no longer possession of a copy but access to its content that interests consumers and producers. In this chapter, I will examine how the legal framework of copyright law reacted to the changing technological environment, illustrated by the example of European Union law, and with which legal instruments subsequent technological developments were addressed. The aim of this chapter is to study the key twists and turns influencing copyright law and to consider whether the impact of technology on copyright law may have been overrated. The chapter is divided into four sections. The first section focuses on the InfoSoc Directive and its legislative process, showing how a constant technicalisation of copyright law emerged at the EU level. The reason I start with this section is the need to indicate how the harmonisation of copyright law initially took place and what justifications lay behind it. The second section analyses the CJEU’s jurisprudence, presenting an interesting (yet synthetic) overview of the Court’s tendency to pass systemic and in-depth decisions regarding the interconnections between copyright and technology. This section does not investigate dogmatic particularities of case law but mainly focuses on the case portfolio and the CJEU’s reactive response to the rapidly changing environment. The third part shows a recent response from the EU legislator to the digital market in the form of the newly adopted DSM Directive and its criticism. All three sections may be summarised under one hypothesis: the complicated nature of technology led to casuistic and technology-designed norms in
copyright law, but when the normative framework failed to adapt to new business strategies, the legislators reacted with a novel yet conservative approach that will probably fail in the future. The limited scope of the chapter does not allow for a full, justified response that would provide a novel view of copyright’s struggles. Hence, the final section of this chapter presents an alternative ending for copyright law, inspired by legal design methodology, that I term ‘tech-designed copyright law’ – with a question mark at the end. The purpose of the last section is to try to answer the question of how the overall construction of copyright law could be conceived differently if the regulation-making process were to consider primarily the relevant values from the perspective of society and creators.

2. THE FIRST ACT: BEFORE AND AFTER THE FIRST EU TECH ACT: THE INFOSOC DIRECTIVE

The competence of the EU legislator to create new copyright legislation was not so certain from the beginning. At least not until the CJEU’s decision in the Deutsche Grammophon case,2 in which the CJEU noted the importance of copyright for the development of the EU internal market. According to the CJEU, the opposite interpretation (excluding intellectual property law from the scope of EU harmonisation) would lead to significant legal differentiation and failure to achieve one of the objectives of the then Community, which was to create and maintain the internal market (Barta & Markiewicz, 2016 at 589; Georgopoulos, 2012 at 35). This understanding of copyright set its paradigms from the outset: if it is to have economic relevance (in order for the EU legislator to act), only those aspects of copyright law that affect (preferably directly) the Community’s internal market should be the subject of normative regulation. I will return to this argument later. Studying the history of the further development of copyright law, scholars usually distinguish between two stages (Bechtold, 2016 at 343; Lucas-Schloetter, 2014; van Eechoud et al., 2000 at 3–4). The first phase started with the publication by the European Commission of a Green Paper in 1988 [hereinafter referred to as “GP 1988”],3 as a result of which six key sector-specific directives unifying some aspects of copyright law were adopted. The second phase is the period of horizontal harmonisation, marked by the adoption of two important directives: the InfoSoc and the Enforcement Directive (Reinbothe, 2002). The GP 1988 was one of the first documents announcing the need for legislative intervention with regard to copyright issues in the digital market. This act indicates the fundamental assumptions for subsequent legislative initiatives in this area: the need to create a Community market (currently: the internal market of the European Union), improving the level of intra- and extra-Community competition, and limiting the negative effects associated with the granting of exclusive rights, in particular in the technological context (GP, 1988 at 3–5). Assessing the assumptions of the GP 1988 from today’s perspective, the lack of identification of the Internet as a source of future copyright problems draws particular attention (van Eechoud et al., 2000 at 5).

2  CJEU, judgment of 8 June 1971, case C-78/70, Deutsche Grammophon Gesellschaft mbH v. Metro-SB-Großmärkte GmbH & Co. KG, EU:C:1971:59.
3  See the Green Paper on Copyright and the Challenge of Technology, published by the Commission in 1988, European Commission, COM (88) 172 final, Brussels, 7 June 1988 [GP 1988].

According to the legislative materials, the Internet was not the subject of concern because the possibility of broadband Internet access was not envisioned, nor was it imaginable that a traditional (tangible) copy of a copyright-protected work could be replaced by an electronic one (Georgopoulos, 2012). In response to the problems described, it was suggested that harmonisation be undertaken in six areas: piracy; private recording of audiovisual works; distribution rights, exhaustion, and rental rights; computer programs; databases; and a common regulation for multi- and bilateral contracts (GP, 1988 at 16–19). The discussion that began in this way led to the identification of further areas requiring harmonisation at the Community level: the duration of exclusive rights, personal rights, the right to reproduce, the resale of works, and broadcasting (van Eechoud et al., 2000 at 7). Let us note two observations here. First and foremost, the initial EU legislative actions were induced by technological development that challenged the then-existing economic frameworks and models. It was only the significant impact of technology on business models that led EU legislators to adopt some systemic solutions for copyright law. One may therefore ask whether the earlier, non-technological stage was a more favourable environment for creators and producers. An analysis of the legislative material and the results of the public consultation at the time suggests a negative answer. Briefly stated, it was only the impact of technology, reflected in the importance and development of the market economy, that led to some changes in the copyright regime. Might it therefore be concluded that EU copyright law has been less focused on the protection of creators since its inception? The second observation concerns the harmonisation technique chosen by the EU legislator: sectoral unification (Bechtold, 2016 at 343), in the literature also described as “pointed harmonisation.” It was aimed at addressing particular problems related not to copyright law as such but to a specific technology (Hugenholtz, 2013 at 62–64; Derclaye & Cook, 2011). What is omitted in this particular legislative approach is telling: it excludes the possibility of a regulation that might even slightly influence the theory or structure of the EU copyright law system. The question remains, why? The initial and dominant justification is the limited competence of the EU lawmakers. Since the legislators can only act within the scope of a clearly delimited competence framework, any normative acts that go beyond this sphere could prove to be invalid (Bently, 1994; Derclaye & Cook, 2011; Rosati, 2014). This very cautious stance has shaped how copyright issues have been addressed: in a very punctilious and ancillary way, which has probably contributed to the state we are in today, i.e. a framework inflexible to rapid technological, social, and cultural changes. The discussion initiated by the GP 1988 led to the presentation of a second document, the GP 1995, which, like its predecessor, indicated three assumptions for EU harmonisation: the importance of the Common market, the competitiveness of the Member States inside and outside the European Community and, interestingly, the strengthening of intellectual property rights as instruments stimulating artistic activity and the European cultural heritage (van Eechoud et al., 2000 at 7).
Thus, unlike the earlier document, the GP 1995 was the first act to include non-economic values among the assumptions justifying new legislative frameworks. As the next stage of legislation, it distinguished new areas requiring immediate intervention: the right of reproduction, the right of communication to the public, the legal protection of information on copyright management and technological security, and the right of distribution; later supplemented with the right to broadcast artistic performances, protection of phonograms, the problems of applicable law and enforcement of intellectual property rights, personal rights, and copyright management (van Eechoud et al., 2000 at 8). If we compare both acts (GP, 1988, 1995), we may notice a strong focus on the economic and
technical aspects of the exploitation of works, apart from the case of personal rights (mentioned in both acts) and the assumption of stimulating creativity and protecting cultural heritage (in the GP, 1995) (Georgopoulos, 2012). In general, the then-proposed intervention was not aimed at creating a coherent copyright system but adopted a reactive-defensive perspective on the already emerging and dominant technological and economic changes. For instance, the Commission stated explicitly that the concept of originality was not the subject of harmonisation – except in the cases of photographs, databases, and computer programs, on which the Community legislature expressed its opinion in particular provisions (see GP, 1995 at 27; see also Derclaye, 2014).4 Interestingly, the GP 1995 also reflected a pro-author perspective, taking as its starting point the need to strengthen the legal position of authors and producers (for example, in relation to online piracy), despite the necessity to strengthen fundamental rights at that time. This initial legislative work was additionally accelerated by the adoption of the WIPO Copyright Treaty (WCT) and the WIPO Performances and Phonograms Treaty (WPPT), both in 1996. In both cases, the incentives for regulation were the technological changes and the increasing popularity of digital works, combined with the limitations on amending the already binding legal provisions and international agreements (Senftleben at 87). Interestingly, both treaties adopted a largely technology-neutral approach, illustrated mainly by the example of Art. 8 of the WCT. Initially, copyright law operated with the so-called push and pull concepts: the signatories were obliged to grant rightsholders an exclusive right of public communication of their works by wire or wireless means, including the right to make works available to the public in such a way that one can have access to a work at a place and time chosen by an individual (Senftleben, 2016 at 119; Ficsor at 495–496; Ficsor, 1997 at 207). The push concept refers to cases in which a work is communicated at a predetermined time to an audience not present in the place where the act of communication is initiated. In contrast, the pull concept refers to interactive access to works, mostly dependent on the individual decision of each user who has gained access, irrespective of time and place and even of whether such access is effectively used (Senftleben at 119–120, ftn 79). The aforementioned provision thus appears technology-neutral, establishing a scope of exclusivity independent of future developments in this area. This tricky solution was called the “umbrella solution”: for the first time, it created a technology-independent yet technology-addressed construction (Karnell; Geiger et al., 2021), illustrated in Art. 8 WCT. Although the provision refers to place (and time) to distinguish between individual exclusive rights, this criterion no longer plays as significant a role as it did under the corresponding provisions of the Berne Convention. It is therefore a significant simplification of the conglomerate of property rights developed so far. Ultimately, however, it led to major differences in national legislation. In some cases, the right of communication to the public covered both situations where a work was shown to an audience present at the place where such exploitation was performed and the right to retransmission.
In other jurisdictions, public communication applied when the work was made available to an audience absent from the place of such communication, but at a predetermined time (Ficsor at 212). As mentioned in the GP 1988 and GP 1995, such discrepancies occurred within the European Community as well. The consequence of this approach is nowadays the distinction between “communication to the public” and “making the work available to the public” in such a way that everyone can have access to it at a place and time chosen by them (Bechtold, 2016 at 360–361; Rosati, 2020). The second phase of harmonisation of copyright law within the European Union can easily be associated with the need to regulate the technological aspects of intellectual property. Both the InfoSoc Directive and the Enforcement Directive were intended to meet the expectations of authors and producers in order to guarantee them full protection of the expenditure incurred in creating works and placing them on the market (Geiger et al., 2021). The InfoSoc Directive provided, inter alia, a catalogue of exclusive rights and a limited catalogue of limitations and exceptions, excluding the possibility of introducing into national law any exception not mentioned in the EU legislation. Like the later adopted Enforcement Directive, it also introduced the key protection of so-called TPMs (technological protection measures, Article 6 of the InfoSoc Directive). In the latter case, a particular mutual interaction of law and technology becomes apparent. Having justified the regulation by the need to guarantee reimbursement of incurred expenses and appreciation of the intellectual effort of authors, lawmakers decided to grant rightsholders a certain form of exclusivity, even though the subject matter of such protection is not an intellectual good but a technological device, software, or hardware, developed solely for the purpose of controlling the circulation of copies of a work (Drexl, 2015; Geiger et al., 2021). However, the effectiveness of this system was seriously compromised by cases of elusive exploitation, which took place in the vast majority of cases on the Internet, where identifying the perpetrator was (and still is) disproportionately costly in relation to the scale of the infringement (Barta & Markiewicz, 2016). In response to the growing problem of anonymous dissemination of works on the Internet, producers and distributors of copyright-protected products began creating and applying systems of technological protection measures (such as DRM) intended to prevent any form of unauthorised exploitation of works. This, in turn, was associated with an increasing risk of excluding the possibility of applying limitations or exceptions – in other words, a risk of limiting the public domain and free access to works. Therefore, the EU lawmakers’ decision to regulate TPMs should at least have balanced the interests of two opposing parties: the rightsholders, who were paying increasingly high costs for the technological system of control over copies (easily and cost-effectively copied over the Internet), and users, who in the most extreme cases were deprived of the possibility of free exploitation of works. Noteworthy in this regard is that protection of TPMs in copyright law was justified with C. Clark’s motto that “the answer to the machine is in the machine” (Brown, 2006; Clark, 1996; Patry, 2012), illustrating both the enormous appreciation for technology and the rising mistrust of copyright regulations.

4  See Commission Staff Working Paper on the review of the EC legal framework in the field of copyright and related rights, SEC(2004) 995 (Brussels: 19 July 2004), para. 3.1, p. 14. Retrieved from http://ec.europa.eu/internal_market/copyright/docs/review/sec-2004-995_en.pdf.
The following conclusions can therefore be drawn from the first period of copyright development. First, we see the resolute influence of technological developments on the changes that took place in the normative sphere. This can be seen both in the technique (the form) and in the subject matter of regulation. The former was strongly inspired even at the textual level of legal concepts (technology-inspired words, phrases, and terms); the latter was strictly directed towards solving a specific problem related to the use of technology (as, for example, with the TPMs). Second, the solutions adopted at the time were strongly conservative, casuistic, and limited to a specific problem, on top of a strong economic emphasis.
There was a lack of a systemic and axiological approach to copyright law, which is probably one of the reasons for our contemporary difficulties with this branch of law.

3. THE SECOND ACT: THE CJEU TO THE RESCUE

The InfoSoc Directive entered into force at the beginning of the 21st century and was transposed by most Member States in the first years after its enactment. It is true that Napster, referred to as the main progenitor of today’s Internet piracy, went bankrupt in 2001, but in its place appeared completely new forms of portals and intermediaries enabling the dissemination of works over the Internet. Peer-to-peer networks (e.g. Kazaa, DirectConnect, eDonkey, eMule) operated without a central server, making it difficult for rightsholders to shut them down. This, in turn, changed the anti-piracy strategy and focused attention on users who, while downloading a file, at the same time redistributed parts of it, thus exposing themselves to legal consequences. As of today, the most popular file-sharing system operating on the same principles is BitTorrent, popularised, among others, by the famous file search engine The Pirate Bay (Aldred, 2010; Groom, 2017). Pirated files were also uploaded to http and ftp servers (often public ones), whose contents could be accessed after entering a login and password. Information about them appeared in closed discussion groups and then also on special Warez websites, where hacker groups announced the programs they had cracked and explained how to install a pirated version. The appearance of Warez-type websites for the first time created the possibility of earning money from piracy, thanks to website traffic and the income generated from advertisements displayed there. However, over time, it was not hosting services that became the main source of copyright disputes. It was the creation of a tool that allowed every internet user to share their video material with others that turned out to be a turning point, and it caused further concerns for distributors of legal content. Within a year of starting operations (2005), YouTube emerged as one of the fastest-growing websites. A little over a year after its launch, in July 2006, users watched 100 million videos a day and uploaded 65,000 new items. The site averaged nearly 20 million monthly visitors. In October 2006, YouTube was acquired by Google for $1.65 billion. The deal was concluded after YouTube’s owners presented settlements with media companies, thus avoiding the legal consequences of copyright infringement (Erickson & Kretschmer, 2018; Seidenberg, 2006). It comes as no surprise that the legal framework adopted so far was not prepared for a rapidly evolving technological order. Its most essential sin was the assumption that a “traditional” (physical) copy of a work is economically equivalent to electronic or online content, or that Internet connections would remain slow and expensive (or at least not as widespread as they are now). It was also based on a rather naïve conviction of the effectiveness of pursuing claims against anonymous users. The peer-to-peer websites and then the streaming services challenged those assumptions (Chinn, 2016). The creation of an electronic copy of a work became an instant and zero-cost action, while wide access to the Internet and the ineffectiveness of pursuing claims against anonymous users called into question the sense of any normative protection. It was the CJEU, with its repeatedly criticised case law, that attempted on the one hand to adjust copyright law to the changing reality, and on the other to enable copyright to fulfil its function of protecting creativity and users.
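The legal significance of this architecture is easier to see with a toy model. In a BitTorrent-style swarm, a client that downloads pieces of a file simultaneously offers the pieces it already holds to other peers, which is precisely the act of redistribution that exposed individual users to liability. The following sketch is purely illustrative; the class and method names are hypothetical, and this is not the actual BitTorrent protocol:

# Illustrative only: why a peer-to-peer downloader is also an uploader.
# All names here are hypothetical; this is not the real BitTorrent protocol.

class Peer:
    def __init__(self, name, pieces=None):
        self.name = name
        self.pieces = set(pieces or [])  # piece indices this peer holds

    def download_from(self, other, piece):
        # Fetch one piece of the file from another peer.
        if piece in other.pieces:
            self.pieces.add(piece)

    def offers(self, piece):
        # Every held piece is automatically offered back to the swarm,
        # i.e. the redistribution that exposed users to infringement claims.
        return piece in self.pieces

seeder = Peer("seeder", pieces=range(4))  # holds the complete work
user = Peer("user")                       # starts with nothing

for piece in range(4):
    user.download_from(seeder, piece)
    assert user.offers(piece)  # downloading a piece implies sharing it

Because every participant plays both roles at once, there is no central server to enjoin; the only available defendants are the users themselves, a reality the case law discussed below then had to confront.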
Hence, a number of judgments addressed the problem of linking to works available on the internet (from Svensson to the quite recent YouTube case), advertising products on websites, reselling used software (UsedSoft and
Ranks), or issues of jurisdiction and the determination of the competent court in cases of copyright infringement on the internet (Cook, 2014). This is particularly discernible in the linking cases, i.e. with regard to activities that allow quick access to content posted on the linking page (internal reference) or on other websites (external reference). With the dynamic development of the Internet and social networks, it becomes crucial for users to find the desired information as quickly and accurately as possible, especially by circumventing paywalls or avoiding subscription fees. Moreover, this seemingly insignificant problem, in the era of widespread digitisation and the development of technological business models, implies an increase in economic benefits for websites offering search tools or providing links to copyright-protected content without charging for access, at the expense of those who create and offer such content for a fee on their own websites. In this particular example, one may note the difficulty of balancing the opposing positions of rightsholders and users. The former are entitled to decide over the content and are thus interested in effective and economically meaningful exclusivity, whereas for the latter free and unlimited access to works is crucial. An intermediary whose business model is to provide a platform for the dissemination of links giving unrestricted access to works benefits from this opposition. On the one hand, it is guaranteed constant interest from users, who increase traffic (allowing it to increase advertising profits); on the other hand, it escapes liability, since the decision to place an IPR-infringing link on the site rests solely with the users. It is therefore not surprising that for over 15 years there has been a discussion as to whether linking to content posted on the internet constitutes an infringement of copyright law or is lawful behaviour (the very first judgment on linking was delivered by the German Federal Court of Justice in 2003).5 Even the CJEU itself does not provide an unequivocal answer to this question (Rosati, 2019, 2020; Tsoutsanis, 2014).

5  See the judgment of the German Federal Court of Justice (Bundesgerichtshof, BGH) of 17 July 2003 (I ZR 259/00; “Paperboy”), “Computer und Recht” 2003/12, p. 920.

4. LAST BUT NOT LEAST: THE DSM DIRECTIVE

The CJEU’s jurisprudence and the outdated InfoSoc and Enforcement Directives were unable to adapt copyright law to the rapidly changing digital environment. Slowly, the process of technicalisation of copyright law went towards a casuistic fragmentation of its successive institutions, starting with the scope of rights and ending with questions of indirect infringement (Angelopoulos, 2016; Hugenholtz, 2000; Hugenholtz & Senftleben, 2012; Hugenholtz & van Velze, 2016; Quintais & Rendas, 2018). The growing challenges and dissatisfaction with the current legal framework sparked a debate about the possibility of adopting a new directive on copyright law. After the protests against ACTA (Bridy, 2011), the EU legislator made an attempt to act transparently and to consult on all the proposed provisions as much as possible (Quintais, 2020), albeit with only partial success. The first version of the directive on copyright in the digital single market appeared as early as the second half of 2016. Compared to its original wording, the version voted on and initially rejected by the European Parliament in July 2018 differed significantly, with a series of amendments (the document listing all the amendments alone runs to over 200 typewritten pages, and the number of amendments submitted went well over a thousand) making this proposal
one of the most widely discussed copyright acts in the history of EU legislation (see the EU report of 29 June 2018, A8-0245/2018). Additionally, the publication of the draft initiated a wave of consultations among various research centres and institutions, the outcomes of which were published in the following year. From the very beginning, two provisions generated the most controversy: the proposal to introduce a new type of right (a related right) in favour of press publishers (Senftleben et al., 2017) and the reform of the liability of so-called intermediaries for copyright infringement on the Internet (Sartor, 2017). The latter referred to platforms hosting content or giving users permission to access webspace where they can upload content that violates copyright law (Angelopoulos, 2016; Frosio, 2017; Sartor, 2016). The initial vote of the Legal Committee on the adoption of the draft was postponed after the publication of a very critical analysis of the press publishers’ right and after the submission of almost a thousand amendments to the original draft version. However, the Commission decided to introduce a series of amendments and revisions to the directive, and from that moment in 2018 the debate began to spread across the media and the EU Member States. The critical public interest in the proposed regulations began to be seen as a reason to refrain from supporting the draft directive (e.g. by Italy). As part of the protest, some internet portals (including the popular Wikipedia) shut down their websites (Laskowska-Litak, 2018). Despite the problems and controversies accompanying the legislative process, the DSM Directive was finally adopted in 2019. As such, the act reflects the current state of copyright law: it starts with provisions mainly focused on the technological aspects of work exploitation and ends with the question of how to maintain the cultural heritage of Europe. The Directive may thus be divided into five main parts. The first relates to the technological aspects of exceptions and limitations (text and data mining, exceptions for teaching activities and for cultural heritage institutions). The reason for adopting this part of the provisions was mainly the problem of rapid technological development previously not addressed by copyright law; the EU legislator found it necessary to provide provisions restoring the balance between producers (who are mainly the rightsholders of software for data analysis) and users (although here, too, the EU legislator maintained its reactive position and did not extend the exceptions to all user groups). Moreover, the DSM Directive refers to licensing mechanisms for out-of-commerce works, collective management, audiovisual works available in the video-on-demand model, and access to visual works. The reason for adopting this part of the provisions was to address the expanding business models of streaming and access services, enabling certain kinds of work exploitation and regulating issues previously addressed by only some of the Member States. Another group of provisions constitutes the exclusive right for press publishers, one of the most controversial rights granted to a particular interest group, adopted despite the general criticism raised against the provisions (Bently et al., 2017; Furgal, 2018; Pihlajarinne & Vesala, 2018; Xalabarder, 2016).
The justification for the new ancillary right was the growing popularity of aggregator services, based on the principle of aggregating all the latest information and sharing it as snippets so that users access not the press publisher’s website (which is unprofitable for journalists because it reduces traffic on the publisher’s site) but the aggregator’s website (Leistner, 2017). The next group of provisions of the DSM Directive is the regulation of the legal status and liability of websites for the content provided by their users. The famous Article 17 of the DSM Directive raised the most controversy because it obliges websites to provide filtering mechanisms so that copyright infringement can be avoided. However, in some cases, a sophisticated algorithmic solution might not only prevent infringements but also produce censorship.

AI-based program decision-making only further complicates the already complex use cases of the parody exception (Dipaola et al., 2018; Erickson, 2014; Iphofen & Kritikos, 2021; Senftleben, 2018). Last but not least, the DSM Directive addresses issues related to the fair remuneration of authors. In sum, the DSM Directive responded to the urgent needs of copyright law in its relation to technology. By adopting new provisions, even questionable ones, the EU legislators took a defensive and reactive position towards technological and business changes.
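To see why the filtering obligation raises the censorship concern described above, consider a deliberately naive sketch of fingerprint-based upload filtering. The example is purely illustrative, with hypothetical function names and an arbitrary threshold; real systems are far more sophisticated, but they share the basic limitation that a match reveals the presence of protected material, not whether its reuse is an infringement or a lawful parody or quotation:

# Illustrative only: a naive segment-fingerprint upload filter.
# Function names and threshold are hypothetical, not any platform's system.

import hashlib

def fingerprints(data: bytes, window: int = 8) -> set:
    # Hash fixed-size windows of the upload, a crude stand-in for
    # audio/video fingerprinting.
    return {
        hashlib.sha256(data[i:i + window]).hexdigest()
        for i in range(0, max(len(data) - window + 1, 1), window)
    }

reference = b"original protected soundtrack, released by the rightsholder"
parody = b"original protected soundtrack" + b" + new commentary mocking it"

ref_prints = fingerprints(reference)
upload_prints = fingerprints(parody)

overlap = len(ref_prints & upload_prints) / len(upload_prints)
if overlap > 0.3:            # threshold chosen by the platform, not by law
    print("upload blocked")  # the parody is caught: the filter sees the
                             # matching bytes, not the purpose of their reuse

The legal judgment that the parody exception requires lies entirely outside what the matching step can compute, which is precisely the gap between automated enforcement and the contextual assessment that copyright exceptions presuppose.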

5. TECH-COPYRIGHT LAW BY DESIGN?

The path of copyright development mandates two observations and raises one serious question. First, we should note a serious technicisation of copyright law, consisting in particular in the fact that the overwhelming majority of the regulations use technical language, make explicit use of technical terms, and thus constrain and impose unambiguous interpretations (Loughlan, 2006). This thesis is particularly discernible in the analysis of how copyright law has evolved: from a regulation carefully tailored to the need for protection against plagiarism to a punctilious regulation introduced in situations where technological developments no longer fitted within the systemic analysis of copyright law at all. Additionally, such regulation only gained effectiveness when it was universalised, i.e. implemented at the international level. National attempts to subordinate technological change normatively have always proved an insufficient tool to combat digital piracy. This leads to the further conclusion that it may not be possible to create an effective legal regulation without it being properly globalised and implemented at the international level. Let us stop for a moment at this point. If we relate the above observation to the current problems of copyright law, if only in the context of artificial intelligence, we may notice that any attempt to address the solution solely within the European Union may prove ineffective. A second observation, not particularly original in copyright theory albeit important at this point, is the strong predominance of economic justifications for legal solutions. As such, copyright is a subject of regulation insofar as it has an economic effect (at least within the European Union). A materialistic valuation of copyright law greatly simplifies the whole picture and distorts some adjudications whose social or cultural context could sometimes lead to diametrically opposed results (Chon, 2016; Craig, 2015). Nevertheless, this approach has been the dominant practice of EU lawmakers for many years. It was not until the much-criticised CJEU case law that this trend was reversed. The question that the above analysis raises, in turn, is whether there is any point in maintaining such a copyright system, ill-suited to modern technologies and the needs of the information society, or whether it is possible to replace it with another approach. If one were to answer this question along the lines of the methodology that has guided changes in copyright law to date, one would have to analyse which technological or economic problems copyright law currently faces. The first version of this chapter presented exactly this approach: I focused on the problems of NFTs, AI, and computer programs that are beginning to replace humans in making creative choices, and finally social media. However, I do not think this approach is sustainable. As has rightly been noted in copyright scholarship (although such voices have been isolated), a more systemic, in-depth approach is needed, focusing not on a particular partisan problem or a specific property interest, but
analysing the situation holistically, taking into account the larger social and cultural potential. Let us therefore consider how the problem of copyright’s unfitness could be solved differently, without repeating the mistakes of the past. This brings us to an interesting observation from the analysis of the prior technologisation of copyright law. First, when comparing the changes that lay behind the introduction of new copyright regulations with the current shape of intellectual property on the Internet, the position of users and creators remains similar. The initial technological changes required better protection of the interests of authors (and then producers and distributors) against excessive piracy on the internet and the unauthorised distribution of electronic copies. Over time, however, these changes ceased to keep pace with reality. Hence the gradual paradigm shift towards the protection of technological protection measures, which, from a legal-theoretical point of view, we could consider a change in the very subject of copyright protection: from an intellectual good that is the result of the author’s own creativity towards the protection of effective technological processes and mechanisms. This opens up the question of what should actually be the subject of protection. This question is not unknown to copyright law and has been asked since its inception (Alexander, 2010; Bently, 1994; Boyle, 1987), but until now mainly from the perspective of rightsholders and producers. What if we were to consider the same question from the perspective of other parties, such as users? Recent research suggests an interesting answer: from the audience’s perspective, it is access to the work and its content that matter (Elkin-Koren, 2012; Koutras, 2016). This, in turn, brings us to the next step, i.e. the appropriate shaping of the scope of exclusive rights that could, to a certain extent, be granted to rightsholders. The regulations to date have been based mainly on the assumption of the economic value of certain forms of exploitation. However, if this approach were revised to take into account, for example, the need for intercultural and intertextual use of others’ works (Chon, 2016; Craig, 2007; Rachum-Twaig, 2016), for the purposes of innovation and science, should not the scope of rights be shaped depending on the context of exploitation of works? For the sake of argument, let us imagine such a solution. Two alternatives would be possible for the subject of copyright protection. The first would be an approach oriented towards technological protection measures (such as TPMs, access to data, etc.). Such an approach presupposes the vital importance of access to a work or to its content, and it disregards the form of expression (the physical or digital form of the work). Changing the subject of protection from the work to access to the work would also allow a move away from the artefact construct, but that is probably a discussion for another occasion. The scope of rights would not be pre-defined in the act but could be tech-designed and adapted to the circumstances of actual and technological exploitation. The second solution would be to protect the author as creator, leaving the work and its content completely out of consideration. This approach is inspired by the recent serious (though, in my view, unjustified) interest in non-fungible tokens, the value of which is measured by the significance of the person issuing such a token.
Such a solution presupposes the total (or at least material) abandonment of the concept of the work as the subject of protection and its replacement by assigning a pivotal role to the figure of the author. The system thus created would aim to protect the author personally and economically, ensuring that they receive their due benefits while respecting personal goods and interests. This approach is not entirely foreign to copyright theory and can also be found in sources related to aesthetics (Sibley; Zemer). In both cases, limitations and exemptions from copyright protection should also play a key role in enabling the public interest to

be served. Unfortunately, a major weakness of both solutions is the need to undertake active and systemic reforms of copyright regulation. In this sense, both proposals remain merely theoretical approaches, although in my view worthy of discussion. In closing the above reflections, I will draw attention to one final conclusion. The strong impact of technology on copyright law initially spearheaded regulatory changes towards a greater role for those who created the works. Over time, however, it turned against them, directing the rationale for copyright reforms towards those who bore the investment in the distribution and production of the works already created. Therefore, perhaps we should ask ourselves whether the time has come for a third shift in the copyright paradigm, towards recognising a greater role for those for whom access to works is the flywheel of the entire digital economy.


11. EU consumer law and technology
Agnieszka Jabłonowska1

1. INTRODUCTION

With the rapid uptake of digital technologies in consumer markets, consumer law has become an increasingly important field in which law and technology intersect. Similarly to data protection, the protection of consumers as weaker parties – once a subject that mostly drew the attention of an expert audience – now regularly makes the headlines of major news outlets.2 This is due especially to the mounting controversies around the problematic practices of the leading information technology companies, such as Meta (Facebook), Alphabet (Google) and Amazon. Companies of this kind offer digital services to consumers – from social networks and search engines to online marketplaces – which have become an indispensable part of markets and society. However, there is also a growing recognition that their business models can expose consumers to different kinds of harm and should therefore be kept in check, among other things by means of consumer law (Calo, 2013; Pasquale, 2015; Jabłonowska et al., 2018; Helberger et al., 2021; Pałka, 2021).

The development of consumer law in liberal democracies dates back to the post-war period and, like labour law, was a response to the observed asymmetries in socio-economic relations (Tonner, 2014). In Europe, its expansion coincided with the process of European integration, and the field is now primarily shaped by directives and regulations adopted by the EU legislature (Micklitz, 2021). The European Union is at the same time one of the leading global actors in the more recent wave of digital market regulation, as illustrated by the much-publicised Digital Services Act (DSA).3 How the established rules of consumer law and the new legal instruments targeting the digital economy come together is one of the important questions faced by policymakers and legal scholars.

Prominent platform markets are not the only setting in which consumer law and scholarship meet digital technologies. On the one hand, the logic of extracting value from data, which the big techs have introduced and perfected (Cohen, 2019; Zuboff, 2019), appears to spill over to many other markets, including brick-and-mortar stores (Turow, 2021). On the other hand, there is significant interest in the ways in which other technologies, such as 3D printing and blockchain, can transform production and consumption, and in what this would mean for consumer law (De Franceschi, 2016; Howells, 2020).

Against this background, the chapter maps the debates at the intersection of consumer law and technology and reflects on several themes deemed particularly important. It begins with a short introduction to EU consumer law (Section 2). The impact of digital technologies on consumer markets is discussed thereafter, considering both the practices of online platforms and other socio-technical developments (Section 3). In the remainder of the chapter, the scholarly debates on several selected topics are reconstructed and reflected upon. Particular attention is drawn to the problem of exploitation through personalisation and to the division of responsibility in multi-party settings. Key points are summarised and highlighted in the conclusions.

1  The research leading to this publication was funded by the National Science Centre in Poland (project no. 2018/31/B/HS5/01169).
2  Consider, for example, the Cambridge Analytica scandal or the so-called Facebook Files published by the Wall Street Journal.
3  Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act) [2022] OJ L277/1. See also: Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) [2022] OJ L265/1; proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021) 206 final.

2. CONSUMER LAW: A FIELD OF RE-ADJUSTMENT

Consumer law may not be among the first associations that come to mind when thinking about law and technology. Situated between public and private law in the traditional continental European understanding (Reimann, 2014), the field has long developed on the margins of mainstream reflection. At its core, consumer law is a response to the observed asymmetries in socio-economic relations. It is also a problem-oriented field, and thus a field of constant re-adjustment. In Europe, the trajectory of consumer law development has been profoundly shaped by the concurrent expansion of EU integration (Micklitz, 2021). Accordingly, the relevant consumer rules do not aspire to coherence as an end in itself, but rather follow an instrumentalist rationality characteristic of EU law (Michaels, 2011; cf. Weatherill, 2012; Comparato, Micklitz & Svetiev, 2016; Brownsword, 2019). At the same time, this close entanglement with the European project has also left its mark on the substance of the adopted rules. While a comprehensive overview of EU consumer law would far exceed the limits of this chapter, it is useful to characterise its major building blocks.4 Four broadly applicable directives are discussed in the next sections, roughly following the lifecycle of consumer transactions.

2.1 Unfair Commercial Practices

The first building block of EU consumer law – and one which also merits close attention in view of digital markets – is Directive 2005/29/EC concerning unfair business-to-consumer commercial practices (UCPD).5 The Directive applies to commercial practices “before, during and after a commercial transaction in relation to a product” (Article 3(1) UCPD). As such, it can be triggered early on in the lifecycle of consumer transactions and maintains its relevance thereafter.

4  For a comprehensive overview, see Reich et al., 2014.
5  Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC and 2002/65/EC of the European Parliament and of the Council and Regulation (EC) No 2006/2004 of the European Parliament and of the Council (“Unfair Commercial Practices Directive”) [2005] OJ L149/22.

Prominent examples of commercial practices include advertising and marketing, but the notion is much more encompassing (Article 2(d) UCPD). The UCPD adopts a principle-based approach, with a general prohibition of unfair commercial practices at its very core (Article 5(1) UCPD). Unfair commercial practices can be divided into three categories. The first, and most general one, refers to practices that are contrary to the requirements of professional diligence and is the subject of Article 5(2) UCPD. The two other categories are misleading practices (specifically, actions and omissions) and aggressive practices, addressed by Articles 6–7 and 8–9 UCPD, respectively. Each of them can be relevant to various traders’ practices in the digital economy. Indeed, in its recent guidelines the European Commission highlighted that the UCPD applies to “practices and products that involve the use of technologies” and went on to assess a variety of practices, such as the use of tracking and personalisation, under the relevant provisions of the said Directive.6 Moreover, the 2019 reform of the UCPD by means of the Modernisation Directive7 enriched the act with several provisions targeting online commerce, such as the presentation of product rankings and online reviews.

All categories of unfair commercial practices share a common basic construction, whereby the classification of a commercial practice as unfair depends on two main factors: the trader’s breach of a given standard of conduct and the (likelihood of) material distortion of the consumer’s economic behaviour. Violation of diligence requirements is therefore not sufficient to deem a given practice unfair and, in effect, prohibited. The additional criterion concerning consumer behaviour has been subject to vigorous debate in recent years, focusing especially on the relevant consumer benchmark (Mik, 2016; Ebers, 2018). The UCPD makes room for two distinct consumer images: the average consumer – that is, one who is reasonably well-informed and reasonably observant and circumspect8 – and the vulnerable consumer. As will be discussed further in the chapter, the existing benchmarks have been a subject of critique, leading scholars to call for embracing “digital asymmetry”, understood as a structural phenomenon affecting all consumers (Helberger et al., 2021, 2022).

2.2 Pre-Contractual Disclosure

The second important piece of the EU consumer acquis is Directive 2011/83/EU on consumer rights (CRD).9 The Directive defines traders’ pre-contractual disclosure obligations as well as consumers’ right to withdraw from the contract in certain situations. Traders are required, among other things, to inform consumers about the main characteristics and the price of the goods or services.

6  Commission Notice – Guidance on the interpretation and application of Directive 2005/29/EC of the European Parliament and of the Council concerning unfair business-to-consumer commercial practices in the internal market [2021] OJ C526/1.
7  Directive (EU) 2019/2161 of the European Parliament and of the Council of 27 November 2019 amending Council Directive 93/13/EEC and Directives 98/6/EC, 2005/29/EC and 2011/83/EU of the European Parliament and of the Council as regards the better enforcement and modernisation of Union consumer protection rules [2019] OJ L328/7, hereafter: Modernisation Directive.
8  See: recitals 18–19 and Articles 5(2)–5(3) UCPD.
9  Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on consumer rights, amending Council Directive 93/13/EEC and Directive 1999/44/EC of the European Parliament and of the Council and repealing Council Directive 85/577/EEC and Directive 97/7/EC of the European Parliament and of the Council [2011] OJ L304/64.

A significant part of the CRD applies to so-called distance contracts, which also include online contracts. The importance of digital markets is further acknowledged in the more recent amendments to the act. Prominent examples are the additional disclosure duties designed specifically for contracts concluded on online marketplaces (Article 6a CRD).

The Consumer Rights Directive stresses the role of information rules in market regulation and as such relies on the information paradigm (cf. Grundmann et al., 2001). Rules of this kind have a strong political appeal, as they can correspond with several objectives, like fostering autonomy, efficiency or fairness (Wilhelmsson & Twigg-Flesner, 2006; Schauer, 2011; Busch, 2016; Seizov et al., 2019). In the EU context, disclosure duties can be connected to the idea of an active consumer, with his or her own role to play in the development of the internal market (Micklitz, 2012, 2018; Hesselink, 2016; Mak, 2016; Twigg-Flesner et al., 2018). By reducing information asymmetries between the parties, information rules are supposed to allow consumers, acting in a rational manner, to make better decisions and thereby contribute to more efficient market outcomes (Rischkowsky & Döring, 2008). Not unlike other EU rules, the CRD therefore has a dual objective, concerning consumer protection and market growth. Nowadays, the vision of consumers as rational utility maximisers is subject to growing critique, coming primarily from behavioural research (Howells, 2005; Marotta-Wurgler, 2012; Ben-Shahar & Schneider, 2014). It is observed that consumer rationality is bounded due to, among other things, cognitive limits inherent in the human mind (Simon, 1955; Thaler & Sunstein, 2008). This, however, has not led to the eventual demise of the information paradigm in EU consumer law. Instead, closer attention is currently being paid to the modalities of disclosure, so as to improve the relevance of information for consumer decision-making.

2.3 Unfair Contract Terms

While the previously discussed disclosure duties are focused mostly on the traders’ processes, EU consumer law can also interfere directly with the substance of consumer transactions. This is the case for one of the oldest pieces of EU consumer legislation, namely Directive 93/13/EEC on unfair terms in consumer contracts (UCTD).10 At its core, the Directive provides that unfair terms in contracts concluded by sellers and suppliers with consumers shall not be binding on the latter. On closer inspection, the UCTD testifies to the difficulties in finding a workable compromise within the European Union, especially in areas that touch upon the core of contract law. These difficulties translated into a number of non-obvious choices, drawing from the legal norms in place in various Member States and reflecting different varieties of justice (Wilhelmsson, 2008; Reich & Micklitz, 2014).

The heart of the UCTD is the fairness test in Article 3(1), providing that a contractual term which has not been individually negotiated shall be regarded as unfair if, contrary to the requirement of good faith, it causes a significant imbalance in the parties’ rights and obligations arising under the contract, to the detriment of the consumer. Accordingly, the fairness test applies to business-to-consumer (B2C) contracts and allows for a substantive control of terms whose content has not been influenced by the consumer. Standard-form B2C transactions are certainly a major group of contracts that fall under this notion. Pre-formulated terms of service in digital consumer markets provide a prominent example.

10  Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts [1993] OJ L95/29.

To illustrate, clauses excluding or limiting the legal liability of the trader in the event of the death or personal injury of the consumer resulting from the trader’s acts or omissions are generally regarded as unfair.11 Not all terms of non-individually negotiated B2C contracts are subject to the fairness test, however. Pursuant to Article 4(2) UCTD, assessment of the unfair nature of the terms shall relate neither to the definition of the main subject matter of the contract nor to the adequacy of the price and remuneration, on the one hand, as against the services or goods supplied in exchange, on the other, in so far as these terms are in plain intelligible language. Core terms are thus excluded from the assessment of substantive fairness, provided that the transparency requirement is complied with. A general transparency requirement for terms offered in writing is also anchored in Article 5 UCTD, emphasising the connection between transparency and fairness.

2.4 Conformity with the Contract

The UCTD is not the only EU act affecting the content of consumer transactions. An important role in this regard is also played by rules determining contractual conformity and the associated rights and obligations. Until recently, the focus of the relevant EU law remained on the sale of goods. However, following the adoption of Directive 2019/770 on digital content and digital services (DSD),12 the scope of the acquis was extended to services provided in the digital economy. The DSD defines digital services as: (a) services that allow the consumer to create, process, store or access data in digital form; or (b) services that allow the sharing of or any other interaction with data in digital form uploaded or created by the consumer or other users of that service (Article 2(2) DSD). Services provided by major online platforms, e.g. social media, are therefore covered by the scope of the act. Crucially, in defining the requirements of conformity of digital content and services with the contract, the Directive adopts a mixed approach, combining subjective and objective criteria. The former refer to various features which digital services should have as stipulated by the contract; the latter, in turn, are anchored directly in EU law. A notable objective criterion is found in Article 8(1)(b) DSD, which provides that digital services shall be of the quantity and possess the qualities and performance features … normal for digital content or digital services of the same type and which the consumer may reasonably expect, given the nature of the digital content or digital service and taking into account any public statement made by or on behalf of the trader, or other persons in previous links of the chain of transactions (with several exceptions).

Two distinct criteria must therefore be fulfilled: normality and reasonableness. Both criteria are known from the pre-existing rules on consumer sales.13 Applied to digital services, however, they remain far from straightforward (Namysłowska & Jabłonowska, 2022). Indeed, considering the high level of concentration in key digital markets, what is deemed “normal” for a given service may factually be determined by a handful of big tech companies. This draws attention to the intersection between consumer contract law and other legal norms, including digital market regulation, with the latter having a role to play in defining the scope of legitimate design choices. The second important constraint comes from the DSD itself and involves situations when the normal quality of a service does not meet reasonable consumer expectations (Schulze, 2022). The notion of reasonableness is not especially well-grounded in continental legal systems, but draws inspiration from the common law, with several adjustments (Staudenmayer, 2020). As clarified in recital 46 DSD, the standard of reasonableness “should be objectively ascertained, having regard to the nature and purpose of the digital content or digital service, the circumstances of the case and to the usages and practices of the parties involved”. The scholarship observes, moreover, that references to reasonableness as a model of conduct in EU law do not, in themselves, provide much guidance on the relevant degree of diligence (Troiano, 2009). They merely imply that a party is required to balance the conflicting interests in a given situation, yet what constitutes a reasonable balance depends on other factors (Troiano, 2009, p. 773). In the case of the DSD, attention must be drawn to the rationale of a “high level” of consumer protection, which is stressed in the Directive itself and in EU primary law.14 When choosing the right conduct, traders can thus be required to pay careful attention to consumer interests, and not only prioritise their own commercial gain.

2.5 Interim Conclusions: Between Transparency and Fairness

As seen above, EU consumer law has developed significantly over past decades and sets out multiple provisions on transparency and fairness. Its overall complexion remains ambivalent: while explicitly committed to a high level of consumer protection, the vision of an active consumer with his or her role to play in the internal market continues to be prominent in parts of the acquis. Still, EU law also contains notable provisions that go deeper into the content of the bargain, as illustrated by the UCTD and the DSD. Rules of this kind appear to be based on the premise that consumers are especially vulnerable to detriment in certain economic settings and require greater protection, offered by rules that focus on the outcomes and not only the processes (Willett, 2018). Crucially, this idea of vulnerability is not connected to consumers’ personal features, but relates more generally to their more limited resources to avoid the risk of harm and their more limited ability to respond to harm, as compared to traders (cf. Herring, 2016). Put differently, this idea of vulnerability, already present in consumer law, is broader than the notion of vulnerable consumers in the UCPD. Key aspects of the existing EU consumer rules took shape when the currently observed expansion of the digital economy was not yet in sight. Before discussing the associated challenges, it is helpful to take a closer look at the ways in which digital technologies transform consumer markets.

11  Point 1(a) of the UCTD Annex.
12  Directive (EU) 2019/770 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the supply of digital content and digital services [2019] OJ L136/1.
13  Directive 1999/44/EC of the European Parliament and of the Council of 25 May 1999 on certain aspects of the sale of consumer goods and associated guarantees [1999] OJ L171/12, Article 2. The Directive was recently replaced by Directive (EU) 2019/771 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the sale of goods, amending Regulation (EU) 2017/2394 and Directive 2009/22/EC, and repealing Directive 1999/44/EC [2019] OJ L136/28.
14  Article 1 DSD; Articles 114(3) and 169(1) of the Treaty on the Functioning of the European Union, consolidated version [2012] OJ C326/47.


3. TECHNOLOGY AND THE CONSUMER

3.1 The Growth of “Platforms”

Notable digital technology companies, like Alphabet (Google), Meta (Facebook) or Amazon, started as providers of software to consumers and have essentially maintained this line of business to this day. Their spectacular growth is closely linked to the early days of the internet, when new products and services were needed to exploit its potential, which the legislatures actively promoted via liability exemptions (Cohen, 2019). The increasing amount of information available online and the rise of Web 2.0 (O’Reilly, 2012) prompted innovative companies to offer their solutions for addressing consumer needs.15 Search engines made it possible to organise the wealth of information, social media enabled easy communication, while online marketplaces opened new possibilities for online commerce. Through this, access to knowledge and economic opportunities was supposed to be democratised, while the commercial actors allowing that to happen began to brand themselves as “platforms”, highlighting their seemingly neutral position (Gillespie, 2010).

For a number of years, this supposed neutrality of online platforms kept the more worrisome aspects of their development out of sight. This concerns not only the possible exposure of platforms to liability for third-party content and actions, but also the degree to which the functioning of algorithms depends on the processing of personal data and the risks this could pose in the long term (Pałka, 2021). To illustrate, while the original algorithm used by Google primarily ranked webpages, over time the company began to also leverage the subjective component of information retrieval and to serve results that particular users may find relevant to them (Desai, 2015). That, coupled with the business decision to monetise its services via advertising, created an incentive for a massive collection of personal data and allowed Google to establish itself as a key player in the digital advertising ecosystem (CMA, 2020).

The risks at play when essential digital services are monetised through advertising become particularly apparent when the business model of Facebook (now Meta) is considered. Van Dijck (2013) shows how the company, on the one hand, mobilised the ideas of “sharing” to make its social network appealing to consumers and, on the other hand, imposed a particular framework for online interactions through interface design. Crucially, what has been promoted through this framework is a double meaning of sharing, which Van Dijck describes as connectedness and connectivity. The former is user-centred and refers to users distributing information to each other, while the latter is owner-centred and directed at sharing user data with third parties (Van Dijck, 2013, pp. 46–47, 50). Over time, different aspects of the Facebook platform – and even of external websites16 – have been progressively redesigned to promote the second meaning of sharing. The nature of this shift, however, has remained largely unknown to consumers.

15  According to O’Reilly, the core competencies of Web 2.0 companies include: “services, not packaged software, with cost-effective scalability; control over unique, hard-to-re-create data sources that get richer as more people use them; trusting users as codevelopers; harnessing collective intelligence; leveraging the long tail through customer self-service; software above the level of a single device; lightweight user interfaces, development models, and business models”.
16  For illustration, consider the expansion of the “Like” button.

While scholars have warned about it for quite some time (Pasquale, 2015; Zuboff, 2015), the damaging potential of large-scale commercial surveillance only gained broader recognition when the Cambridge Analytica scandal was revealed. It became apparent that data collected via Facebook can be exploited to target users with personalised messages that can affect their real-life behaviour (Wylie, 2019). Since then, the logic of extracting value from data, which platforms have perfected, remains in the spotlight of academic debates (Cohen, 2019; Zuboff, 2019; Turow, 2021) and meets with increased attention from regulators, across and beyond consumer law.

3.2 Emerging Technologies

While the importance of platforms cannot be overstated, other related phenomena also inspire the narratives about transforming markets. Looking at the Gartner hype cycle for emerging technologies (Gartner, 2022), one such theme is certainly the accelerated development of artificial intelligence (AI). Following the paradigm shift in AI from rule-based programming to machine learning systems, the discipline is now experiencing a phenomenal revival (Russell & Norvig, 2021). Important actors behind this development are again online platforms, having access to the needed datasets, programming talent and processing power. Tangible examples are the so-called virtual assistants offered by most big techs (Stucke & Ezrachi, 2017; Turow, 2021), yet there are also countless systems, such as those underlying the selection of personalised content, that remain unembodied (Yeung, 2017). Not surprisingly, this further exacerbates the concerns referred to earlier in this chapter.

Online platforms, however, are not the only actors involved in the AI game. Companies active in different sectors are now deploying machine learning to optimise their processes. Examples range from credit scoring and fraud identification, through AI-assisted diagnostics and autonomous vehicles, to machine translation and chatbots (Jabłonowska et al., 2018). Much of the recent media buzz is also triggered by so-called generative AI, able to produce content such as text and images. Realistic pictures and pieces of writing generated on the basis of textual prompts by OpenAI’s DALL·E 2 and ChatGPT make us wonder about the transformative potential of this line of research, which now even enters creative domains (Davenport & Mittal, 2022).

Other technological phenomena whose impact on consumer markets is being debated include 3D printing, the Internet of Things, distributed ledger technology (blockchain) as well as virtual and augmented reality (Howells, 2020). While there is indeed a certain uptake regarding each of them, the pace remains unhurried. At present, the two most widely discussed topics appear to be smart contracts and the metaverse. The former, following Durovic and Janssen (2018), refer to “software programmes which are often, but not necessarily, built on blockchain technology as a set of promises, specified in digital form, including protocols within which the parties perform on these promises”. In short, if an if-then rule is triggered by the relevant event, smart contracts are capable of automatically enforcing it, for example by transferring an asset; the sketch below illustrates this logic.
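To make this “if-then” core concrete, consider a deliberately minimal sketch. It is written in ordinary Python rather than an actual smart-contract language such as Solidity, and the escrow scenario, class and names are invented for illustration only; on a real blockchain, the decisive difference is that neither party can halt the transfer once the contract is deployed.

# A minimal, illustrative sketch of the "if-then" core of a smart contract.
# Hypothetical names and scenario; real smart contracts run on a blockchain,
# where execution cannot be unilaterally stopped by either party.

class Party:
    def __init__(self, name, balance):
        self.name = name
        self.balance = balance

class EscrowContract:
    """A promise in digital form: pay the seller once delivery is observed."""

    def __init__(self, buyer, seller, price):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.settled = False

    def on_event(self, event):
        # The "if" part: a pre-agreed condition reported to the system.
        if event == "goods_delivered" and not self.settled:
            # The "then" part: performance is enforced automatically,
            # here by transferring the asset (the payment).
            self.buyer.balance -= self.price
            self.seller.balance += self.price
            self.settled = True

buyer, seller = Party("consumer", 500), Party("trader", 0)
contract = EscrowContract(buyer, seller, price=120)
contract.on_event("goods_delivered")
print(buyer.balance, seller.balance)  # 380 120

Note what the sketch leaves out: there is no room for withholding payment because the goods turned out to be defective. This rigidity is precisely what makes the consequences of full self-execution, returned to in the conclusions, so difficult to estimate.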
The concept of the metaverse, in turn, has been popularised by Mark Zuckerberg, who in 2021 announced the rebranding of Facebook to Meta. The CEO of the notorious social network painted the vision of an immersive platform making it possible for everyone “to teleport instantly as a hologram” and “move across … experiences on different devices – augmented reality glasses to stay present in the physical world, virtual reality to be fully immersed, and phones and computers to jump in from existing platforms” (Zuckerberg, 2021). The investment in the metaverse is seen as the company’s attempt to move beyond its existing revenue stream, coming primarily from online advertising.

One year after the rebranding took place, however, the value of Meta’s stock has seen a downward trend and the future of the company remains uncertain.

4. STATE OF DEBATE: RECONSTRUCTION AND REFLECTION

The European debates on consumer law and technology characteristically focus on the question of whether existing rules are adequate for the digital age (see Pałka and Brożek in this Research Handbook). The specific topics discussed can be placed, once again, within the narrative of a contract lifecycle. Of course, such a perspective is merely a simplification. As was already stated, for example, digital advertisements are no longer self-standing acts between advertisers and consumers, but rather form an integral part of consumers’ long-term relationships with platforms. Still, to structure the discussion, it is helpful to start with the problems that typically concern the pre-contractual stage and conclude with those pertaining to contract execution.

4.1 Personalisation and Dark Patterns

The issue that appears to attract the most attention when it comes to approaching consumers with product information is personalisation in marketing and pricing. The notion itself is rather vague and appears to be used strategically by the industry to trigger positive associations with at best ambivalent practices. Broadly speaking, personalisation relates to extracting profiles from large datasets in order to present consumers with offers that traders believe are suitable for people with their features. To understand the notion better, attempts have also been made to distinguish personalisation from other forms of data-driven marketing and pricing. As regards advertising, the notion of personalisation is sometimes used interchangeably with other concepts such as online behavioural advertising (Boerman et al., 2017; Laux et al., 2021) and microtargeting (Ebers, 2018). This, in turn, is set apart from the more traditional ways of tailoring content to its viewers, such as those dependent exclusively on context. In today’s data-driven economy, in which the amount of generated data is constantly reaching new heights, inferring consumer profiles has become commonplace. As explained by Custers (2018), basic techniques like regression, classification or clustering can be used to infer new attributes from the available ones. Both the degree of precision and the sources of such inferences (coming from the same person or from other persons) may vary, as the schematic example below illustrates.
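To unpack what inferring new attributes from available ones can look like in practice, consider a deliberately simplified sketch. The data, the attribute names and the choice of a logistic regression classifier are all invented for illustration; real profiling pipelines are far more elaborate.

# Hedged illustration with invented data: inferring an unobserved attribute
# ("reacts to discounts") for a new consumer from attributes observed for
# *other* consumers -- one of the inference sources Custers describes.
from sklearn.linear_model import LogisticRegression

# Observed attributes per consumer: [age, average basket (EUR), visits/week]
X_known = [[23, 15, 6], [54, 90, 1], [31, 20, 4], [47, 80, 2]]
y_known = [1, 0, 1, 0]  # 1 = reacted to discount campaigns in the past

model = LogisticRegression().fit(X_known, y_known)

# A consumer who never disclosed anything about price sensitivity:
new_consumer = [[28, 18, 5]]
print(model.predict(new_consumer))        # the inferred attribute
print(model.predict_proba(new_consumer))  # the (varying) degree of precision

Crucially, the label attached to the new consumer is derived entirely from data about other persons, which helps to explain why, as argued below, control architectures limited to individual rights capture only part of the problem.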

Against this background, leading consumer law scholars have argued that personalisation should not be mistaken for individualisation, but is rather “a pre-designed form of quasi-individualised standardisation” in which the individual is replaced by proxies and in which everyone is potentially vulnerable (Helberger et al., 2021, p. 94). Accordingly, the relevant “control architecture” should not be limited to individual rights, but should also be monitored at a collective level (Helberger et al., 2021, p. 104).

A similar logic can be transferred to the discussion on personalised pricing, which is a buzzword describing a data-driven form of price discrimination. The latter is an established concept in economic theory and refers to the differentiation of prices charged to consumers for the same or similar products in order to maximise profits, where such differences are not motivated by different cost structures, e.g. different supply costs (Stigler, 1966, p. 209). Its most advanced form is the so-called first-degree price discrimination, which consists of providing an individualised price for each consumer on the basis of his or her willingness to pay (Steppe, 2017). The prevalence of personalised pricing in digital consumer markets remains a matter of debate. Several surveys carried out in Europe have found no evidence of consistent and systematic use of profiling to implement price variations (BMJV, 2021),17 although isolated cases were indeed confirmed (ACM, 2022; cf. Mikians et al., 2012). What appears far more prominent is so-called dynamic pricing, which refers to flexible price adjustments in response to market conditions, such as changes in supply and demand or the behaviour of competitors (Grochowski et al., 2022). The difference between the two practices is sketched below.
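The contrast can be reduced to the inputs of a price-setting rule. The following sketch is a deliberate simplification, with invented functions and numbers, and is not a model of any actual pricing system:

# Hedged sketch: dynamic vs. personalised pricing as the same kind of
# standardised, data-driven rule fed with different inputs.

def dynamic_price(base, demand_index, competitor_price):
    # Reacts to *market* conditions; every consumer sees the same price.
    return min(base * demand_index, competitor_price * 0.99)

def personalised_price(base, willingness_to_pay):
    # First-degree price discrimination: keyed to the *individual's*
    # (inferred) willingness to pay, so two consumers may see different
    # prices for the same product at the same moment.
    return max(base, willingness_to_pay * 0.95)

print(dynamic_price(100, demand_index=1.3, competitor_price=125))  # 123.75
print(personalised_price(100, willingness_to_pay=160))             # 152.0

The shared shape of the two rules also anticipates the similarity noted just below: both are standardised, data-driven practices, differing chiefly in whether the decisive input describes the market or the individual.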
Retrieved from https://data​ .europa​.eu​/doi​/10​.2818​/990439 18  Article 6(1)(ea) CRD. 19  Commission Notice (n 6) point 4.2.7. 20  Article 25 DSA. 21  See European Law Institute. (2023). European Commission’s Public Consultation on Digital Fairness – Fitness Check on EU Consumer Law. Response of the European Law Institute.

This also applies to the perspective of marketers, who may have to rely on information provided by the platforms to assess the effectiveness of their campaigns. Could it turn out, as suggested by Hwang (2020), that platforms’ claims about their targeting capacities are largely overblown? How can personalised environments be monitored so that consumer interests are safeguarded? Which consumer groups are most exposed to the negative effects of personalisation? Can reliable information be provided to advertisers without putting consumer interests at further risk? And finally: is the model of monetising online services through targeted advertisements tenable in the long term?

4.2 Duties and Liabilities in Multi-Party Settings

Another major theme in debates about consumer law and technology concerns the duties of the parties involved in three-party relationships in the platform economy. Of special interest are the peer-to-peer relationships facilitated by online platforms, whose development is captured by the notion of the sharing, or collaborative, economy (Davidson et al., 2018; Hatzopoulos, 2018). With the advent of platforms such as Airbnb and Uber, peer-to-peer e-commerce has moved beyond the sale of goods and has begun to transform service markets. This raised a number of questions concerning, among other things, the appropriate allocation of responsibilities between the actors involved. Although online platforms typically position themselves as mere intermediaries (Gillespie, 2010; Codagnone et al., 2018), they also appear to be the parties with the strongest factual position. Accordingly, attempts have been made in the scholarship to clarify and possibly expand the scope of platforms’ obligations vis-à-vis consumers (Maultzsch, 2018; Tereszkiewicz, 2018; Devolder, 2019; Busch et al., 2020).

The original responses from the EU law- and policymakers were rather cautious and focused on exploiting the potential of existing rules. In particular, the fairness test set out in the UCPD was elaborated in the Commission’s guidelines,22 platforms were called upon to make their standard terms more balanced,23 and new information duties for contracts concluded through online marketplaces were added to the CRD.24 Further-reaching requirements were only imposed on the platforms through a subsequent legislative initiative, namely the DSA. For example, the regulation clarifies in Article 6(3) that the established liability exemptions for the providers of hosting services do not apply with respect to liability under consumer protection law, where the platform design could lead an average consumer to believe that the underlying product or service is provided either by the online platform itself or by a recipient of the service who is acting under its authority or control. The act offers no further guidance about the relevant criteria, although a possible scenario could be where the identity of the seller or service provider is not displayed to the consumer and where contacting the contracting party is only possible through the platform. Still, what the DSA ultimately achieves is merely the removal of the liability exemption, leaving it to consumer protection law to determine the conditions for liability.

22  Commission Notice (n 6) points 4.2.1. and 4.2.2.
23  See, e.g. European Commission, Factsheet of the changes implemented by Airbnb, 2019, https://commission.europa.eu/system/files/2019-07/airbnb_factsheet.pdf
24  Article 4(5) Modernisation Directive.

It is not even entirely clear what the latter is supposed to encompass, and particularly whether it is limited to rules on B2C transactions.25 The problematic intersection of EU platform regulation with consumer law, as well as with other legal acts, is not limited to liability exemptions. To illustrate, following recent amendments, the CRD requires the providers of online marketplaces to inform consumers whether the third party offering the goods, services or digital content is a trader or not, based on the latter’s declaration.26 The DSA appears to kick in at a later stage and imposes further traceability requirements concerning the suppliers who qualify as traders.27 In those cases, however, a higher standard of behaviour is expected of platforms, as they should also “make best efforts to assess whether the information [provided by the traders] is reliable and complete”. It is not evident why the provisions envisaged for the peer-to-peer economy offer a lower level of consumer protection than those directed at B2C e-commerce. Moreover, the DSA also contains novel provisions requiring online platforms to design and organise their online interfaces in a way that enables traders to comply with their obligations regarding pre-contractual disclosure, compliance and product safety information.28 However, the requirement is again limited to standard B2C transactions. This remains somewhat at odds with the recently proposed regulation on data collection and sharing relating to short-term accommodation rental services,29 which operates with a more neutral notion of a “host”. More than six years after the adoption of the European agenda for the collaborative economy,30 the EU legislature is still visibly struggling to find a coherent framework for three-party relationships.

Besides further exploring the allocation of responsibilities in the platform economy, as well as the relation between established instruments of consumer law and the more recent wave of platform regulation, further research could also look into empirical questions, such as the capacity of online platforms to mitigate the risks inherent in their operation. How many problems are actually reported to companies like Airbnb, and what measures are typically taken in response? The opacity of platform practices in this domain is quite remarkable, considering their importance for the shaping of consumer expectations, the degree of possible exposure to liability and the related insurance costs. Another noteworthy aspect of the platform economy concerns the existence of the so-called Brussels effect, that is, the influence of EU law beyond its geographical borders (Bradford, 2020). Do regulatory initiatives in Europe affect the platforms’ global practices? The developments observed in the domain of standard terms, where platforms appear to apply more balanced terms only in certain jurisdictions, challenge this hypothesis.

25  See also: Judgment of the Court of 4 October 2018, C-105/17, Kamenova, ECLI:EU:C:2018:808.
26  Article 6a(1)(b) CRD.
27  Article 30 DSA.
28  Article 31 DSA.
29  Proposal for a Regulation of the European Parliament and of the Council on data collection and sharing relating to short-term accommodation rental services and amending Regulation (EU) 2018/1724, COM(2022) 571 final.
30  Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, A European agenda for the collaborative economy, COM(2016) 356 final.


5. CONCLUSIONS

The chapter examined the intersection of consumer law and technology, focusing on the European law perspective. The analysis revealed that the core themes of transparency and fairness maintain their relevance in the platform economy. Existing rules provide a useful starting point for protecting consumers as weaker parties in transforming markets and are treated as such by law- and policymakers. The protection afforded to consumers could become even stronger in the future if the existing rules were subject to a more radical re-interpretation. For the UCPD, this could be done by embracing the concept of digital asymmetry advocated by Helberger et al. (2022); with respect to other acts, for example, by giving a bold reading to the notions of transparency and reasonable expectations. Moreover, further research could explore the relationship between consumer law and the more recent wave of platform regulation and pose empirical questions concerning consumer protection in the digital economy.

Due to the limits of this chapter, a number of important topics could not be analysed with the attention they deserve. For example, the impact of emerging technologies, such as artificial intelligence, was only remarked upon briefly and the associated regulatory developments have been left out of scope. Online platforms are, of course, leading players in this field, and much of the discussion about the use of personalisation is also a discussion about fair uses of AI. However, as was mentioned, other actors can deploy AI systems too, with credit scoring being a prominent example. These sector-specific developments are certainly substantial and lend themselves to a separate in-depth study. Some of the other technologies discussed throughout the chapter are yet to show their relevance for consumer markets. At the moment we seem to be quite far from both the vision of the metaverse and a wider use of 3D printing by consumers. The same is generally true for blockchain and smart contracts, although the associated discussions and experiments can provide plenty of food for thought. With their promise of self-execution, smart contracts could bring significant value to consumers, who often benefit from different legal rights but lack the resources to enforce them. Deploying smart contracts to consumers’ advantage is nonetheless unlikely to happen at traders’ own initiative. Moreover, the consequences of full self-execution, even if only applied to a limited domain, are difficult to estimate, making us wonder if imperfect enforcement is not a design feature of every legal system – as is the (quite convenient) vagueness of legal rules.

BIBLIOGRAPHY

ACM. (2022). Following ACM actions, Wish bans fake discounts and blocks personalized pricing. Retrieved from www.acm.nl/en/publications/following-acm-actions-wish-bans-fake-discounts-and-blocks-personalized-pricing.
Ben-Shahar, O. & Schneider, C.E. (2014). More than you wanted to know. Princeton: Princeton University Press.
BMJV. (2021). Empirie zu personalisierten Preisen im E-Commerce. Retrieved from www.bmjv.de/DE/Service/Fach-publikationen/Empirie-Studie.html.
Bodo, B., Helberger, N., Irion, K., Zuiderveen Borgesius, F., Moller, J., van de Velde, B., Bol, N., van Es, B. & de Vreese, C. (2017). Tackling the algorithmic control crisis: The technical, legal, and ethical challenges of research into algorithmic agents. Yale Journal of Law & Technology, 19, 133–180.
Boerman, S.C., Kruikemeier, S. & Zuiderveen Borgesius, F.J. (2017). Online behavioral advertising: A literature review and research agenda. Journal of Advertising, 46(3), 363–376.

Bradford, A. (2020). The Brussels effect: How the European Union rules the world. Oxford: Oxford University Press.
Brownsword, R. (2019). Law, technology and society: Re-imagining the regulatory environment. London: Routledge.
Busch, C. (2016). The future of pre-contractual information duties: From behavioural insights to big data. In Twigg-Flesner, C. (Ed.). Research handbook on EU consumer and contract law (pp. 221–240). Cheltenham: Edward Elgar Publishing.
Busch, C., Dannemann, G., Schulte-Nölke, H., Wiewiórowska-Domagalska, A. & Zoll, F. (2020). An introduction to the ELI model rules on online platforms. Journal of European Consumer and Market Law, 9(2), 61–70.
Calo, R. (2013). Digital market manipulation. George Washington Law Review, 82, 995.
CMA. (2020). Online platforms and digital advertising: Market study final report. Retrieved from www.gov.uk/cma-cases/online-platforms-and-digital-advertising-market-study.
Cohen, J.E. (2019). Between truth and power: The legal constructions of informational capitalism. Oxford: Oxford University Press.
Codagnone, C., Karatzogianni, A. & Matthews, J. (2018). Platform economics: Rhetoric and reality in the "sharing economy". Bingley: Emerald Group Publishing.
Comparato, G., Micklitz, H.W. & Svetiev, Y. (2016). The regulatory character of European private law. In Twigg-Flesner, C. (Ed.). Research handbook on EU consumer and contract law (pp. 35–67). Cheltenham: Edward Elgar Publishing.
Custers, B.H.M. (2018). Profiling as inferred data: Amplifier effects and positive feedback loops. In Bayamlioğlu, E., Baraliuc, I., Janssens, L. & Hildebrandt, M. (Eds.). Being profiled: Cogitas ergo sum: 10 years of 'Profiling the European citizen' (pp. 112–116). Amsterdam: Amsterdam University Press.
Davenport, T.H. & Mittal, N. (2022). How generative AI is changing creative work. Retrieved from www.hbr.org/2022/11/how-generative-ai-is-changing-creative-work.
Davidson, N.M., Finck, M. & Infranca, J.J. (Eds.). (2018). The Cambridge handbook of the law of the sharing economy. Cambridge: Cambridge University Press.
De Franceschi, A. (Ed.). (2016). European contract law and the digital single market. Antwerp: Intersentia.
Desai, D.R. (2015). Exploration and exploitation: An essay on (machine) learning, algorithms, and information provision. Loyola University Chicago Law Journal, 47, 541.
Devolder, B. (2019). The platform economy: Unravelling the legal status of online intermediaries. Cambridge: Intersentia.
Durovic, M. & Janssen, A. (2018). The formation of blockchain-based smart contracts in the light of contract law. European Review of Private Law, 26(6), 753–771.
Ebers, M. (2018). Beeinflussung und Manipulation von Kunden durch "Behavioral Microtargeting". MMR, 7, 423–428.
Gartner. (2022). What's new in the 2022 Gartner Hype Cycle for Emerging Technologies. Retrieved from www.gartner.com/en/articles/what-s-new-in-the-2022-gartner-hype-cycle-for-emerging-technologies.
Gillespie, T. (2010). The politics of 'platforms'. New Media & Society, 12(3), 347–364.
Grochowski, M., Jabłonowska, A., Lagioia, F. & Sartor, G. (2022). Algorithmic price discrimination and consumer protection: A digital arms race? Technology and Regulation, 36–47.
Grundmann, S., Kerber, W. & Weatherill, S. (2001). Party autonomy and the role of information in the internal market – An overview. In Grundmann, S., Kerber, W. & Weatherill, S. (Eds.). Party autonomy and the role of information in the internal market (pp. 3–38). Berlin: De Gruyter.
Hatzopoulos, V. (2018). The collaborative economy and EU law. Oxford: Bloomsbury Publishing.
Helberger, N., Lynskey, O., Micklitz, H.W., Rott, P., Sax, M. & Strycharz, J. (2021). EU consumer protection 2.0 – Structural asymmetries in digital consumer markets. A joint report from research conducted under the EUCP2.0 project. Retrieved from www.beuc.eu/sites/default/files/publications/beuc-x-2021-018_eu_consumer_protection_2.0.pdf.
Helberger, N., Sax, M., Strycharz, J. & Micklitz, H.W. (2022). Choice architectures in the digital economy: Towards a new understanding of digital vulnerability. Journal of Consumer Policy, 45(2), 175–200.

Herring, J. (2016). Vulnerable adults and the law. Oxford: Oxford University Press.
Hesselink, M.W. (2016). Contract theory and EU contract law. In Twigg-Flesner, C. (Ed.). Research handbook on EU consumer and contract law (pp. 508–534). Cheltenham: Edward Elgar Publishing.
Howells, G. (2005). The potential and limits of consumer empowerment by information. Journal of Law and Society, 32(3), 349–370.
Howells, G. (2020). Protecting consumer protection values in the fourth industrial revolution. Journal of Consumer Policy, 43(1), 145–175.
Hwang, T. (2020). Subprime attention crisis: Advertising and the time bomb at the heart of the internet. New York: FSG Originals.
Jabłonowska, A., Kuziemski, M., Nowak, A.M., Micklitz, H.W., Pałka, P. & Sartor, G. (2018). Consumer law and artificial intelligence: Challenges to the EU consumer law and policy stemming from the business' use of artificial intelligence: Final report of the ARTSY project. EUI Working Paper LAW 2018/11. Retrieved from https://cadmus.eui.eu/handle/1814/57484.
Kramer, A.D.I., Guillory, J.E. & Hancock, J.T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences of the United States of America, 111(24), 8788–8790.
Laux, J., Wachter, S. & Mittelstadt, B. (2021). Neutralizing online behavioural advertising: Algorithmic targeting with market power as an unfair commercial practice. Common Market Law Review, 58(3), 719–750.
Mak, V. (2016). The consumer in European regulatory private law. In Leczykiewicz, D. & Weatherill, S. (Eds.). The image of the consumer in EU law: Legislation, free movement and competition law (pp. 381–400). Oxford: Hart Publishing.
Marotta-Wurgler, F. (2012). Does contract disclosure matter? Journal of Institutional and Theoretical Economics, 168(1), 94–123.
Maultzsch, F. (2018). Contractual liability of online platform operators: European proposals and established principles. European Review of Contract Law, 14(3), 209–240.
Michaels, R. (2011). Of islands and the ocean: The two rationalities of European private law. In Brownsword, R., Micklitz, H.W., Niglia, L. & Weatherill, S. (Eds.). The foundations of European private law (pp. 139–158). Oxford: Hart Publishing.
Micklitz, H.W. (2012). The expulsion of the concept of protection from the consumer law and the return of social elements in the civil law: A bittersweet polemic. Journal of Consumer Policy, 35(3), 283–296.
Micklitz, H.W. (2018). The politics of justice in European private law: Social justice, access justice, societal justice. Cambridge: Cambridge University Press.
Micklitz, H.W. (2021). The intellectual community of consumer law and policy in the EU. In Micklitz, H.W. (Ed.). The making of consumer law and policy in Europe (pp. 63–92). Oxford: Hart Publishing.
Mik, E. (2016). The erosion of autonomy in online consumer transactions. Law, Innovation and Technology, 8(1), 1–38.
Mikians, J., Gyarmati, L., Erramilli, V. & Laoutaris, N. (2012). Detecting price and search discrimination on the internet. In Proceedings of the 11th ACM workshop on hot topics in networks (pp. 79–84).
Namysłowska, M. & Jabłonowska, A. (2022). Artificial intelligence and platform services: EU consumer (contract) law and new regulatory developments. In Ebers, M., Poncibò, C. & Zou, M. (Eds.). Contracting and contract law in the age of artificial intelligence (pp. 221–248). Oxford: Hart Publishing.
O'Reilly, T. (2012). What is web 2.0? Design patterns and business models for the next generation of software. In Mandiberg, M. (Ed.). The social media reader (pp. 32–52). New York: New York University Press.
Pałka, P. (2021). The world of fifty (interoperable) Facebooks. Seton Hall Law Review, 51(4), 1193–1239.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Cambridge: Harvard University Press.
Reich, N. & Micklitz, H.W. (2014). The court and sleeping beauty: The revival of the Unfair Contract Terms Directive (UCTD). Common Market Law Review, 51(3), 771–808.
Reich, N., Micklitz, H.W., Rott, P. & Tonner, K. (2014). European consumer law. Antwerp: Intersentia.
Reimann, M. (2014). The American advantage in global lawyering. The Rabel Journal of Comparative and International Private Law, 78(1), 1–36.

Rischkowsky, F. & Döring, T. (2008). Consumer policy in a market economy: Considerations from the perspective of the economics of information, the new institutional economics as well as behavioural economics. Journal of Consumer Policy, 31(3), 285–313.
Russell, S.J. & Norvig, P. (2021). Artificial intelligence: A modern approach. Hoboken: Pearson.
Schauer, F. (2011). Transparency in three dimensions. University of Illinois Law Review, 1339.
Schulze, R. (2022). § 327e BGB. In Schulze, R. (Ed.). Bürgerliches Gesetzbuch: Handkommentar (11th edn). Baden-Baden: Nomos.
Seizov, O., Wulf, A.J. & Luzak, J. (2019). The transparent trap: A multidisciplinary perspective on the design of transparent online disclosures in the EU. Journal of Consumer Policy, 42(1), 149–173.
Simon, H.A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99–118.
Staudenmayer, D. (2020). The directives on digital contracts: First steps towards the private law of the digital economy. European Review of Private Law, 28(2), 219–250.
Steppe, R. (2017). Online price discrimination and personal data: A general data protection regulation perspective. Computer Law & Security Review, 33(6), 768–785.
Stigler, G.J. (1966). The theory of price. New York: Macmillan.
Stucke, M.E. & Ezrachi, A. (2017). How digital assistants can harm our economy, privacy, and democracy. Berkeley Technology Law Journal, 32(3), 1239–1300.
Tereszkiewicz, P. (2018). Digital platforms: Regulation and liability in the EU law. European Review of Private Law, 26(6), 903–920.
Thaler, R.H. & Sunstein, C.R. (2008). Nudge: Improving decisions about health, wealth, and happiness. New Haven: Yale University Press.
Thaler, R.H., Sunstein, C.R. & Balz, J.P. (2013). Choice architecture. In Shafir, E. (Ed.). The behavioral foundations of public policy (pp. 428–439). Princeton: Princeton University Press.
Tonner, K. (2014). From the Kennedy message to full harmonising consumer law directives: A retrospect. In Purnhagen, K. & Rott, P. (Eds.). Varieties of European economic law and regulation: Liber amicorum for Hans Micklitz (pp. 693–707). Dordrecht: Springer.
Troiano, S. (2009). To what extent can the notion of 'reasonableness' help to harmonize European contract law? Problems and prospects from a civil law perspective. European Review of Private Law, 17(5), 749–787.
Turow, J. (2021). The voice catchers. New Haven: Yale University Press.
Twigg-Flesner, C., Schulze, R. & Watson, J. (2018). Protecting rational choice: Information and the right of withdrawal. In Howells, G., Ramsay, I. & Wilhelmsson, T. (Eds.). Handbook of research on international consumer law (pp. 111–138). Cheltenham: Edward Elgar Publishing.
Van Dijck, J. (2013). The culture of connectivity: A critical history of social media. Oxford: Oxford University Press.
Weatherill, S. (2012). The Consumer Rights Directive: How and why a quest for "coherence" has (largely) failed. Common Market Law Review, 49(4), 1279–1317.
Wilhelmsson, T. (2008). Various approaches to unfair terms and their background philosophies. Juridica International, 14, 51–57.
Wilhelmsson, T. & Twigg-Flesner, C. (2006). Pre-contractual information duties in the acquis communautaire. European Review of Contract Law, 2(4), 441–470.
Willett, C. (2018). Re-theorising consumer law. The Cambridge Law Journal, 77(1), 179–210.
Wylie, C. (2019). Mindf*ck: Cambridge Analytica and the plot to break America. New York: Random House.
Yeung, K. (2017). 'Hypernudge': Big data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136.
Zuboff, S. (2015). Big other: Surveillance capitalism and the prospects of an information civilization. Journal of Information Technology, 30(1), 75–89.
Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. London: Profile Books.
Zuckerberg, M. (2021). Founder's letter. Retrieved from www.about.fb.com/news/2021/10/founders-letter.

12. Criminal law and technology

Sofie Royer and Rune Vanleeuw

1. INTRODUCTION

The unstoppable development of new technologies that find their way into our societies inevitably entails new tools to commit criminal offences, often in an easier way and at a larger scale than before. Criminals are, for instance, using deepfake technology to spoof faces and commit online identity fraud.1 Moreover, new types of harmful or reprehensible behaviour emerge due to the development of new technologies. The main question that will be addressed in the first part of this chapter is whether existing criminal offences are sufficient and adequate to fight those new types of behaviour.2 We mainly focus on the conceptualisation of criminal offences in the legislation of Member States of the Council of Europe (hereafter: CoE). As some criminal codes still date from Napoleonic times, and criminal laws cannot be interpreted by analogy according to the principle of legality,3 legislators are urged to reflect on new legal provisions. The non-consensual distribution of sexual images, cyberflashing, doxing, and the illegal use of stalkerware are examples of behaviours that hardly fit into existing provisions and will, therefore, be discussed in the first part of this chapter. Apart from some European directives and the CoE's Convention on Cybercrime4 (hereafter: Cybercrime Convention), criminal laws fall under the sovereignty of states. As a consequence, this chapter will be illustrated with legal provisions of different countries without, however, being exhaustive. We will not touch upon criminal offences which, regardless of the prevention challenges they might provoke, turn out to be less challenging from a legal perspective, such as phishing and the possession of 3D-printed weapons.

On the other hand, the development of new technologies entails opportunities for law enforcement authorities. They can use advanced technologies in order to track down perpetrators faster. However, the use of those technologies might often be at odds with various human rights, such as the right to protection of private life and the freedom of expression, which are enshrined in the European Convention on Human Rights (hereafter: ECHR).

1 ENISA. (2022, January 20). Beware of digital ID attacks: Your face can be spoofed! Retrieved from https://www.enisa.europa.eu/news/enisa-news/beware-of-digital-id-attacks-your-face-can-be-spoofed.
2 We find that legislators all over Europe reach for the introduction of criminal provisions when new types of harmful or reprehensible behaviour arise. Whereas one might rightly doubt whether the introduction of criminal legislation is the most appropriate answer to those societal challenges, we will assess those new types of behaviour from a legal-technical perspective, without making any normative claims or going into a more philosophical debate.
3 Art. 7 ECHR states that no one shall be held guilty of any criminal offence on account of any act or omission which did not constitute a criminal offence under national or international law at the time when it was committed.
4 CoE. (2001, November 23). Convention on Cybercrime. Retrieved from https://rm.coe.int/1680081561.


The second part of this chapter will, therefore, be dedicated to new technologies and the criminal procedure and human rights concerns they provoke. Topics that will be covered are the legal grounds of open-source investigations, decryption orders, and the lawfulness of data-driven investigations. The same remark applies here: as a supra- or international criminal procedure does not exist, the chapter will refer to examples from different national legislations, especially of CoE Member States, and, of course, the relevant case law of the European Courts.

This book chapter is dedicated, and limited, to some pressing questions of substantive criminal law and criminal procedure law in the strict sense. First, it will, therefore, not touch upon other legal domains, such as data retention, disinformation, cyberwarfare, and others, although common grounds are certainly conceivable. Second, besides the above-mentioned legal questions, cybercrime also challenges different fundamental principles of substantive criminal law and criminal procedure, such as the territoriality principle, which means that, in principle, states can only prosecute criminal offences that take place within their territory. As cybercriminals operate without being hindered by national borders, questions arise on the localisation of criminal offences and the competent jurisdiction. Due to the constraints of this chapter, we do not touch upon all those challenges. Third, as this area of the law is quickly evolving, we will focus on recently adopted legislation and ongoing debates in case law and legal doctrine. We will not go into detail about planned reforms, some of which keep dragging on. Fourth and finally, we will not address legal-philosophical questions about the (non)sense of criminal law and punishment in a digital world.

2. SUBSTANTIVE CRIMINAL LAW

2.1 Typology of Cybercrime

The term cybercrime typically covers a wide range of criminal offences or illegal conduct by both individuals and groups against computers, computer-related devices, or information technology networks, as well as traditional crimes that are facilitated through the use of the internet and/or technology (Donalds & Osei-Bryson, 2019). Providing a single clear and precise definition, however, is no mean feat. As a matter of fact, the process within the United Nations to debate and draft a Cybercrime Treaty has been hindered by a lack of consensus among Member States about what constitutes 'cybercrime' (Rodriguez & Baghdasaryan, 2022). Both academic and institutional definitions of cybercrime tend to be either overly broad and vague, including any harmful behaviour that involves a computer in some way, or too reductive to be of any use when trying to gain a comprehensive understanding of the concept (Phillips et al., 2022). In addition, any definition risks becoming outdated relatively quickly due to the ever-evolving nature of cybercrime. As such, instead of referring to a single definition, it seems preferable to employ a classification system of cybercrime for the sake of this book chapter, as it allows for more adaptability. Here too, however, there are several options for categorisation.

A first approach can be found in the CoE's Cybercrime Convention, which distinguishes between five types of offences: (1) offences against confidentiality, (2) copyright-related offences, (3) content-related offences, (4) computer-related offences, and (5) dissemination of racist and xenophobic material through computer systems. While this typology is certainly the most widely adopted classification system, the criteria to distinguish between the different categories are not exactly consistent.

Whereas the first three categories focus on the object of legal protection, the last two are rather related to the method used to commit the crime. As a result, the different categories can sometimes overlap.

Another approach is to categorise cybercrime according to the victimisation associated with the offence (Correia, 2019; Yar, 2006). This type of classification system goes beyond the cybertechnology aspect and allows us to take a closer look at the relationships between cybercriminals and their victims. As such, a distinction could be made between crimes against property (e.g. hacking, online fraud, IP theft, etc.), crimes against morality (e.g. deepnudes, non-consensual dissemination of intimate images, etc.), crimes against the individual (e.g. cyberstalking), and crimes against the state (e.g. cyberterrorism, cyber espionage, etc.). While the distinction criterion is certainly more consistent, it seems somewhat outdated to frame offences such as the non-consensual dissemination of intimate images as an offence 'breaching laws on obscenity and decency', rather than recognising it as a breach of a person's sexual integrity and dignity.

A third classification system, and the one that will be adopted in this contribution, is the 'old crimes-new tools vs. new crimes-new tools' typology, sometimes referred to as the 'cyber-enabled vs. cyber-dependent' dichotomy (Phillips et al., 2022). Within this system, we can categorise cybercrime based on the intended target and the means of these forms of crime. The first category, 'old crimes-new tools', refers to all forms of crime that are further facilitated by digital tools, such as cyber harassment or hate speech. The latter category, 'new crimes-new tools', refers to criminal offences where the digital equipment is both the intended target and the means to accomplish these forms of crime, such as cryptojacking. We propose an extension of this dichotomy with a third category of 'cybercrime-as-a-service', which encapsulates the other two categories and refers to situations in which cybercriminals sell their cybercrime services to customers so that they can be deployed on a large scale (Hyslip, 2020).

2.2 Old Crimes, New Tools

2.2.1 Cyber harassment

2.2.1.1 Stalkerware
Stalkerware refers to a type of malware that is used to collect private data to enable a person to secretly spy on a targeted individual and perpetuate further abuse (Han et al., 2021). While the software in and of itself is not considered malicious – considering it is openly marketed and sold by developers as parental control or anti-theft applications – it is increasingly being used to stalk (ex-)partners as well as random individuals.

In 2021, Apple caused quite a stir after the release of their so-called AirTags (Matei, 2022). These are small tracking devices – barely larger than a 2-euro coin – meant to be attached to valuables or items that are easily lost, such as keys and wallets. The devices are connected to the owner's personal Apple account and continuously emit a Bluetooth signal, which can be picked up by any compatible Apple device within a radius of 100 metres. The AirTag's location information is uploaded to the cloud, allowing the owner to track the device through the Find My app. AirTags also make use of ultra-wideband radio technology, which measures the tag's distance relative to other devices. Consequently, an AirTag's location can be determined with centimetre-level accuracy.
While it was originally designed as a useful trinket, it quickly became clear that it is also an easy-to-use and affordable tool for stalkers. For example, in 2022 a UK man was sentenced to nine weeks in prison for stalking his ex-girlfriend after gluing an AirTag under her car bumper (Moore, 2022).
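The crowd-sourced tracking mechanism just described is easier to grasp with a small model. The sketch below is a deliberately simplified illustration of how a tag, nearby finder devices, and a cloud service might interact; the names (`Tag`, `FinderDevice`, `CloudService`) are our own, and the real Find My network adds rotating identifiers and end-to-end encryption that are omitted here.

```python
# Simplified model of a crowd-sourced item-tracking network (illustrative only;
# the real Find My network uses rotating identifiers and end-to-end encryption).

from dataclasses import dataclass, field


@dataclass
class Tag:
    """A small battery-powered tracker that only broadcasts an identifier."""
    tag_id: str

    def broadcast(self) -> str:
        # In reality this is a Bluetooth Low Energy advertisement.
        return self.tag_id


@dataclass
class CloudService:
    """Collects location reports keyed by tag identifier."""
    reports: dict = field(default_factory=dict)

    def upload(self, tag_id: str, location: tuple) -> None:
        self.reports.setdefault(tag_id, []).append(location)

    def locate(self, tag_id: str) -> list:
        # Only the owner is supposed to be able to query this.
        return self.reports.get(tag_id, [])


@dataclass
class FinderDevice:
    """A bystander's phone that relays sightings of nearby tags."""
    location: tuple  # (latitude, longitude)

    def relay(self, tag: Tag, cloud: CloudService) -> None:
        # The finder reports *its own* location as the tag's approximate position.
        cloud.upload(tag.broadcast(), self.location)


cloud = CloudService()
tag = Tag("tag-123")                                # e.g. glued under a car bumper
FinderDevice((51.0543, 3.7174)).relay(tag, cloud)   # passers-by unknowingly
FinderDevice((51.0538, 3.7201)).relay(tag, cloud)   # report the tag's position
print(cloud.locate("tag-123"))                      # the owner sees the movement trail
```

The sketch also makes the abuse scenario plain: the tag itself stores no location and needs no network connection of its own; it is the phones of uninvolved passers-by that unknowingly report the victim's movements to the owner.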

Other striking examples include face search engines, such as PimEyes,5 which allow their users to upload a picture and match it with other images in their database through the use of facial recognition. These databases usually contain millions of images that were scraped without permission from all across the web, which raises serious concerns about their compatibility with European data protection regulation. While the tools are often promoted as a way for law enforcement to track down criminals or for regular people to keep tabs on their online reputation, there are very few – if any – technical measures in place to prevent users from abusing the tool to stalk others. These tools essentially allow anyone to track someone down across the internet with a simple click of a button.

Over the last few years, we have seen a substantial rise in the adoption of dedicated national anti-stalking laws in the European Union (hereafter: EU) (Van Der Aa, 2018). As of 2022, only a handful of countries, such as Latvia, Lithuania, and Denmark, remain without specific stalking or harassment provisions. This can likely be attributed in large part to the coming into force of the CoE Convention on Preventing and Combating Violence Against Women and Domestic Violence in 2013.6 The existing legislation can be roughly divided into two categories.7 On the one hand, there are provisions that rely on an exhaustive list of possible stalking tactics (e.g. contacting the victim through specific means of communication, trying to come physically close to the victim, etc.), thereby greatly limiting the possibility of adapting to new forms of technology-facilitated stalking. On the other hand, the more general criminal definitions require the victim to have experienced distress or an uprooting of their daily life. While these provisions are more flexible, the question arises to what extent one can argue that a victim's peace of mind is seriously disturbed as long as the victim does not realise they are being digitally stalked. As a result, even the more general anti-stalking legislation might prove insufficient to properly address cases of cyberstalking. Aware of the issue, the European Commission launched a proposal for a new directive on combating violence against women and domestic violence in 2022.8 If that proposal is adopted, Member States will be required to explicitly criminalise various forms of cyberstalking. Those offences would include the use of ICTs to place someone under constant surveillance, without their permission, in order to monitor their activities and movements.

Nevertheless, it should be noted that effective criminalisation is necessary, but not sufficient on its own, in the fight against gender-based cyber violence. A total ban on the development of technologies that can also be used as stalkerware is unrealistic. Therefore, companies bringing such technologies to the market should be aware of the potential for misuse and make every effort to limit it. A great responsibility rests on their shoulders to pay attention to the ethical aspects when developing such products.

5 Face recognition search engine and reverse image search. PimEyes. (n.d.). Retrieved November 8, 2022, from https://pimeyes.com/e.
6 CoE. (2011, May 11). Convention on Preventing and Combating Violence against Women and Domestic Violence. Retrieved from https://www.refworld.org/docid/548165c94.html.
7 Legal definitions in the EU Member States. European Institute for Gender Equality. (n.d.). Retrieved November 10, 2022, from https://eige.europa.eu/gender-based-violence/regulatory-and-legal-framework/legal-definitions-in-the-eu?vt%5B%5D=127.
8 Proposal for a Directive of the European Parliament and of the Council on combating violence against women and domestic violence, COM/2022/105 final, March 8, 2022.

2.2.1.2 Doxing
One common tactic used in online harassment is doxing. The term doxing derives from the phrase 'dropping dox' or 'dropping documents' (Douglas, 2016; Steffens, 2020). It refers to the search for and online publication of personal information about an individual, typically with the intent to harass, intimidate, humiliate, threaten, or punish the victim (Douglas, 2016). In essence, doxing involves the release of documentary evidence of identity knowledge to remove some degree of anonymity from the identified individual. From a technical perspective, doxing is usually quite unremarkable. Perpetrators will generally rely on publicly available sources, such as internet search engines, social media, and online databases.9 For instance, reverse mobile phone lookup services allow perpetrators to discover inter alia the city or state associated with a mobile phone number. If this information is then combined with visual clues about locations in photos posted on social media, perpetrators will already have a good idea of the general area a victim moves in.

Depending on the type of information that is released, we can distinguish three types of doxing (Douglas, 2016). The first involves the release of identifying information about a person who had previously remained anonymous or was known by a pseudonym. In this regard, the question arises whether there is indeed a legal right to online anonymity (HRC, 2015; Moyakine, 2016). On the one hand, online anonymity is often used as a cover for trolls and cyberbullies to post vile, false and damaging comments online. In particular, in cases where these accounts garner a considerable following, we can ask ourselves whether it is not in the public interest to reveal the identity of an individual who spreads hate speech at such a level. On the other hand, it is clear that online anonymity may be essential for individuals to exercise their civil and political rights in situations where they need to remain unidentified.

The second type of doxing involves the release of specific information that allows a victim to be physically located. This includes someone's home address, work address, licence plate information, etc. This information often follows from deanonymisation and can make the victim more vulnerable to physical harassment. Politicians, journalists, and activists – in particular female ones – from all over the world have been confronted with such situations of doxing (Clapson, 2022; Sleigh, 2022).10

The third and final type of doxing releases information with the express intent to undermine the victim's credibility, reputation and/or character. The goal is to shame the victims for supposed hypocrisy or any non-conforming social behaviour. An excellent example is the concept of non-consensual dissemination of intimate images (hereafter: NCII), more commonly known under its misnomer 'revenge porn'. Research seems to suggest that the phenomenon is very much on the rise (Carter, 2022). While not all cases of NCII necessarily have a doxing aspect to them, the two often go together. Oftentimes it is assumed that the images or videos are distributed by a disgruntled (ex-)partner who aims to humiliate the victim in retaliation for ending a relationship. However, perpetrators are not always (ex-)partners, nor are they always motivated by revenge.
In some cases, for instance, hackers will breach personal accounts and steal private information and nude images from unsuspecting victims, only to repost them on social media and/or dedicated websites. It is a continuation of the long-standing practice of 'slut-shaming', whereby perpetrators post women's intimate images 'because they know it will make them unemployable, undateable, and at risk for sexual assault' (Citron, 2014).

9 Kaspersky. (n.d.). What is doxing – Definition and explanation. Retrieved from https://www.kaspersky.com/resource-center/definitions/what-is-doxing.
10 See UNESCO. (2021). The chilling: Global trends in online violence against women journalists. Retrieved from https://unesdoc.unesco.org/ark:/48223/pf0000377223.

Oftentimes the act of doxing someone is, as such, not illegal, as it tends to fall outside the scope of criminal laws related to blackmail, defamation, harassment, and stalking (Douglas, 2016). Unlike blackmail, for instance, doxing does not necessarily involve elements of extortion or coercion. Doxing is usually intended to punish or intimidate a victim, rather than to make the victim comply with specific demands. Doxing also differs from defamation in that it does not rely on the revelation of (false) information that is damaging to the reputation of the victim. Finally, while some cases of targeted doxing could potentially expose the victim to physical harassment and/or stalking, it is uncertain whether the mere release of private information would qualify as such.

For this reason, some countries have started introducing legislative proposals to enable the prosecution of doxing. In France, for instance, a provision against doxing was adopted following the murder of Samuel Paty in 2020. Article 223-1-1 of the French Criminal Code now criminalises the act of revealing, disseminating, or transmitting, by any means whatsoever, information relating to the private, family, or professional life of a person which makes it possible to identify or locate them, for the purpose of exposing them, their family or their possessions to a direct risk of harm. The new offence is punishable by imprisonment of up to three years. In 2021, a legislative proposal was submitted to the Belgian Parliament, largely copying the new French criminalisation.11 The proposed offence only covers a risk of harm to persons, not goods. In addition, the provision specifies that this criminalisation may not lead to a violation of the freedom of information. The Netherlands has followed suit and has submitted its own legislative proposal with the intent to criminalise the distribution and forwarding of personal data aimed at scaring someone, seriously inconveniencing them or hindering them in the course of their profession.12 If the proposal is approved, perpetrators may face a maximum prison sentence of up to one year.

2.2.2 Hate speech
Whereas hate speech is not exclusively committed online, social media platforms have turned out to be environments where hate speech can flourish. In its 2019 Plan of Action on Hate Speech, the United Nations defined hate speech in general as any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor.13

11 Wetsvoorstel tot beteugeling van 'doxing', October 6, 2021. Retrieved from https://www.lachambre.be/FLWB/PDF/55/2235/55K2235001.pdf.
12 Wijziging van het Wetboek van Strafrecht, het Wetboek van Strafrecht BES, het Wetboek van Strafvordering en het Wetboek van Strafvordering BES in verband met de strafbaarstelling van het zich verschaffen, verspreiden of anderszins ter beschikking stellen van persoonsgegevens voor intimiderende doeleinden (strafbaarstelling gebruik persoonsgegevens voor intimiderende doeleinden), 36 171, nr. 2 (2022). Retrieved from https://www.tweedekamer.nl/downloads/document?id=2022D30685.
13 UN. (2019). United Nations Strategy and Plan of Action on Hate Speech. Retrieved from https://www.un.org/en/genocideprevention/documents/UN%20Strategy%20and%20Plan%20of%20Action%20on%20Hate%20Speech%2018%20June%20SYNOPSIS.pdf.

However, not all forms of hate speech constitute behaviour that is punishable under criminal laws. Definitions in international instruments criminalising hate speech are often much narrower, focusing on outlawing incitement to hatred, violence, and discrimination.14 Although hate speech can target all kinds of people or groups, most national criminal law provisions in Europe are aimed at fighting online racism, inspired by two important international pieces of legislation. In 2003, an Additional Protocol was added to the Cybercrime Convention in order to require Member States to criminalise acts of a racist and xenophobic nature committed through computer systems.15 Five years later, the European Union adopted a Framework Decision on combating certain forms and expressions of racism and xenophobia by means of criminal law.16 The Framework Decision urged Member States to criminalise, amongst others, publicly inciting violence or hatred directed against a group of persons or a member of such a group, and publicly denying crimes of genocide, crimes against humanity and war crimes. Whereas the Protocol aims specifically at racist offences committed by means of computer systems, the Framework Decision is technology-neutral, aiming at both offline and online racism. If it were up to the European Commission, the list of 'EU crimes'17 would be extended, so that the European Union could also establish minimum rules concerning the definition and sanctioning of hate speech and hate crime on the basis of gender, sexual orientation, disability and age.18

The above-mentioned criminal provisions that exist in most European countries focus on the prosecution of the author of criminally illegal hate speech. However, this is not always feasible in practice. First of all, perpetrators, also referred to as trolls, often manage to hide their real identities while posting malicious comments for the purpose of causing conflicts on the internet (Buckels et al., 2014; Santos et al., 2022), making use of, amongst others, virtual private networks, proxy services, and peer-to-peer networks.19 Second, individual criminal responsibility falls short when the collective aspect is exactly what makes behaviour reprehensible or harmful, e.g. in the case of dogpiling. In an online environment, this term refers to a situation whereby a large group of users within a certain online space bombard victims with a barrage of offensive, insulting, and threatening messages. The goal is to overwhelm the target in order to silence, discredit, humiliate, or force an apology out of them. In some cases, dogpiling attacks can be traced back to specific extreme communities or individuals who (in)directly encourage their followers to harass certain people. More often than not, however, people will deny having ever taken part in a dogpiling attack.

14 UN. (2019). Promotion and protection of the right to freedom of opinion and expression. Retrieved from https://documents-dds-ny.un.org/doc/UNDOC/GEN/N19/308/13/PDF/N1930813.pdf?OpenElement.
15 CoE. (2003, January 28). Additional Protocol to the Convention on Cybercrime, concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems. Retrieved from https://rm.coe.int/168008160f.
16 Council. (2008, November 28). Framework Decision nr. 2008/913/JHA on combating certain forms and expressions of racism and xenophobia by means of criminal law. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32008F0913&from=EN.
17 Article 83 of the Treaty on the Functioning of the European Union.
18 Retrieved from https://www.europarl.europa.eu/legislative-train/theme-a-new-push-for-european-democracy/file-hate-crimes-and-hate-speech.
19 Human Rights Council. (2015). Report of the Special Rapporteur on the promotion and the protection of the right to freedom of opinion and expression. Retrieved from https://www.undocs.org/Home/Mobile?FinalSymbol=A%2FHRC%2F29%2F32.

From where they stand, they are simply critiquing, offering their opinion, or calling out behaviour that they deem 'bad'. They perceive themselves as one of several individuals offering 'valid criticism' worthy of individual responses. Regardless of whether a dogpiling attack is a coordinated effort or not, to the victim a dogpile will usually feel like a single wave of hate directed right at them (Jhaver, 2018). As such, it is difficult to pinpoint who can be held responsible in such scenarios. In this regard, we can ask ourselves whether our current criminal justice system is at all adjusted to deal effectively with this type of harmful behaviour.

As a result of those limitations of individual criminal liability, there is a current shift towards holding internet intermediaries involved in the distribution of hate speech (criminally) liable. In 2021, the European Union adopted a new regulation on countering the distribution of terrorist content online.20 Terrorist content is considered to include material that incites the commission of a terrorist offence, where such material, directly or indirectly, such as by the glorification of terrorist acts, advocates the commission of terrorist offences, thereby causing a danger that one or more such offences may be committed. Upon the order of a competent authority, a hosting service provider must remove terrorist content or block access to it in all Member States within one hour of receiving the removal order. Hosting service providers must also take proactive measures against the dissemination of terrorist content. Internet intermediaries are, however, required to retain the content for six months so that potential evidence is not lost and so that it can be restored in case the removal was not justified (Bellaert, 2022). Moreover, in 2022, the European Union adopted the Digital Services Act, which withdraws the fundamental prohibition on internet intermediaries monitoring internet content and increases the obligations of internet intermediaries when it comes to the fight against illegal and harmful content online.21

The shift is also noticeable in the case law of the European Court of Human Rights (hereafter: ECtHR). In 2015, the ECtHR seemed to have limited the liability of intermediaries to large, professionally managed and commercial internet news portals, which publish news articles of their own and invite readers to comment on them (Brunner, 2016).22 The Court, however, appears to have abandoned that view. Worth mentioning in that regard is the case of the French politician Sanchez, who posted a message about a political opponent on his Facebook profile.23 The post provoked several hateful and Islamophobic reactions. Having been sentenced to a criminal fine for failing to remove those reactions, Sanchez argued that his right to freedom of expression had been violated. Although politicians in principle enjoy a broad interpretation of their right to freedom of expression, the ECtHR found that the Facebook posts clearly incited hatred and violence. Furthermore, by making his profile accessible to everyone, Sanchez accepted a duty to monitor the reactions to his posts. As a politician, he thus carried greater responsibility than an average Facebook user. As a result, the criminal conviction for his failure to promptly remove the reactions did not violate his right to freedom of expression according to the ECtHR, as it pursued the legitimate aim of protecting the rights of third parties (Lemmens, 2022).

20 Eur. Parliament and Council. (2021, April 29). Regulation nr. 2021/784 on addressing the dissemination of terrorist content online. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32021R0784.
21 Eur. Parliament and Council. (2022, October 19). Regulation nr. 2022/2065 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act). Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv%3AOJ.L_.2022.277.01.0001.01.ENG&toc=OJ%3AL%3A2022%3A277%3ATOC.
22 ECtHR. (2015, June 16). Delfi AS v Estonia. Nr. 64569/09.
23 ECtHR. (2021, September 2). Sanchez v France. Nr. 45581/15.

2.2.3 Sexual criminal offences

2.2.3.1 Cyberflashing
Another recent criminal phenomenon, referred to as cyberflashing, consists of 'sending unrequested and imposed sexual pictures using dating apps, message apps or texts, or using Airdrop (a mix of Bluetooth and Wi-Fi, creating a two-way channel between phones being less than 10 metres away) or Bluetooth'.24 It involves both isolated incidents of sexual exposure and continued abusive conduct, such as (cyber)stalking, street harassment, grooming and domestic violence contexts (McGlynn & Johnson, 2021). In some cases, the images are accompanied by threatening or abusive language with the aim of causing distress to the people involved. As the smartphone is considered to be a strictly personal environment, cyberflashing can be experienced as a privacy violation, as well as a violation of sexual integrity (Morten Birk, 2020).

Despite cyberflashing essentially being the online equivalent of the creepy exhibitionist in a long raincoat who exposes themselves to people in the street, countries still seemingly struggle to bring it within existing offences. Some countries, such as Scotland and Ireland, have broadly formulated sexual offences legislation that can encompass emerging means of perpetration. However, since these provisions were sometimes designed with the intention of combating other forms of offending, it remains largely unclear whether they cover cyberflashing (McGlynn & Johnson, 2021). For instance, in Ireland the Criminal Law (Sexual Offences) Act 2017 makes it illegal to intentionally engage in

[…] any [offensive] behaviour of a sexual nature which, having regard to all the circumstances, is likely to cause fear, distress or alarm to any person who is, or might reasonably be expected to be, aware of any such behaviour.25

While this provision could in theory enable the prosecution of cyberflashing, it was originally introduced to replace previous offences related to indecent exposure. The provision thus seems focused on protecting public morality, rather than the individual right to sexual autonomy and bodily integrity. As such, it is not always clear how such provisions can be applied to the prosecution of cyberflashing. Another obstacle might be that indecent exposure provisions require that the offence is committed in public, whereas an image sent to someone's smartphone probably does not meet that requirement.

Other countries that do not have such broad offences at their disposal will usually opt for a tailored provision specifically targeted at combating cyberflashing. For instance, the Dutch Criminal Code contains a specific offence criminalising whoever knows or has serious reason to suspect that an image is offensive to someone's morality and still sends it other than at their request.26 In 2022, England and Wales announced plans to include a specific cyberflashing offence in the upcoming Online Safety Bill, with perpetrators facing up to two years' imprisonment and/or a fine.27

24 Retrieved from https://rm.coe.int/the-relevance-of-the-ic-and-the-budapest-convention-on-cybercrime-in-a/1680a5eba3.
25 Section 45(3) jo. Section 45(6) of the Criminal Law (Sexual Offences) (Ireland) Act 2017.
26 Art. 240 Dutch Criminal Code.

Such specific provisions have the advantage that they clearly criminalise this behaviour, thus sending a strong signal that cyberflashing constitutes unacceptable conduct and recognising the harmful effects on victims. However, there is also a risk that specific provisions are both over- and under-inclusive. This might result in undesirable censorship of, for instance, artistic pictures.

2.2.3.2 Virtual sexual assaults
The concept of 'virtual sexual assault' covers a wide spectrum of behaviour, ranging from purely verbal harassment (e.g. unwanted sexualised comments) to environmental harassment (e.g. cyberflashing, forcing someone to watch sexual or violent content, etc.) to physical harassment (e.g. unwanted touching, obstructing movement, obscene gestures, etc.).28 In essence, it includes any form of non-consensual sexual behaviour performed by virtual characters on one another, acting through representations in a virtual environment (Danaher, 2019). Although these interactions take place in a virtual world, they can have real-world effects on victims (Dibbell, 1998). In 2022, a SumOfUs29 researcher reported being sexually assaulted on Meta's virtual reality platform Horizon Worlds.30 She recalls being led into a private room at a party where she was raped by a user while others watched and passed around a virtual bottle of vodka. The researcher noted that when a user is touched by another in the Metaverse, the hand controllers vibrate, creating 'a very disorienting and even disturbing physical experience during a virtual assault'.

This concept can be further categorised on the basis of the virtual characters involved in the assault: the conceptualisation changes depending on whether the act involves a human perpetrator, a human victim, or both. Virtual sexual assaults involving only virtual victims should obviously be distinguished from cases involving human victims. In 2019, for instance, a controversial choose-your-own-adventure video game called 'Rape Day' was launched. The game allowed players to play as a 'menacing serial killer-rapist' and encouraged them to rape women to progress the plot. In its original copy, the game included scenes depicting the rape of babies, necrophilia, and incest, all of which were later removed.31 While this game, and those who revel in committing acts of virtual sexual assault, could be considered morally reprehensible, it would probably be an overreach to criminalise this type of virtual sexual assault, as no human victims are physically or psychologically harmed by the act (Danaher, 2019). Having said that, there can be exceptions to the rule. For example, many countries have already criminalised the possession, viewing, sharing, and creation of virtual child sexual abuse images – even though such imagery depicts fictitious minors. Such a prohibition could easily be extended to cover the development or playing of video games which include virtual child sexual abuse imagery.

In addition, we can make a distinction based on the mode of virtual interaction involved. Virtual sexual assaults that take place in text-based virtual environments or where avatars are controlled via a mouse and/or keyboard differ from those that take place through a VR headset or other highly immersive interactions.

27 'Cyberflashing' to become a criminal offence. (2022). Gov.uk. Retrieved November 10, 2022, from https://www.gov.uk/government/news/cyberflashing-to-become-a-criminal-offence.
28 Retrieved from http://www.lindsayblackwell.net/wp-content/uploads/2019/09/Harassment-in-Social-Virtual-Reality-CSCW-2019.pdf.
29 SumOfUs is a global non-profit advocacy organisation which focuses on campaigns to hold corporations accountable on issues such as climate change, discrimination, human rights, animal rights, etc.
30 Retrieved from https://www.sumofus.org/images/Metaverse_report_May_2022.pdf.
31 Retrieved from https://www.bbc.com/news/blogs-trending-47484397.

Furthermore, more and more developers are looking to integrate haptic technology, which uses tactile sensations to simulate the sense of touch, into their virtual environments. While head-to-toe haptic body suits are not yet available, there are already highly advanced haptic gloves, body armour and mouth haptics that can simulate a realistic sense of touch on different body parts. If haptic technology were involved in virtual assaults, there would also be physical effects on the victim's body – further blurring the line between virtual sexual assault and real-world sexual assault. Once again, we can ask ourselves to what extent developers of virtual or mixed reality technology can be held responsible for the sexual misconduct of their users. To the extent that developers make useful or necessary contributions to the virtual sexual assault, it is not excluded that they could be criminally liable as co-perpetrators or accomplices.

2.2.3.3 Deepnudes
Another phenomenon that we will increasingly face in the near future is deepfakes. The term 'deepfake' is a contraction of deep learning and fake and refers to manipulated or synthetic digital content. The technology to manipulate image and audio material is getting better, more user-friendly, and more accessible. Within a few years, an estimated 90 percent of the content on the internet will be synthetic.32 Many applications may be innocuous; others raise purely ethical questions, such as bringing dead people to life in a deepfake video for the sake of a reconstruction in a criminal investigation. However, without a doubt, the technology is, and will increasingly be, used for malicious purposes. For instance, the European cybersecurity agency, ENISA, warned that faces are now being spoofed using deepfake technologies in order to circumvent identity checks.33

One of the most common malicious applications of deepfakes is the deepnude: a manipulated nude or sex image. Such images can be spread to smear someone's reputation, to scare someone, or to blackmail them (Isik, 2022). Making or distributing deepnudes can be punished as extortion or stalking, depending on the specific circumstances of a case. It should also be mentioned that nowadays, several countries in Europe and beyond have adopted legislation to criminalise the non-consensual distribution of intimate images, sometimes inadequately referred to as 'revenge porn' (Powell & Henry, 2017). It remains to be seen whether those more recent criminal offences can fully cover the harmful spreading of deepnudes (Goudsmit, 2021).

2.3 New Crimes, New Tools

Although the 'new crimes, new tools' category includes many different offences, such as ransomware, distributed denial of service (DDoS) attacks, and hacking, within the scope of this contribution we will limit ourselves to the example of cryptojacking, which involves cryptocurrencies. A cryptocurrency is a digital currency that relies on cryptography to secure its transactions, which makes it nearly impossible to counterfeit or double-spend. Cryptocurrencies usually run on a distributed public ledger based on blockchain technology and are not issued by any central authority, rendering them theoretically immune to government regulation.

32 Retrieved from https://www.digitalks.eu/synthetische-media.
33 ENISA. (2022, January 20). Beware of digital ID attacks: Your face can be spoofed! Retrieved from https://www.enisa.europa.eu/news/enisa-news/beware-of-digital-id-attacks-your-face-can-be-spoofed.

In order to obtain cryptocurrency, one can either purchase coins from other coin-holders or cryptocurrency exchanges, or create new coins through a so-called 'mining process'. The latter involves using computer power to solve cryptographic hash puzzles in order to verify blocks of transactions on a blockchain network (a minimal sketch of such a puzzle follows at the end of this section). Over the years these puzzles have grown more and more complex, increasing the demand for computing power and, by extension, energy.34 As such, people have started to look for ways to mine cryptocurrency without incurring massive energy or hardware costs, one of which is 'cryptojacking'.

Interpol defines cryptojacking as a type of cybercrime 'where a criminal secretly uses a victim's computing power to generate cryptocurrency' (CCB, 2018). This usually occurs when the victim unwittingly installs 'coin miner' software, for instance by clicking on a phishing link or visiting an infected website. Once the program is installed, it will mine cryptocurrencies in the background. While victims may notice slower performance, lags, overheating, or excessive power consumption, the software typically does not damage the victim's computer or data.35 For an individual, this may simply be an annoyance, but organisations with multiple infected devices or servers may incur huge costs. Besides the obvious increase in energy consumption, cryptojacking software ages the hardware by overwhelming processing cores and batteries. In addition, server shutdowns can cause significant losses due to business interruption, as well as deal a damaging blow to the company's reputation.36 This can be considered data or system interference within the meaning of the Cybercrime Convention. Even if we disregard these costs, the miner is using the victim's computer resources for criminal purposes without their knowledge or consent.

The question arises whether cryptojacking can be considered a criminal offence. At first glance, the answer seems to be yes. In 2019, two Romanian cybercriminals were convicted in the United States of wire fraud after infecting over 400,000 computers with cryptojacking malware.37 Most EU Member States have similar cybersecurity laws in place to crack down on cases of IT fraud or hacking, inspired by the Cybercrime Convention. Therefore, a person who gains access to an IT system with fraudulent intent and obtains an unlawful economic benefit by installing cryptojacking malware could likely be prosecuted under those laws. However, a decision of Japan's Supreme Court is worth mentioning in this regard. It ruled that cryptojacking scripts that make use of JavaScript code cannot be considered malware. It was argued that the cryptojacking JavaScript code that ran in the background 'without the user's permission' was no different from everyday targeted advertisements or services such as Google Analytics. The Court thus seemed to suggest that as long as cryptojackers remain brazen and are not duplicitous about the code they run, the use of cryptojacking JavaScript code is not illegal. The decision is quite remarkable, as Japan has ratified the Cybercrime Convention.

34  Retrieved from: https://digiconomist.net/bitcoin-energy-consumption.
35  Retrieved from: https://www.imperva.com/learn/application-security/cryptojacking/.
36  Retrieved from: https://www.imperva.com/learn/application-security/cryptojacking/.
37  Two Romanian Cybercriminals Convicted of All 21 Counts Relating to Infecting Over 400,000 Victim Computers with Malware and Stealing Millions of Dollars. (2019). The United States Department of Justice. Retrieved November 8, 2022, from https://www.justice.gov/opa/pr/two-romanian-cybercriminals-convicted-all-21-counts-relating-infecting-over-400000-victim.

2.4 Cybercrime-as-a-Service

Cybercrime-as-a-Service (CaaS) is an umbrella term used to describe an organised business model in which cybercriminals, malware developers, and other threat actors sell their services to customers on the dark web (Hyslip, 2020). It is a cross-cutting phenomenon that encapsulates all other cybercrime subcategories. Many traditional cybercrimes such as phishing, ransomware, and distributed denial of service attacks are now available as a service on an unprecedented scale (Europol, 2014). As a result, cybercriminals are no longer required to possess any technical knowledge to launch a cyberattack, making this type of crime much more accessible. One of the challenges, however, is that CaaS suppliers are usually organised as legitimate businesses and that the creation of malware is not in itself always illegal (Hyslip, 2020). For instance, computer scientists, security researchers, or ethical hackers might use malware to find vulnerabilities in computer systems. Article 6 of the Cybercrime Convention requires the criminalisation of the production, sale, procurement for use, import, or distribution of computer programs designed for the purpose of committing cybercrimes. While it is probably hard to argue that certain types of malware, such as ransomware, could be used for legitimate purposes, it might be hard to prove the malicious intent of CaaS suppliers with regard to all the services that they provide. As a result, they are often let off the hook. In addition, the prosecution of cybercriminals often necessitates internationally coordinated action, which raises questions of territorial jurisdiction (Europol, 2021).

3. CRIMINAL PROCEDURE LAW

New technologies affect not only the way in which crimes are committed, but also the way in which perpetrators are tracked down and prosecuted. In this part of our contribution, we will focus exclusively on the collection of digital evidence, which is highly impacted by the development of new technologies. Law enforcement authorities resort to new investigative techniques, which raises questions about the safeguards for the protection of the different human rights at stake, such as the right to a fair trial and the right to protection of private life and personal data. The challenges posed by the digitisation of criminal trials will not be touched upon (Loo, 2022).

3.1 Procedural Foundations in the Fight against Cybercrime

New possibilities for committing crimes require new (legal) tools for tracking down perpetrators. In cases of cyber-enabled crime, an IP address is often the only lead to identifying a perpetrator. Aware of the need for a solid legal framework, the CoE adopted the Convention on Cybercrime as early as 2001. This Convention has been ratified by 67 countries, some of which are located outside Europe, such as the United States and Japan. Besides a list of, at the time, new criminal offences (supra), the Convention contains a number of procedural provisions intended to facilitate criminal investigations and proceedings. The investigative measures introduced by the Convention are intended to prosecute the criminal offences mentioned in the Cybercrime Convention and those committed by means of a computer system, and to collect evidence of criminal offences in electronic form (art. 14). Some of those provisions put in place cooperation duties, such as the expedited preservation of stored computer data

(art. 16), the expedited preservation and partial disclosure of traffic data (art. 17), and the production order (art. 18). Other provisions invest law enforcement authorities with the power to search and seize stored computer data (art. 19), to collect traffic data in real time (art. 20), and to intercept content data (art. 21).

In most international and national legislation, codes of criminal procedure mainly focus on evidence gathering and thus on the collection of data, whereas less attention is paid to the subsequent processing of that data. However, in 2016 the European Union adopted Directive 2016/680, also referred to as the Law Enforcement Directive, in order to introduce the principles of data protection and data subject rights in the context of personal data processing activities by law enforcement authorities.38 One of the provisions of this Directive entails the obligation to carry out a data protection impact assessment when a type of processing of personal data, in particular using new technologies, is likely to result in a high risk to the rights and freedoms of natural persons (Marquenie, 2017). This Directive is of the utmost importance when it comes to many of the investigative measures mentioned below.

3.2 Open-Source Investigations

In 2022, law enforcement authorities caught a notorious Italian mafia boss by linking images on Google Street View to the Facebook page of a restaurant.39 This is only one of the spectacular examples of what is called open-source investigation (OSINT). Law enforcement authorities can of course manually search the internet for the purposes of revealing the truth in criminal cases. However, they are increasingly making use of automated tools, such as web scraping or web crawling technologies, with a view to retrieving relevant information from social media platforms and other publicly available websites. This is also called data mining. Those practices are likely to entail an interference with, amongst others, the right to privacy (art. 8 ECHR), which includes a reasonable privacy expectation even in public places (Koops, 2013; Oerlemans, 2017). For an interference to be justified, an accessible and foreseeable legal basis is required.40 In most countries, however, law enforcement authorities rely on their general patrolling powers in public places without a specific legal framework providing the necessary safeguards.41 In the Netherlands, an attempt is being made to include a provision in the Code of Criminal Procedure on the systematic collection of personal data from publicly accessible

38  Eur. Parliament and Council. (2016, April 27). Directive nr. 2016/680 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016L0680&from=EN.
39  Giuffrida, A. (2022, January 5). Italian mafia fugitive arrested in Spain after Google Street View sighting. The Guardian. Retrieved from https://www.theguardian.com/world/2022/jan/05/italian-mafia-fugitive-arrested-in-spain-after-google-maps-sighting.
40  ECtHR. (2018, April 24). Benedik v Slovenia. Nr. 62357/14; ECtHR. (2010, September 2). Uzun v Germany. Nr. 35623/05.
41  E.g. Art. 3 Dutch Police Act (retrieved from: https://wetten.overheid.nl/BWBR0031788/2022-07-08) or Art. 26 Belgian Police Act (retrieved from: https://www.ejustice.just.fgov.be/cgi_loi/change_lg_2.pl?language=nl&nm=1992000606&la=N).

sources (Klaar, 2022). The public prosecutor would be able to issue an order to collect personal data from publicly accessible sources for a period of three months.42

Another question that arises in this regard is whether law enforcement authorities can make use of illegally obtained open-source data. On the dark web, for instance, databases containing personal data obtained from data breaches can easily be found. Can law enforcement authorities, for instance, search those databases for the password of the social media account of a suspect? It is rather doubtful that those practices comply with European data protection regulation, particularly in the absence of a solid legal framework. Some companies even openly collect personal data and offer services in violation of (European) data protection legislation. A famous example is Clearview AI, a database using facial recognition technology in order to identify people. For the purposes of the database, images of millions of people were collected without obtaining the necessary consent, thus clearly violating (European) data protection legislation (Jasserand, 2022; Rezende, 2020). Journalists discovered that, despite the illegal nature of the database, law enforcement authorities all over the world had been using it in order to identify suspects and victims.43 As a result, several privacy authorities cracked down on the police forces involved. In Sweden, for instance, an administrative fine was imposed on the police authority, in addition to an order to erase the personal data obtained from Clearview AI and to inform the data subjects.44 In Belgium, the Supervisory Body for Police Information came to a similar conclusion after the use of Clearview AI by the Belgian police was revealed. It moreover suggested that a clear legal basis should be put in place before those kinds of personal data processing activities could be resumed.45

3.3 Compelled Decryption

It is impossible to imagine our daily lives without information systems and devices, such as computers, smartphones, and smartwatches, on which we rely for all kinds of activities. As a result, they are often a treasure trove of information in ongoing criminal investigations. However, the use of (end-to-end) encryption to secure those information systems and devices renders the work of law enforcement authorities more difficult, which is also referred to as the 'going dark' problem (Koops, 2018). Cracking the encryption often turns out to be a challenge, as it is both a time- and resource-consuming activity. As a consequence, law enforcement authorities all over the world compel suspects to reveal their codes or to biometrically unlock their devices, while suspects face (additional) criminal sentences when they refuse to cooperate. The question has arisen whether compelled decryption entails a violation of the privilege against

42  Proposed Art. 2.8.8 Dutch Code of Criminal Procedure.
43  Mac, R., Haskins, C. & Pequeño IV, A. (2021, August 21). Police In At Least 24 Countries Have Used Clearview AI. BuzzFeed News. Retrieved from: https://www.buzzfeednews.com/article/ryanmac/clearview-ai-international-search-table.
44  European Data Protection Board. (2021, February 12). Swedish DPA: Police unlawfully used facial recognition app. Retrieved from: https://edpb.europa.eu/news/national-news/2021/swedish-dpa-police-unlawfully-used-facial-recognition-app_en.
45  L'organe de Contrôle de l'Information Policière. (2022, February 4). Rapport de contrôle relatif à l'utilisation de l'application Clearview AI par la police intégrée. Retrieved from: https://www.organedecontrole.be/files/.

self-incrimination, which is a crucial part of the presumption of innocence and the right to a fair trial.46

The privilege against self-incrimination, which encompasses the right to remain silent, aims to avoid miscarriages of justice by protecting suspects against improper compulsion by state actors (Lamberigts, 2022). The privilege applies to criminal proceedings with respect to all types of criminal offences, meaning that the prosecution has to prove that the suspect is guilty, while he or she is not expected to cooperate when it comes to the collection of evidence. In particular, the privilege protects against evidence that is obtained by coercion. Not only torture, but also facing a criminal sentence for not complying with a request from a law enforcement authority is considered coercion.47 According to the ECtHR, however, coercion can sometimes be allowed, depending on its nature and degree. This is the case for evidence existing independently of the will of the suspect. For instance, a suspect has to endure a minor interference with his or her physical integrity when blood, hair or urine samples are taken.48 In some cases, documents are not considered to exist independently of the will of the suspect. The suspect can thus refuse to hand them over, even though the case law remains hazy about pre-existing documents.49

In the case of compelled decryption, a distinction should be made between biometric decryption orders (e.g. using a fingerprint or a face or iris scan to unlock the suspect's information system) and other decryption orders, which force the suspect to reveal his or her password. Up to now, the ECtHR has not decided on the question of whether law enforcement authorities can compel suspects either way without violating the privilege against self-incrimination. As a result, legislation and case law go in all directions. In Germany, compelled decryption is not allowed. A draft aiming to introduce the power was discussed in Parliament but, even though the power would have been subject to several restrictions, it was not adopted. Some countries, however, have created a solid legal basis for decryption orders. In Norway, for instance, a legal basis for biometric decryption was added to the code of criminal procedure as an investigative power in 2017. Ever since, police authorities have been allowed to use force in order to compel a suspect to biometrically decrypt a locked device (Bruce, 2017). Whereas the Dutch Parliament initially refused to introduce compelled decryption, the Supreme Court approved the use of compelled biometric decryption in 2021 (van Toor, 2020). The Court considered the coercion needed to unlock a phone with the use of a fingerprint or a face scan to be proportionate (den Os, 2022).50 A legal basis for decryption orders was eventually introduced in the Dutch Code of Criminal Procedure in 2022, but it is limited to biometric decryption, i.e. fingerprints or face or iris scans.51 Finally, in some countries, such as the United Kingdom, the decryption order has been approved by the judiciary (Conings, 2018). In 2018, the French Constitutional Court approved compelled decryption, as it does not force a

46  Art. 6, § 2 ECHR; European Parliament and the Council. (2016, March 9). Directive on the strengthening of certain aspects of the presumption of innocence and of the right to be present at the trial in criminal proceedings. Retrieved from: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A32016L0343.
47  ECtHR. (1996, December 17). Saunders v United Kingdom. Nr. 19187/91, para 70.
48  ECtHR. (2006, July 11). Jalloh v Germany. Nr. 54810/00, para 102.
49  ECtHR. (1993, February 25). Funke v France. Nr. 10828/84, para 44.
50  Dutch Supreme Court. (2021, February 9). ECLI:NL:HR:2021:202.
51  Art. 558 Dutch Code of Criminal Procedure.

suspect to confess and does not violate the right to remain silent.52 In 2020, both the Belgian Constitutional Court and the Belgian Supreme Court, although in highly debated decisions, reached the same conclusion (Royer, 2020).

3.4 Data Driven Investigations

A couple of years ago, cryptophones became widely used in the underworld, especially within criminal organisations involved in international drug trafficking. In addition to built-in encryption software promising top-secret communication, cryptophones are designed with specific features, such as the possibility to send self-destructing messages and to use kill codes or emergency passwords to wipe the device clean when needed (Royer, 2021). Needless to say, the use of those cryptophones hinders the work of law enforcement authorities in ongoing criminal investigations. As a consequence, they joined forces and undertook several operations in order to secretly intercept (hundreds of) millions of messages that were sent via encrypted networks. The biggest operations of this sort so far were those carried out against, among others, the services EncroChat,53 ANOM,54 and Sky ECC.55 All these operations have in common that communication was secretly intercepted on a large scale, that the operations were carried out by different countries, implying the cross-border exchange of evidence, and that law enforcement authorities are not inclined to reveal much of the technical detail of those operations afterwards. It can nevertheless be presumed that hacking tools, such as malware installing a backdoor on the devices, were used in order to gain access to the servers on which the communication data was stored (Sommer, 2022).

The various data driven investigations have meanwhile led to a number of separate investigations and cases brought before the courts. In the majority of those cases, the arguments of the defence challenging the operations and the evidence are dismissed and suspects are convicted on the basis of the evidence that stems from the data driven investigation (Oerlemans, 2022). The German Federal Court of Justice, for instance, found that the conviction of a drug offender based on data that were transmitted from the French to the German authorities was valid (Wahl, 2022).56 The Dutch Supreme Court also confirmed a conviction based on hacking evidence, without granting the defence the right to access all the collected data.57 In France, the Constitutional Court ruled in turn that the fact that not all details of the

52  French Constitutional Court. (2018, March 30). Nr. 2018-696. Retrieved from: www.conseil-constitutionnel.fr/conseil-constitutionnel/francais/les-decisions/acces-par-date/decisions-depuis1959/2018/2018-696-qpc/decision-n-2018-696-qpc-du-30-mars-2018.150855.html.
53  Europol. (2020, July 2). Dismantling of an encrypted network sends shockwaves through organised crime groups across Europe. Retrieved from https://www.europol.europa.eu/media-press/newsroom/news/dismantling-of-encrypted-network-sends-shockwaves-through-organised-crime-groups-across-europe.
54  Europol. (2021, June 8). 800 criminals arrested in biggest ever law enforcement operation against encrypted communication. Retrieved from https://www.europol.europa.eu/media-press/newsroom/news/800-criminals-arrested-in-biggest-ever-law-enforcement-operation-against-encrypted-communication.
55  Europol. (2021, March 10). New major interventions to block encrypted communications of criminal networks. Retrieved from https://www.europol.europa.eu/media-press/newsroom/news/new-major-interventions-to-block-encrypted-communications-of-criminal-networks.
56  Bundesgerichtshof. (2021, March 2). Nr. 5 StR 457/21.
57  Dutch Supreme Court. (2022, June 28). ECLI:NL:HR:2022:900. Retrieved from: https://uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:HR:2022:900&showbutton=true&keyword=ECLI%3aNL%3aHR%3a2022%3a900.

operation were disclosed, in order to protect the confidentiality of national defence, did not violate the right to a fair trial.58 The operations are nevertheless controversial from a human rights perspective, and the admissibility of the evidence has been challenged by the defence on multiple occasions (Griffiths, 2022). At the time of writing, the ECtHR had not yet rendered a decision on the possible human rights infringements,59 but several cases have been submitted, the outcome of which will without any doubt prove crucial to the future of the cases stemming from the mass hack operations.60 In the following paragraphs, we give an overview of possible human rights concerns, both from the angle of Article 8 ECHR and that of Article 6 ECHR.

First of all, the lawfulness of the operations is questioned in light of the right to the protection of private life (Art. 8 ECHR). Whereas most countries have legislation requiring the authorisation of a judge in order to secretly intercept communication, it is argued that these rules are not tailored to the size of the data driven investigations, especially when it comes to the safeguards that apply to the processing of the huge amount of data after the interception (Galič, 2022). In other words, the proportionality of the mass gathering of data is questioned, especially since the interception of the millions of messages is often based on the suspicion that the service provider is a member of a criminal organisation. The question arises whether this amounts to a fishing expedition, as the suspicion against the individual users only arises at a later stage. When justifying the extent of the operations, law enforcement authorities argue that the above-mentioned cryptophone providers offered their services almost exclusively to criminals and criminal organisations, referring, among other things, to the cost of a subscription. One could wonder, however, where the line will be drawn, as the use of encryption becomes ever more widespread. Other service providers, for instance, offer privacy-preserving technologies, such as virtual private networks, which are used for both criminal and non-criminal purposes. As they facilitate cybercrime, Europol refers to those technologies as grey infrastructures.61 Common service providers, such as WhatsApp and Signal, are also building their systems in a more privacy-friendly manner, e.g. by using end-to-end encryption. In other words, how many presumed criminal users does an encrypted communication infrastructure need to have before it can be targeted by law enforcement authorities? Or conversely, can the mere fact of using an encrypted messaging application amount to a reasonable suspicion, needed, for instance, to justify pretrial detention (Art. 5 ECHR)? In the case Akgün v. Turkey, the ECtHR ruled that this was not sufficient. In that particular case, the applicant was placed in pretrial detention on suspicion of being a member of a so-called terrorist organisation, because he had been using an encrypted messaging application called ByLock. According to the ECtHR, the Turkish court ordering the pretrial detention did not have sufficient information on the nature of the

58  French Constitutional Court. (2022, April 8). Nr. 2022-987. Retrieved from: https://www.conseil-constitutionnel.fr/decision/2022/2022987QPC.htm.
59  It did, however, in cases of bulk interception by intelligence agencies, in which the ECtHR emphasised the need for ex post safeguards. ECtHR. (2021, May 25). Big Brother Watch and Others v The United Kingdom. Nr. 58170/13, 62322/14 and 24960/15; ECtHR. (2021, May 25). Centrum för Rättvisa v Sweden. Nr. 35252/08.
60  ECtHR. Yalinçinka v Turkey. Nr. 15669/20 and ECtHR. A.L. and E.J. v France. Nr. 44715/20 and 47930/21.
61  Europol. (2021). Internet Organised Crime Threat Assessment (IOCTA) 2021. Publications Office of the European Union, Luxembourg, 18. Retrieved from: https://www.europol.europa.eu/.

messaging application to conclude that this application was exclusively used by members of a so-called terrorist organisation.62

From the perspective of the right to a fair trial, the lack of transparency surrounding the operations is problematic. Law enforcement authorities refuse to disclose the details of the operations, aiming to protect their modus operandi or even national secrets. In Belgium, the Minister of Justice has been summoned – without success – to disclose the full criminal file of the Sky ECC investigation, as suspects were only given partial access to a daughter file that was extracted from the mother file containing all the intercepted data.63 Even if an operation is covered by a state secret, a minimum of transparency with regard to the technical details is required according to the French Supreme Court. The Court ruled that if a certificate of truthfulness, proving the accuracy of the evidence, has not been added to the case file, the evidence is not legal and cannot be used in court.64 As most of the abovementioned operations took place in a cross-border context, defence lawyers have also asked courts on multiple occasions to verify whether investigative measures were carried out in accordance with the legislation of the Member State concerned. The answer to that argument often refers to the principle of mutual recognition in the European Union, which prevents Member States from examining the regularity of each other's investigative measures (Schermer, 2022). Moreover, defence lawyers argue that because of the lack of transparency, the reliability of the evidence cannot be tested (Oerlemans, 2022). Without transparency about the hacking methods used, defendants are at risk of having fabricated evidence used against them. This argument will only grow in importance as artificial intelligence technologies are used to filter the huge amounts of messages (Quezada Tavárez, 2022). When assessing the right to an adversarial trial in cases involving bulk data collected by the prosecution, the ECtHR specified that the defendant does not have a right to access the entire (primary) dataset, but must have the possibility to access the data selected by the public prosecutor (the so-called secondary dataset).65

A final question relates to the probative value of the intercepted messages. In other words, can a judge rely solely on this type of evidence in order to convict a suspect, or is circumstantial evidence required? In an isolated decision, a Dutch court noted that the defence and the court itself had access to only a limited number of messages sent from the account of the suspect, all stemming from the same source, the Sky ECC operation. As a consequence, the content of the messages could not be tested. The court found this insufficient to convict the suspect, as other evidence supporting the suspect's guilt was not available.66

62  ECtHR. (2021, November 22). Akgün v Turkey. Nr. 19699/18.
63  X. (2022, September 30). Advocaten van drie verdachten uit Operatie Sky ECC dagvaarden minister Van Quickenborne. Vrt nws. Retrieved from: https://www.vrt.be/vrtnws/nl/2022/09/30/advocaten-van-drie-verdachten-uit-operatie-sky-ecc-dagvaarden-mi/.
64  Art. 230-1 French Code of Criminal Procedure; French Supreme Court. (2022, October 11). Nr. 21-85.148.
65  ECtHR. (2019, October 25). Rook v Germany. Nr. 1586/15, para 70; ECtHR. (2019, November 4). Sigurður Einarsson and others v Iceland. Nr. 39757/15, para 91–92.
66  Dutch Court Zeeland-West-Brabant. (2022, September 13). ECLI:NL:RBZWB:2022:5151. Retrieved from: https://uitspraken.rechtspraak.nl/inziendocument?id=ECLI:NL:RBZWB:2022:5151.

3.5 Increasing the Number of Cooperation Duties of Third Parties

Cybercrime has never stopped increasing since the adoption of the Cybercrime Convention (supra), and has even boomed during the COVID-19 pandemic.67 Moreover, new technologies, such as cloud computing, cryptocurrencies, and non-fungible tokens, have transformed in an extraordinary manner not only our society, but also the way in which crime is committed. Whereas law enforcement authorities traditionally had to collect evidence from the accused, nowadays digital evidence is often in the hands of third parties, such as service providers or, when it comes to financial assets such as cryptocurrencies, exchange or wallet providers. This leads to new challenges and legal questions.

First of all, the question arises whether third parties are obliged to retain certain information. Data retention is a relevant topic in different areas, more debated in some than in others. For instance, the Fifth Anti-Money Laundering Directive imposed information duties on virtual currency exchange providers and custodian wallet providers in order to make it more difficult to use cryptocurrencies for criminal purposes.68 Another example can be found in the telecom sector, where data retention has been a highly debated matter for three decades, given the privacy-invasive character of the bulk retention of communication metadata. An attempt by the European Union to deal with this question in a directive failed after an intervention by the Court of Justice.69 Ever since, national legislators have struggled with the issue, which has led to abundant case law from both the Court of Justice and national supreme courts.70

Furthermore, questions arise as to whether third parties can be compelled to comply with requests from law enforcement authorities to provide access to data, and under which conditions. A much-discussed case in this regard is the one in which Skype was convicted for refusing to cooperate with Belgian law enforcement authorities who wanted to tap the communications of a suspect sent via Skype. The company argued in vain that it could not cooperate due to the end-to-end encryption of its communication network. The Belgian Supreme Court ruled in a controversial decision that the company had to comply with the request, as it was offering its services to Belgian citizens.71 A proposal by the Belgian legislator to add a general backdoor for the purpose of law enforcement requests was eventually not included in the legislation. Besides, end-to-end encryption itself has often been challenged as illegal or

67  Europol. (2020, October 6). COVID-19 sparks upward trend in cybercrime. Retrieved from https://www.europol.europa.eu/media-press/newsroom/news/covid-19-sparks-upward-trend-in-cybercrime.
68  Eur. Parliament and Council. (2018, May 30). Directive nr. 2018/843 amending Directive (EU) 2015/849 on the prevention of the use of the financial system for the purposes of money laundering or terrorist financing. Retrieved from: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32018L0843.
69  Court of Justice. (2014, April 8). Digital Rights Ireland Ltd. ECLI:EU:C:2014:238. Retrieved from: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A62012CJ0293.
70  Some of the most important decisions: Court of Justice. (2016, December 21). Tele2 Sverige AB. ECLI:EU:C:2016:970; Court of Justice. (2018, October 2). Ministerio Fiscal. ECLI:EU:C:2018:788; Court of Justice. (2020, October 6). La Quadrature du Net e.a./Premier ministre e.a. ECLI:EU:C:2020:791; Court of Justice. (2021, March 2). Prokuratuur. ECLI:EU:C:2021:152; Court of Justice. (2022, September 20). SpaceNet and Telekom Deutschland. ECLI:EU:C:2022:702.
71  Court of Cassation. (2019, February 19). ECLI:BE:CASS:2019:CONC.20190219.1. Retrieved from: https://juportal.be/content/ECLI:BE:CASS:2019:CONC.20190219.1/NL.

threatened with being outlawed on account of its alleged detrimental impact on criminal investigations (Koops, 2013).

An additional difficulty is linked to the fact that service providers are often located in a Member State other than that of the requesting law enforcement authority. As a consequence, the EU adopted a legal framework in 2023 which should facilitate cross-border access to electronic evidence by law enforcement authorities. At the level of the CoE, on the other hand, a Second Additional Protocol to the Convention was adopted in 2022.72 An evaluation of the current Cybercrime Convention had shown that additional measures were deemed necessary to enhance cooperation and the ability of criminal justice authorities to obtain electronic evidence, in particular from service providers.73 Finally, as individual perpetrators cannot always be identified or prosecuted in an online environment, there is an increasing focus within the European Union on the responsibility of internet intermediaries when it comes to preventing and combatting criminally illegal content (supra).

4. CONCLUSION

Cybercrime challenges different fundamental principles of criminal law. In this contribution, we did not cover all those challenges, but focused on the criminalisation of new reprehensible or harmful phenomena. We came to the conclusion that, because criminal law is predicated on the concept of individual responsibility, there are some inherent limits to the extent to which it can provide an answer to certain harmful online phenomena. Therefore, we will probably see a further shift towards the responsibility of internet intermediaries to prevent future harms. Moreover, the question arises whether it is better to opt for broad technology-neutral legislation instead of technology-specific legislation in order to address those new phenomena. On the one hand, technology-neutral provisions are more flexible and allow us to keep up with fast-paced changes in technology and cybercrime. On the other hand, the vagueness or ambiguity of those provisions can also cause difficulties when trying to apply them to specific new forms of cybercrime.

When it comes to the collection of evidence, the technologies discussed have shown that digitisation obviously creates new opportunities for law enforcement authorities, such as open-source investigation and data driven investigations. Nevertheless, obstacles arise from the increasing use of encryption and the fact that digital evidence is often in the hands of third parties, such as service providers, which are sometimes located in different countries and refuse to comply with requests from law enforcement authorities. Common to all these new types of evidence gathering is that they increasingly interfere with the protection of the right to privacy and the right to a fair trial. Whereas international human rights standards require a solid legal basis in that regard, traditional criminal procedure laws do not always seem to cover these methods of investigation.

72  CoE. (2022, May 12). Second Additional Protocol to the Convention on Cybercrime on enhanced co-operation and disclosure of electronic evidence. Retrieved from https://rm.coe.int/1680a49dab.
73  CoE. (2021, November 17). Explanatory report to the Second Additional Protocol to the Convention on Cybercrime on enhanced co-operation and disclosure of electronic evidence. Retrieved from https://search.coe.int/cm/pages/result_details.aspx?objectid=0900001680a48e4b.


REFERENCES

Bellaert, W., Selimi, V. & Gouwy, R. (2022). The end of terrorist content online? In G. Vermeulen & W. Bellaert (Eds.). EU Criminal Policy: Advances and Challenges. Antwerp: Maklu.
Bruce, I. (2017). Forced biometric authentication – on a recent amendment in the Norwegian Code of Criminal Procedure. Digital Evidence and Electronic Signature Law Review, 14, 26–30.
Brunner, L. (2016). The liability of an online intermediary for third party content: The watchdog becomes the monitor: Intermediary liability after Delfi v Estonia. Human Rights Law Review, 16(1), 163–174.
Buckels, E.E., Trapnell, P.D. & Paulhus, D.L. (2014). Trolls just want to have fun. Personality and Individual Differences, 67, 97–102.
Carter, D. (2022). Revenge porn cases rise sharply in Belgium. The Brussels Times. Retrieved from https://www.brusselstimes.com/229460/revenge-porn-cases-rise-sharply-in-belgium.
Centre for Cybersecurity Belgium. (2018). Cryptojacking: Wat is het en waarom is het belangrijk? Retrieved from https://ccb.belgium.be/nl/document/cryptojacking-wat-het-en-waarom-het-belangrijk.
Citron, D.K. (2014). Hate Crimes in Cyberspace. Cambridge: Harvard University Press.
Clapson, C. (2022). Police protection for minister after death threats. VRT News. Retrieved from https://www.vrt.be/vrtnws/en/2022/06/15/police-protection-for-minster-after-death-threats/.
Conings, C. & Kerkhofs, J. (2018). U hebt het recht te zwijgen. Uw login kan en zal tegen u worden gebruikt? Over ontsleutelplicht, zwijgrecht en nemo tenetur. Nullum Crimen, 5, 547–472.
Correia, S.G. (2019). Responding to victimisation in a digital world: a case study of fraud and computer misuse reported in Wales. Crime Science, 8(1), 1–12.
Danaher, J. (2019). The law and ethics of virtual sexual assault. In W. Barfield & M. Blitz (Eds.). The Law of Virtual and Augmented Reality. Cheltenham: Edward Elgar Publishers.
den Os, T., Reumer, P. & van Toor, D. (2022). Gezichtsherkenning als lakmoesproef voor het biometrisch ontgrendelen van elektronische gegevensdragers. Computerrecht, 4, 259–267.
Dibbell, J. (1998). My Tiny Life: Crime and Passion in a Virtual World. New York: Henry Holt and Company.
Donalds, C. & Osei-Bryson, K.-M. (2019). Toward a cybercrime classification ontology: A knowledge-based approach. Computers in Human Behavior, 92, 403–418.
Douglas, D.M. (2016). Doxing: a conceptual analysis. Ethics and Information Technology, 18, 199–210.
Europol. (2021). Internet Organised Crime Threat Assessment (IOCTA) 2021. Publications Office of the European Union, Luxembourg, 18. Retrieved from https://www.europol.europa.eu/.
Europol. (2014). The Internet Organised Crime Threat Assessment (iOCTA) 2014. Publications Office of the European Union, Luxembourg. Retrieved from https://www.europol.europa.eu/.
Galič, M. (2022). Bulkbevoegdheden en strafrechtelijk onderzoek. Tijdschrift voor Bijzonder Strafrecht & Handhaving, 2022(2), 130–137.
Goudsmit, M. (2021). What makes a sex crime? A fair label for image-based sexual abuse. Boom Strafblad, 2(2), 67–73.
Griffiths, C. & Jackson, A. (2022). Intercepted communications as evidence: The admissibility of material obtained from the encrypted messaging service EncroChat. Journal of Criminal Law, 86(4), 271–276.
Han, Y., Roundy, K. & Tamersoy, A. (2021). Towards stalkerware detection with precise warnings. ACSAC 2021 – Proceedings of the Annual Computer Security Applications Conference, 957–969.
Hyslip, T.S. (2020). Cybercrime-as-a-service operations. In T.J. Holt & A.M. Bossler (Eds.). The Palgrave Handbook of International Cybercrime and Cyberdeviance (pp. 815–846). Cham: Springer International Publishing.
Isik, S. (2022, April 10). De grootste slachtoffers van deepfakes zijn vrouwen. NRC. Retrieved from https://www.nrc.nl/nieuws/2022/04/10/de-grootste-slachtoffers-van-deepfakes-zijn-vrouwen-a4110451.
Jasserand, C. (2022, April 28). Clearview AI: illegally collecting and selling our faces in total impunity? CiTiP Blog. Retrieved from https://www.law.kuleuven.be/citip/blog/clearview-ai-illegally-collecting-and-selling-our-faces-in-total-impunity-part-i/.
Jhaver, S., Ghoshal, S., Bruckman, A. & Gilbert, E. (2018). Online harassment and content moderation: The case of blocklists. ACM Transactions on Computer-Human Interaction, 25(2), 1–33.
Klaar, R.J.A. (2022). De strafvorderlijke normering van het geautomatiseerd overnemen van persoonsgegevens uit publiek toegankelijke bronnen met behulp van webcrawlers. Platform Modernisering Strafvordering. Retrieved from https://www.moderniseringstrafvordering.nl/.
Koops, B.J. (2013). Police investigations in Internet open sources: Procedural-law issues. The Computer Law and Security Report, 29(6), 654–665.
Koops, B.J. & Kosta, E. (2018). Looking for some light through the lens of "cryptowar" history: Policy options for law enforcement authorities against "going dark." The Computer Law and Security Review, 34(4), 890–900.
Lamberigts, S. (2022). Corporations and the Privilege Against Self-Incrimination. London: Bloomsbury.
Lemmens, K. (2022). Freedom of expression on the Internet after Sanchez v France: How the European Court of Human Rights accepts third-party 'censorship'. European Convention on Human Rights Law Review, 525–550.
Loo, J. & Findlay, M. (2022). Digitised justice: The new two tiers? Criminal Law Forum, 33(1), 1–38.
Matei, A. (2022). 'I was just really scared': Apple AirTags lead to stalking complaints. The Guardian. Retrieved November 10, 2022, from https://www.theguardian.com/technology/2022/jan/20/apple-airtags-stalking-complaints-technology.
Marquenie, T. (2017). The police and criminal justice authorities directive: Data protection standards and impact on the legal framework. The Computer Law and Security Review, 33(3), 324–340.
McGlynn, C. & Johnson, K. (2021). Cyberflashing laws: Comparative perspectives. In C. McGlynn & K. Johnson (Eds.). Cyberflashing (pp. 89–104). Bristol: Bristol University Press.
Moore, A. (2022). 'I didn't want it anywhere near me': how the Apple AirTag became a gift to stalkers. The Guardian. Retrieved November 8, 2022, from https://www.theguardian.com/technology/2022/sep/05/i-didnt-want-it-anywhere-near-me-how-the-apple-airtag-became-a-gift-to-stalkers.
Morten Birk, H.M. (2020). 'Directly in your face': A qualitative study on the sending and receiving of unsolicited 'dick pics' among young adults. Sexuality & Culture, 24(1), 72–93.
Moyakine, E. (2016). Online anonymity in the modern digital age: Quest for a legal right. Journal of Information Rights, Policy and Practice, 1(1).
Oerlemans, J.J. (2017). Investigating Cybercrime. Amsterdam: Amsterdam University Press.
Oerlemans, J. & van Toor, D. (2022). Legal aspects of the EncroChat operation: a human rights perspective. European Journal of Crime, Criminal Law and Criminal Justice, 30(3), 309–328.
Phillips, K., Davidson, J.C., Farr, R.R., Burkhardt, C., Caneppele, S. & Aiken, M.P. (2022). Conceptualizing cybercrime: Definitions, typologies and taxonomies. Forensic Sciences, 2(2), 379–398.
Powell, A. & Henry, N. (2017). Beyond 'revenge pornography'. In A. Powell & N. Henry (Eds.). Sexual Violence in a Digital Age (pp. 117–152). Palgrave Studies in Cybercrime and Cybersecurity. London: Palgrave Macmillan.
Quezada Tavárez, K., Vogiatzoglou, P. & Royer, S. (2021). Legal challenges in bringing AI evidence to the criminal courtroom. New Journal of European Criminal Law, 12(4), 1–21.
Rezende, I.N. (2020). Facial recognition in police hands: Assessing the 'Clearview case' from a European perspective. New Journal of European Criminal Law, 11(3), 375–389.
Rodriguez, K. & Baghdasaryan, M. (2022, February 15). UN Committee to begin negotiating new cybercrime treaty amid disagreement among states over its scope. Retrieved from https://www.eff.org/nl/deeplinks/2022/02/un-committee-begin-negotiating-new-cybercrime-treaty-amid-disagreement-among.
Royer, S. & Yperman, W. (2020). Wankele argumenten van hoogste Belgische hoven in uitspraken over decryptiebevel. Nullum Crimen, 5, 441–445.
Royer, S. & Dewitte, P. (2021, March 16). Drawing the line between privacy by design and criminal liability. CiTiP Blog. Retrieved from https://www.law.kuleuven.be/citip/blog/drawing-the-line-between-privacy-by-design-and-criminal-liability/.
Santos, I.L.S., Pimentel, C.E. & Mariano, T.E. (2022). Online trolling: The impact of antisocial online content, social media use, and gender. Psychological Reports, 2.
Schermer, B.W. & Oerlemans, J.J. (2022). De EncroChat-jurisprudentie: teleurstelling voor advocaten, overwinning voor justitie? Tijdschrift voor Bijzonder Strafrecht & Handhaving, 82–89.
Sleigh, J. (2022). Violent farm protest outside Dutch Ag Minister's home. The Scottish Farmer. Retrieved from https://www.thescottishfarmer.co.uk/news/20257636.violent-farm-protest-outside-dutch-ag-ministers-home/.
Sommer, P. (2022). Evidence from hacking: A few tiresome problems. Forensic Science International: Digital Investigation, 40, 1–7.
Steffens, T. (2020). Attribution of Advanced Persistent Threats. Heidelberg, Germany: Springer Vieweg.
Van Der Aa, S. (2018). New trends in the criminalization of stalking in the EU member states. European Journal on Criminal Policy and Research, 24(3), 315–333.
van Toor, D., Albers, W., Taylor Parkins-Ozephius, C. & Beekhuis, T. (2020). De ontgrendelplicht in rechtsvergelijkend perspectief (deel 1). Computerrecht, 4, 233.
Wahl, T. (2022). Germany: Federal Court of Justice confirms use of evidence in EncroChat cases. Eucrim. Retrieved from https://eucrim.eu/news/germany-federal-court-of-justice-confirms-use-of-evidence-in-encrochat-cases/.
Yar, M. (2006). Cybercrime and Society. London: Sage.

13. Privacy at a crossroads

Artur Pericles Lima Monteiro1

The right to privacy is at a crossroads. It was once the subject of intense theoretical disputations around not only its value and delineation (see, e.g., the essays compiled in Schoeman, 1984a), but also its very existence (e.g., Thomson, 1975) as a "distinct and coherent" right (Schoeman, 1984b).2 A pragmatic turn (Solove, 2002, 2008) thrust it into the spotlight of the most salient disputes about the regulation of technology, surveillance, and informational capitalism. Yet at the same time that privacy law's reach has become so broad that it is often (and not approvingly) referred to as the "law of everything" (Purtova, 2018), there is no agreement on whether and how it can actually provide the answers that society expects to the most pressing questions of our days. If anything, the agreement increasingly seems to be that it can't. The most exciting paths now being charted instead are perhaps best read as moving "beyond privacy" (Pałka, 2020)—and toward "data governance" (Viljoen, 2021).

This chapter tracks this trajectory of the right to privacy, focusing on informational privacy. It outlines its past of intense theoretical disputes and calls attention to the shift in the conceptualization of privacy embodied by the pragmatic turn. It also discusses the latest developments in the transition from privacy law to data governance. It raises questions we might want to answer before abandoning privacy law as an old toy. And it also discusses potential problems and limitations for data governance.

Before we proceed, a note on terminology. While the discussion here adopts "the right to privacy" unqualifiedly, it speaks to what is often termed informational privacy (e.g., Roessler, 2005, pp. 110–141), or information privacy (e.g., Richards, 2006). It does not take up other aspects of the right to privacy, such as decisional privacy (Roessler, 2005, p. 79), alternatively constitutional privacy (Tugendhat, 2017, p. 132), under which questions on reproductive rights (e.g., abortion and access to contraceptives), among others, are often discussed (see Marmor, 2015, pp. 23–25 for an objection to the decisional dimension of privacy). Informational privacy, taken as "the dimension of privacy that concerns information or data about a person" (Roessler, 2017, p. 200), is often seen as synonymous with data protection (e.g., Flaherty, 1991, p. 832). Further, there is debate about whether data protection is a synonym for (this dimension of) privacy or is distinct from it, and whether it is in the service of privacy only or of other rights and interests as well (see, e.g., Gellert & Gutwirth, 2013). And of course positive law can regulate data protection in a particular manner, attaching to it requirements, procedures, effects and

1  I am grateful to Przemek Pałka and Olia Kanevskaia Whitaker for their generous feedback and to Marina Federico for insightful discussion and literature recommendations.
2  Note that this theoretical question is not resolved by the fact that privacy is codified, e.g., under the International Covenant on Civil and Political Rights, regional human rights systems, or national constitutions. Thomson's objection was that "every right in the right to privacy cluster is also in some other right cluster" (1975, p. 313). The codification of a right to privacy does not negate that claim, both because bills of rights also have a symbolic meaning and because it would still need to be shown that Thomson was wrong that a privacy violation can occur without a violation of another, non-derivative right. See, for instance, Scanlon (1975).


institutions that do not attach to the right to privacy more generally (Kokott & Sobotta, 2013). I do not mean to take a position in this debate. My goal is to briefly examine a shift in thinking about these issues, under whatever heading they are cabined.

1. PRIVACY BEFORE THE TURN

Legal scholars writing about the right to privacy cannot seem to resist the urge to trace it to Samuel Warren and Louis Brandeis's 1890 Harvard Law Review article "The right to privacy" (Warren & Brandeis, 1890; see, e.g., Citron, 2022, pp. xii–xiii; Regan, 1995, pp. 14–15; Richards, 2022, p. 17; Solove, 2008, p. 1; Solow-Niederman, 2022, p. 368; Waldman, 2018, p. 11). There is perhaps good reason for this. Warren and Brandeis were responding to the challenges emerging technology and business models—"[r]ecent inventions and business methods" (Warren & Brandeis, 1890, p. 195)—posed to "the protection of the person". That might well serve as a description of what drives so much of the discussion about privacy today. Their argument was that a right to privacy should be recognized as part of "the more general right of the individual to be let alone" and the principle of "inviolate personality" (Warren & Brandeis, 1890, p. 205), which law, particularly tort law, undertakes to protect, regardless of "the interposition of the legislature" (Warren & Brandeis, 1890, p. 195).

There is debate about the extent to which Warren and Brandeis's article amounted to a revolution in thinking about privacy, and the extent to which it was successful. Neil Richards and Daniel Solove argue that the two "did not write on a nearly blank slate" (Richards & Solove, 2007, p. 145) and rather deliberately took US thinking on privacy on a path divergent from the English roots of confidentiality, which the famous 1890 article deliberately deemphasizes. James Whitman contends that the article ought to be seen as "an unsuccessful continental transplant" (2004, p. 1204), given how much it drew on German personality rights and French case law, noting it got a "cold reception" (2004, p. 1208). Megan Richardson follows the trail from Rudolf von Jhering to classic liberalism and hypothesizes that, "through Warren and Brandeis, the ideas of earlier thinkers such as Bentham, Mill and von Jhering were extended further in ways that their originators might not have contemplated, but might eventually have approved" (Richardson, 2017, p. 9). The contribution she credits the US duo with is "popularizing" the right to privacy because of how they framed snapshot photography as impacting the average person, whereas privacy was "earlier largely treated as a bourgeois right" (Richardson, 2017, p. 9).

What matters to the discussion in this chapter is how the Warren–Brandeis article came to grasp the right to privacy. Whatever its intellectual contribution or immediate impact on adjudication might have been, "The right to privacy" is characteristic of a mode of thinking about privacy that had lasting influence. In 1960, William Prosser identified four separate torts that had been adopted by courts in the United States "by the use of a single word supplied by Warren and Brandeis" (Prosser, 1960, p. 422). To him, these torts represented "four distinct kinds of invasion of four different interests of the plaintiff, which are tied together by the common name, but otherwise have almost nothing in common" (Prosser, 1960, p. 389). He was challenged by Edward Bloustein (1964), who argued not only that the privacy torts were tied by a common thread, but also that this thread extended to the regulation of government interferences under the Fourth Amendment to the US Constitution—and that missing that thread jeopardized the development of the law of privacy, including the tort remedies available.

Prosser was the reporter for the Restatement (Second) of Torts, and his view prevailed, at least when it comes to the classification of the privacy torts (Schwartz & Peifer, 2010, pp. 1938–1939). That, of course, doesn't preempt conceptual articulation of the right to privacy. In fact, up to the 1990s, scholarship was marked by energetic arguments about the right to privacy, its existence as a standalone right, its value, and its legal specification. Authors such as James Rachels (1975), Ruth Gavison (1984), Stanley Benn (1984), Judith Jarvis Thomson (1975), and Jeffrey Reiman (1976) debated what privacy means, what its justification is, and how the law should recognize it. These works remain widely cited today, including in highly influential work (e.g., Hadjimatheou, 2017; Koops et al., 2017; Richards & Hartzog, 2017; Roessler, 2005; Waldman, 2018). But they typically figure as relics of another era, a mode of thinking about privacy that has been left behind.

2. THE PRAGMATIC TURN

Indeed, the dominant thinking about privacy today stands in contrast to those texts. A perfect summary is provocatively provided by Woodrow Hartzog: "What is privacy? That's the wrong question" (Hartzog, 2021). Hartzog notes that Daniel Solove "has been extraordinarily influential for scholars, policymakers, and practitioners" (2021, p. 1680) and can take considerable credit for that now prevailing stance, which sees "chaos and futility [in] competing conceptualizations of privacy" (2021, p. 1679). Solove has been influential in his push for a pragmatic turn. To Hartzog, this turn means that:

Instead of squabbling over the binary boundaries of privacy, people who understand privacy as more of a vague umbrella term can leave the line-drawing question for another day and get to work identifying problems created by specific conduct, articulating the values implicated by those problems, and crafting solutions to the problems that serve those values. (Hartzog, 2021, p. 1681)

Whether Solove thinks conceptualizing privacy is not a priority but best left "for another day" (as Hartzog put it) or deems it "a quixotic search" (Calo, 2011, p. 1140) that should be dropped entirely is perhaps an open question. Regardless, Solove clearly positions his work in contrast with the earlier concerns in privacy thinking. In fact, he goes further: he blames that mode of privacy thinking for the failures he diagnoses in addressing concrete problems: "The difficulty in articulating what privacy is and why it is important has often made privacy law ineffective and blind to the larger purposes for which it must serve" (Solove, 2002, p. 1090). Solove's theory is instead oriented toward "attempting to solve certain problems" (Solove, 2002, p. 1129). This is the reason why he catalogs 16 types of "privacy problems", or "privacy violations", and emphasizes the different components that go into responding to each of them (Solove, 2006).

It would be a mistake, however, to see this turn as being just about a focus on policy and having privacy get results. The pragmatic turn goes further than merely setting aside the question of whether there is an overarching value in different manifestations of privacy problems. Solove is clear that "there is no overarching value of privacy" (2002, p. 1145). He also rejects a nonconsequentialist account of the value of privacy (Solove, 2002, pp. 1144–1145). Here Solove's objection is not practical but meta-ethical. He does not seek to refute the idea that privacy can have intrinsic value; his contrary position is premised on the idea that arguing about intrinsic value does not go beyond describing it as a "mere taste", such as a preference for vanilla ice cream (Solove, 2008, p. 84).

While not necessarily committed to a normative position on the value of privacy (or even to grounding regulation on privacy: Gellert & Gutwirth, 2013; Kokott & Sobotta, 2013), data protection regimes such as the European Union's General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are in agreement with the motivation behind the pragmatic turn, in that their operation does not hinge on identifying what should be protected as private. Any data about a person attracts data protection legislation,3 triggering an extensive list of record-keeping and other compliance mechanisms. While "privacy may hold much emotive and symbolic appeal" (Bennett, 1992, p. 13), data protection avoids what pragmatists see as the hopelessly subjective task of sorting through what should or should not count as covered (Bieker, 2022, pp. 179, 259). That shouldn't be overstated: value judgments can still be decisive in assessing whether, for instance, legitimate interests in data processing exist.4 But the undeniable point is that data protection runs from another starting point—so much so that many have insisted that data protection has outgrown privacy and leads a separate life (see, e.g., Gellert & Gutwirth, 2013). In practice, this makes data protection so ambitious that its scope can extend to virtually any activity—it becomes "the law of everything" (Purtova, 2018). That is true of comprehensive legislation such as the GDPR and other legislation fashioned after it and, to a lesser extent, of regimes like the CCPA.5

3. DATA GOVERNANCE AND THE FUTURE OF PRIVACY

The pragmatic turn extricated privacy from conceptual paralysis. The right to privacy gained ground with data protection legislation, not just in the European Union, but globally, including in several US states. Yet this new ground gained after the turn has come at a cost. Joris van Hoboken has described it in terms of a privacy disconnect: "a divide between the demands for [the] legitimacy [of pervasive data processing] and what current privacy governance offers in practice" (Hoboken, 2019, p. 256). That is, while data protection is expected to respond to the most dramatic questions facing society in terms of informational capitalism and widespread surveillance, this legislation has been made a victim of its own success—and can't deliver on its promises. The pragmatic turn made it operative (impactful, if not effective) to an extent that privacy law never was. Yet the complicated operation of the data protection machine does not provide answers directly. It relies on a proceduralized model that largely does not contain definitive, substantive positions (Gutwirth & Hert, 2006), and so can plausibly be articulated as legitimizing a sweep of data processing activities.

3  Different regimes have their own delineations of the concept of personal data and personally identifiable information. This is a relevant factor for comparing their scope, but one that has no impact on their categorization as creatures of the pragmatic turn.
4  It is noteworthy that the GDPR adopts the language of "reasonable expectations" as a factor for determining the legality of data processing on the basis of legitimate interests. See Recital 47. Also see the opinion of the Article 29 Data Protection Working Party (2014) on legitimate interests. While that opinion was issued under the Data Protection Directive (Directive 95/46/EC), the European Data Protection Board has relied on it to discuss lawful grounds for data processing under the GDPR. See, e.g., the guidelines on consent (2020).
5  Although not as wide as the GDPR in scope, the CCPA still goes much further than prior, sectoral, legislation in the United States. See Chander et al. (2021, p. 1759), which contrasts the "data protection" language of the GDPR to the information privacy of US legislation.

This means that before enforcement action is taken—and survives judicial review—business practices that most see as objectionable and in contradiction with data protection legislation are speeding ahead at full steam, coated in the veneer of legality gifted by proceduralized privacy law. Even if that could be overcome, data protection regimes are faced with more fundamental objections, aimed at the role of consent (Richards & Hartzog, 2019; Solove, 2013). Notice and consent, or choice, is described as unfeasible in practice, not least because of informational overload. It is canonical in scholarship that consent-based models are obsolete and doomed to fail. Yet, at the risk of heresy, perhaps the demise of consent has been announced prematurely: Apple's privacy changes (Kollnig et al., 2022) were leveraged by swathes of users and, albeit imperfect, seem consequential enough that Mark Zuckerberg blamed them for at least part of Meta's gloomy prospects (Conger & Chen, 2022). It is not clear that data protection law wouldn't be able to overcome the practical problems surrounding consent with modifications that preserve the edifice while making consent more meaningful. One potential route would be to establish a baseline, deviation from which would require consent. This might be achieved with default rules regarding data practices to which consent might be given, thus creating an incentive for abiding by such default rules, since only departures from them would merit attention from the data subject. Operating systems could be set up on the basis of those default rules or other user preferences, granting consent automatically, without the user being asked each time, and prompting the user only for processing that doesn't match their preferences (a schematic sketch of this idea follows below). Another potential approach would be enacting more detailed, sectoral regulation, such as what the United States' HIPAA (Health Insurance Portability and Accountability Act) Privacy Rule does for patient data disclosures to family members, for instance.6 The point here is that consent doesn't have to look like it does right now, with cookie banners galore. A second challenge to consent as the centerpiece of data protection blames it for its insignificance. Consent is rendered meaningless given the data "tyranny of the minority", that is, the fact that often "the volunteered information of the few can unlock the same information about the many" (Barocas & Nissenbaum, 2014, p. 61). That is, if consent is the legitimation for data processing, then individuals are deprived of their autonomy when inferences about them can be made on the basis of data collected from a smaller number who have consented. This time, the challenge seems to prove too much: while it is true that information about a population can be obtained without agreement from the majority of the population, that holds for "big data" as it does for statistics, opinion polls, and focus groups. Why tackling this should be the metric of success for privacy law is a question that has, in general, been conspicuously ignored. This brings us to data governance (Viljoen, 2021). Data governance law—defined by Viljoen as "the legal regime that governs how data about people is collected, processed, and used"—offers an exciting path for overcoming much of the criticism toward data protection by exploring collective self-determination.
If responding to the tyranny of the minority might be over-indexing for privacy law, it would fall precisely within the remit of data governance's aim "to account for population-level interests in the digital economy" (Viljoen, 2021, p. 653). It should be noted, however, that proponents of data governance have not addressed those pre-digital instances of obtaining population information from a limited number of members.

6  See Privacy of Individually Identifiable Health Information, 45 C.F.R. § 164.510(b) (2016).
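The default-rules route to consent mentioned above can be made concrete. What follows is a minimal sketch in Python of a hypothetical operating-system-level consent agent; the rule table, the ProcessingRequest structure, and every name in it are invented for illustration and do not track any existing API or legal instrument.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProcessingRequest:
    controller: str      # who wants to process the data
    purpose: str         # e.g. "navigation", "ad_targeting"
    data_category: str   # e.g. "location", "contacts"

# Hypothetical user baseline: purposes pre-authorised per data category.
DEFAULT_RULES = {
    "location": {"navigation"},
    "contacts": set(),            # never shared by default
    "usage_stats": {"analytics"},
}

def consent_decision(request, preferences=DEFAULT_RULES):
    """Grant automatically when a request matches the stored defaults;
    escalate to the user only when the request deviates from them."""
    allowed_purposes = preferences.get(request.data_category, set())
    if request.purpose in allowed_purposes:
        return "granted_by_default"
    return "ask_user"  # a deviation from the baseline merits attention

print(consent_decision(ProcessingRequest("MapsApp", "navigation", "location")))
# -> granted_by_default
print(consent_decision(ProcessingRequest("AdNetwork", "ad_targeting", "location")))
# -> ask_user
```

On this design, the data subject expresses preferences once, and attention is spent only on the residual class of non-matching requests, which is exactly the incentive structure the default-rules proposal relies on.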

Now, the approach of data governance can productively offer avenues for collective self-determination in situations where the affected collective can be discerned easily and with regard to a correspondingly limited set of data practices (e.g., Salomé Viljoen's Waterorg, a hypothetical water management authority for a drought-afflicted region tracking water consumption data "to ensure water will be distributed fairly and responsibly as it becomes scarcer" (Viljoen, 2021, p. 635)). We can see how some data processing that could potentially be impractical under current privacy law might be beneficially adopted with an institutional arrangement that allows for collective self-determination. Such unleashing puts data governance in tension with privacy law, casting it as privacy law's new and improved replacement. Even if we ignore that, however, the project of data governance could render irrelevant privacy law as characterized by the pragmatic approach. If privacy is to be valued only instrumentally (Richards, 2022, p. 68) and measured by the impacts it has on a particular social practice (Solove, 2002, p. 1144), then data governance and privacy law share the same success criteria: which data processing (or abstention from processing) produces the best societal outcome. This suggests a competition between two regulatory frameworks — one which privacy law is likely to lose. It could hardly expect to have a better answer to that question about societal outcomes than one backed by the democratic imprimatur of the self-government claimed by data governance. Data governance would then make privacy law redundant and antiquated, no more than a cumbersome compliance exercise. There are likely to be serious challenges to data governance in practice, particularly when the affected population cannot plausibly be thought to deliberate about its own interests, because individuals don't see themselves as members of a community—suppose, for instance, that we are talking about the class of people with type A blood. This is not so much an objection to the data governance approach itself as perhaps a limitation to it. A more serious problem with data governance presents itself if proponents assume that this approach would replace all privacy considerations. Even when the affected population is clearly defined and can deliberate as a community, privacy interests might be dramatically different across various groups. It might be the case, for instance, that a proposed smart-city data processing would implicate practices of a religion professed by a minority of the population. This shows that data governance and privacy law not only complement each other, as Lisa Austin notes,7 but also that the latter might restrict the options available for the collective self-government sought by the former. This should not be surprising if we think about democratic data governance just as we do about government generally, where constitutional rights impose boundaries to what even democratically elected officials can require. An objection to the example above might suggest that what is at stake is a different right, namely religious freedom, so that, in cases where only the right to privacy is implicated, data governance schemes would control. This need not be so, nor would questions only be raised with regard to specific social groups. COVID-19 contact-tracing apps are a good illustration.
Even though the importance of contact-tracing was established and a decentralized approach was developed that minimized data processing, the European Data Protection Board (EDPB) still advised that such apps should be voluntary, "a choice that should be made by individuals as a token of collective responsibility", emphasizing the importance of "individual trust" (Jelinek, 2020). While there might be disagreement about whether this conception of the right to privacy is compelling, this shows that there is still much theoretical work to be done.

7  "Privacy remains of vital importance". Austin (2022, p. 305).

220  Research handbook on law and technology might be disagreement about whether this conception of the right to privacy is compelling, this shows that there is still much theoretical work to be done.

4. CONCLUSION

We might say that an ambitious and encompassing body of legislation, having dispensed with theoretical vexations, has vastly expanded the preserves of privacy law, but has been found lacking in addressing a number of concerns regarding data processing. The perhaps natural culmination of the pragmatic turn and the privacy disconnect is a turn away from privacy and toward fresh thinking on data governance. That is the crossroads the right to privacy is at: at the same time as its reach and significance arguably have attained unprecedented levels, it seems to be on the brink of being left behind like an old toy which has outlived its owner's infancy.

REFERENCES

Article 29 Data Protection Working Party. (2014). Opinion 06/2014 on the notion of legitimate interests of the data controller under Article 7 of Directive 95/46/EC. Retrieved from https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2014/wp217_en.pdf.
Austin, L. (2022). From privacy to social legibility. Surveillance & Society, 20(3), 302–305.
Barocas, S. & Nissenbaum, H. (2014). Big data's end run around anonymity and consent. In J. Lane, V. Stodden, S. Bender & H. Nissenbaum (Eds.), Privacy, big data, and the public good: Frameworks for engagement (pp. 44–75). Cambridge: Cambridge University Press.
Benn, S.I. (1984). Privacy, freedom, and respect for persons. In F.D. Schoeman (Ed.), Philosophical dimensions of privacy: An anthology (pp. 223–244). Cambridge: Cambridge University Press.
Bennett, C.J. (1992). Regulating privacy. Ithaca: Cornell University Press.
Bieker, F. (2022). The right to data protection: Individual and structural dimensions of data protection in EU law. Den Haag: T.M.C. Asser Press.
Bloustein, E.J. (1964). Privacy as an aspect of human dignity: An answer to Dean Prosser. New York University Law Review, 39(6), 962–1007.
Calo, R. (2011). The boundaries of privacy harm. Indiana Law Journal, 86(3), 1131–1162.
Chander, A., Kaminski, M.E. & McGeveran, W. (2021). Catalyzing privacy law. Minnesota Law Review, 105(4), 1733–1802.
Citron, D.K. (2022). The fight for privacy: Protecting dignity, identity, and love in the digital age. New York: W.W. Norton.
Conger, K. & Chen, B.X. (2022). Apple's privacy changes could cost Meta big time. The New York Times. Retrieved from https://www.nytimes.com/2022/02/03/technology/apple-privacy-changes-meta.html.
European Data Protection Board (2020). Guidelines 05/2020 on consent under Regulation 2016/679. Retrieved from https://edpb.europa.eu/sites/default/files/files/file1/edpb_guidelines_202005_consent_en.pdf.
Flaherty, D.H. (1991). On the utility of constitutional rights to privacy and data protection. Case Western Reserve Law Review, 41(3), 831–855.
Gavison, R. (1984). Privacy and the limits of law. In F.D. Schoeman (Ed.), Philosophical dimensions of privacy: An anthology (pp. 346–402). Cambridge: Cambridge University Press.
Gellert, R. & Gutwirth, S. (2013). The legal construction of privacy and data protection. Computer Law & Security Review, 29(5), 522–530.
Gutwirth, S. & Hert, P.D. (2006). Privacy, data protection and law enforcement: Opacity of the individual and transparency of power. In E. Claes, A. Duff & S. Gutwirth (Eds.), Privacy and the criminal law (pp. 61–104). Antwerp: Intersentia.
Hadjimatheou, K. (2017). Surveillance technologies, wrongful criminalisation, and the presumption of innocence. Philosophy & Technology, 30(1), 39–54.

Hartzog, W. (2021). What is privacy? That's the wrong question. The University of Chicago Law Review, 88(1), 1677–1688.
Hoboken, J. van. (2019). The privacy disconnect. In R.F. Jørgensen (Ed.), Human rights in the age of platforms (pp. 255–284). London: The MIT Press.
Jelinek, A. (2020). Ref: OUT2020-0028. Retrieved from https://edpb.europa.eu/sites/default/files/files/file1/edpbletterecadvisecodiv-appguidance_final.pdf.
Kokott, J. & Sobotta, C. (2013). The distinction between privacy and data protection in the jurisprudence of the CJEU and the ECtHR. International Data Privacy Law, 3(4), 222–228.
Kollnig, K., Shuba, A., Kleek, M.V., Binns, R. & Shadbolt, N. (2022). Goodbye tracking? Impact of iOS App Tracking Transparency and Privacy Labels. 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 508–520).
Koops, B.-J., Newell, B.C., Timan, T., Skorvanek, I., Chokrevski, T. & Galič, M. (2017). A typology of privacy. University of Pennsylvania Journal of International Law, 38(2), 483–575.
Marmor, A. (2015). What is the right to privacy? Philosophy & Public Affairs, 43(1), 3–26.
Pałka, P. (2020). Data management law for the 2020s: The lost origins and the new needs. Buffalo Law Review, 68(2), 559–640.
Prosser, W.L. (1960). Privacy. California Law Review, 48(3), 383–423.
Purtova, N. (2018). The law of everything. Broad concept of personal data and future of EU data protection law. Law, Innovation and Technology, 10(1), 40–81.
Rachels, J. (1975). Why privacy is important. Philosophy & Public Affairs, 4(4), 323–333.
Regan, P.M. (1995). Legislating privacy. Chapel Hill: University of North Carolina Press.
Reiman, J.H. (1976). Privacy, intimacy, and personhood. Philosophy & Public Affairs, 6(1), 26–44.
Richards, N.M. (2006). The information privacy law project. Georgetown Law Journal, 94(4), 1087–1140.
Richards, N.M. (2022). Why privacy matters. Oxford: Oxford University Press.
Richards, N.M. & Hartzog, W. (2017). Privacy's trust gap: A review. Yale Law Journal, 126, 1180–1224.
Richards, N.M. & Solove, D.J. (2007). Privacy's other path: Recovering the law of confidentiality. Georgetown Law Journal, 96(1), 123–182.
Richards, N. & Hartzog, W. (2019). The pathologies of digital consent. Washington University Law Review, 96(6), 1461–1503.
Richardson, M. (2017). The right to privacy: Origins and influence of a nineteenth-century idea. Cambridge: Cambridge University Press.
Roessler, B. (2005). The value of privacy (R.D.V. Glasgow, Trans.). Cambridge: Polity.
Roessler, B. (2017). Privacy. Proceedings of the Aristotelian Society, 117(2), 187–206.
Scanlon, T. (1975). Thomson on privacy. Philosophy & Public Affairs, 4(4), 315–322.
Schoeman, F.D. (Ed.). (1984a). Philosophical dimensions of privacy: An anthology. Cambridge: Cambridge University Press.
Schoeman, F.D. (1984b). Privacy: Philosophical dimensions. American Philosophical Quarterly, 21(3), 199–213.
Schwartz, P.M. & Peifer, K.-N. (2010). Prosser's "Privacy" and the German right of personality: Are four privacy torts better than one unitary concept? California Law Review, 98(6), 1925–1987.
Solove, D.J. (2002). Conceptualizing privacy. California Law Review, 90, 1087–1155.
Solove, D.J. (2006). A taxonomy of privacy. University of Pennsylvania Law Review, 154(3), 477–564.
Solove, D.J. (2008). Understanding privacy. Cambridge: Harvard University Press.
Solove, D.J. (2013). Privacy self-management and the consent dilemma. Harvard Law Review, 126, 1880–1903.
Solow-Niederman, A. (2022). Information privacy and the inference economy. Northwestern University Law Review, 117(2).
Thomson, J.J. (1975). The right to privacy. Philosophy & Public Affairs, 4(4), 295–314.
Tugendhat, M. (2017). Liberty intact: Human rights in English law. Oxford: Oxford University Press.
Viljoen, S. (2021). A relational theory of data governance. Yale Law Journal, 131(2), 573–654.
Waldman, A.E. (2018). Privacy as trust. Cambridge: Cambridge University Press.
Warren, S.D. & Brandeis, L.D. (1890). The right to privacy. Harvard Law Review, 4(5), 193–220.
Whitman, J.Q. (2004). The two Western cultures of privacy: Dignity versus liberty. Yale Law Journal, 113(6), 1151–1221.

14. When computers say no: towards a legal response to algorithmic discrimination in Europe

Raphaële Xenidis1

1. INTRODUCTION

Concerns over breaches of fundamental rights arising from the deployment of algorithmic systems have increased in recent years. In particular, research across the globe shows that algorithmic systems used in various decision-making processes can discriminate against legally protected groups. For instance, in a landmark decision the Italian Tribunale di Bologna found that the reputational ranking algorithm used by the delivery platform Deliveroo to give riders access to a system for booking working shifts was indirectly discriminatory.2 In deciding which riders to prioritise, the system constructed a measure of their 'reliability' and 'participation' that did not take into account legally protected reasons for absence from work such as strike actions, illness, disability, personal beliefs, or care duties (still performed mostly by women). By treating all cancellations of work shifts indistinctly, the system unfairly limited riders' work opportunities. In Austria, the so-called 'AMS' algorithm was commissioned by the national employment agency to grant or withhold job seeker support based on a prediction of job seekers' chances of finding employment. Researchers showed that in some versions, the predictive system assigned a negative weight to female job candidates (in particular, when they had care duties3) and that it took into account features such as candidates' migration background, health impairments and age, thus making the system potentially discriminatory against legally protected groups (Kayser-Bril, 2019; Allhutter et al., 2020). Research has brought to light numerous other examples of algorithmic discrimination in Europe (for a recent overview, see Wulf, 2022). To a certain extent, anti-discrimination laws in place in Europe can address algorithmic discrimination. Yet, thorny questions arise regarding the interpretation and the application of these laws. Existing legislation also exhibits gaps and shortcomings, especially in the context of machine learning systems. This chapter examines these problems and proposes reflections on how to enforce equality in the algorithmic society. To do so, it first scrutinises the roots and mechanics of algorithmic discrimination and proposes working definitions with the aim of disentangling existing semantic confusions. Second, this chapter investigates the shortcomings of the existing anti-discrimination law framework, distinguishing between regulatory, conceptual, doctrinal and procedural gaps.

1  This research is linked to a Marie Skłodowska-Curie Fellowship project conducted at iCourts at the University of Copenhagen and the University of Edinburgh, School of Law. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 898937.
2  Tribunal of Bologna, Order no. 2949/2019, 31 December 2020. Retrieved from https://www.bollettinoadapt.it/wp-content/uploads/2021/01/Ordinanza-Bologna.pdf
3  By contrast, care duties did not influence men's score negatively.


Finally, this chapter proposes some reflections on enforcing (algorithmic) equality. In so doing, this chapter reflects on the normative implications of different possible interpretations of the legal framework in light of the problem of algorithmic discrimination.

2. FROM ALGORITHMIC BIAS TO ALGORITHMIC DISCRIMINATION

2.1 How Does Algorithmic Discrimination Arise?

The well-known phrase 'garbage in, garbage out', recast by Mayson as 'bias in, bias out' (Mayson, 2018), places the focus on data as the origin of algorithmic discrimination. Because structural inequalities are ingrained in any social data, as the aggregated product of past discriminatory decisions, learning algorithms internalise and re-enact such patterns of inequality. Examples such as the now infamous Amazon CV screening prototype, which learnt from past hiring decisions to systematically discriminate against female job candidates, show that data-driven discrimination is a reality (Dastin, 2018). However, algorithmic discrimination can also originate elsewhere. As illustrated by the story of Dr Selby, a gym customer who could not access the women's changing room because the system associated the prefix 'Dr' with male rather than female clients, stereotypes also pervade problem definition and model design (Turk, 2015). At the operationalisation stage too, algorithmic discrimination can arise. Human agents display 'disparate interactions' with the output of algorithmic decision support systems, for example, overestimating the risks posed by racialised groups and underestimating those posed by majority groups in pretrial release decisions (Green & Chen, 2019). Hence, the sources of algorithmic discrimination are multiple and difficult to disentangle. Ultimately, discrimination is likely to result from complex 'co-production' processes at the intersection of technological deployment, social practices and political objectives (Allhutter et al., 2020). Because of narratives emphasising data as the source of algorithmic discrimination, policy discussions on how to address the problem have given particular attention to the accuracy of training datasets.4 For example, Article 10 of the draft EU AI Act foresees quality requirements for data collection, data preparation and processing, and the identification of data gaps. Ensuring that training and validation data is representative is an important aspect of addressing algorithmic discrimination. So-called 'accuracy-affecting injustices' can indeed bias data collection and explain why some algorithmic systems underperform for, and underserve, certain population groups (Hellman, 2021). For instance, collecting data via users' internet access can lead to certain communities being under-represented in the data collected, for example older users or residents of rural or economically deprived areas where the internet infrastructure is underdeveloped. In turn, an algorithmic system trained with that data might not adequately account for the needs and behaviours of these communities, therefore potentially leading to injustices and disadvantages.

4  For example, accuracy is a legal requirement for high-risk systems in the draft EU AI Act. See inter alia Recitals 43 and 49, Art. 13(3)(b)(ii), Art. 15(1)(2) in Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts COM(2021) 206 final, (2021).
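The 'bias in, bias out' dynamic described in this section can be reproduced in a few lines of code. The following is a minimal sketch only, written in Python with numpy and scikit-learn assumed; the data is synthetic and every number is invented. It trains a hiring classifier on past decisions that systematically marked female candidates down, and the model faithfully internalises the markdown.

```python
# A stylized 'bias in, bias out' demonstration on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)   # 0 = male, 1 = female (hypothetical coding)
skill = rng.normal(0, 1, n)      # identically distributed in both groups
# Past hiring decisions: skill mattered, but women were systematically marked down.
hired = (skill - 0.8 * gender + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Predicted hiring probability for two equally skilled candidates (skill = 0):
for g, label in [(0, "male"), (1, "female")]:
    p = model.predict_proba([[g, 0.0]])[0, 1]
    print(f"{label} candidate: predicted hiring probability {p:.2f}")
# The learner reproduces the historical markdown even though skill is
# identically distributed across groups: the record is the bias.
```

Nothing here depends on the particular learner: any model that fits the historical record well will reproduce the patterns the record contains.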

Harms of this accuracy-affecting kind can in part be addressed by improving the representativeness and accuracy of training and validation data. However, when 'non-accuracy-affecting injustices' are responsible for algorithmic discrimination, measures improving data collection or quality are ineffective (Hellman, 2021). For instance, if an algorithmic system used to calculate workers' pay was trained on average pay data across Europe, the output would likely exhibit a difference between men's and women's pay. This difference corresponds to the gender pay gap, which is about 13% on average in Europe.5 The data used to train the system is factually correct, but it reflects historical injustices. Addressing algorithmic discrimination in this case requires treating not only the symptoms (data representativeness) but also the roots (gender inequality) of such disadvantage. Policy and legal responses therefore need to go beyond requiring data accuracy, quality and transparency and to also make use of measures, such as positive action, that address the structural causes of algorithmic discrimination.

2.2 Clearing Some Semantic Confusions

Two main strands of disciplinary semantics coexist and can give rise to confusion. On the one hand, computer science and ethics literature mainly use the terms 'bias' and 'fairness' to capture the harms and injustices of algorithmic systems and the means available to address them. On the other hand, the legal literature qualifies unlawful algorithmic distinctions between groups as 'discrimination' and frames means of redress in terms of 'equal treatment'. Juxtaposing the two disciplinary frameworks raises difficult questions: How do notions of bias and fairness map onto equality law? In other terms, when does bias qualify as discrimination? What bias is unlawful? And what fairness metrics does equality law require?

The draft EU AI Act offers an interesting case study for interrogating the overlaps and differences between these different terms. The regulatory proposal implicitly equates addressing algorithmic bias with preventing algorithmic discrimination. At first sight, it gives the impression that algorithmic discrimination takes on an important place in the regulatory apparatus that it sets up. Both the explanatory memorandum and Recital 28 indicate that when classifying an AI system as high-risk, it is of particular relevance to consider '[t]he extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter' including 'non-discrimination' (Art. 21 EUCFR) and 'equality between women and men' (Art. 23 EUCFR). Recitals 35, 36 and 37 warn that AI systems used in core sectors such as education, employment and essential services are liable to 'violate […] the right not to be discriminated against' and 'perpetuate historical patterns of discrimination'.6 Recital 44 explicitly refers to non-discrimination law when stressing the importance of high-quality data requirements to ensure that a high-risk AI system 'does not become the source of discrimination prohibited by Union law'.

5  European Commission. (7 November 2022). The gender pay gap situation in the European Union. Retrieved from https://commission.europa.eu/strategy-and-policy/policies/justice-and-fundamental-rights/gender-equality/equal-pay/gender-pay-gap-situation-eu_en
6  The wording changed slightly in the compromise version of November 2022: Permanent Representatives Committee (25 November 2022). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts: General approach, Brussels, 14954/22. Retrieved from https://data.consilium.europa.eu/doc/document/ST-14954-2022-INIT/en/pdf

Despite these numerous acknowledgements of the problem of algorithmic discrimination in the explanatory memorandum and the preamble, the binding part of the proposal mainly uses the term 'bias' and only mentions 'discrimination' twice.7 Yet the polysemy and broad scope of the term 'bias' (e.g. Friedman & Nissenbaum, 1996) pave the way for legal uncertainty concerning the interpretation of the obligations falling on providers and users of algorithmic systems.8 The EU's High-Level Expert Group on Artificial Intelligence defines algorithmic bias as 'systematic and repeatable errors in a computer system that create unfair outcomes, such as favouring one arbitrary group of users over others'.9 This definition is much broader than the definition of discrimination, which, in EU law, captures unfair outcomes only when they harm protected groups within certain contexts that fall within the scope of non-discrimination law. In other words, anti-discrimination law does not address bias itself, but rather some of the harms which it creates for legally protected groups. EU anti-discrimination law creates a harmonised set of minimum requirements for the 27 EU member states as well as EEA countries, candidates for EU membership and countries that have approximated their legislation to EU equality law. It sets three conditions for algorithmic bias to amount to discrimination. First, algorithmic bias has to create harm or disadvantage to a protected group or based on a protected ground. The personal scope of EU anti-discrimination law includes race or ethnic origin, sex or gender, disability, religion or belief, sexual orientation and age.10 Second, EU law addresses algorithmic bias only in certain areas of life, including work and access to certain goods and services.11

7  In contrast to previous versions that did not mention discrimination in the binding part of the text, the compromise text adopted in November 2022 now explicitly acknowledges that bias can lead to 'discrimination prohibited by Union law' in Art. 10(2)(f) and recognises the right of national public authorities or bodies in charge of supervising or enforcing the respect of inter alia the right to non-discrimination in relation to the use of high-risk AI systems to request or access information in Art. 64(3). See also Art. 10(5), Art. 14(4)(b) and Art. 15(3) (European Commission, 2021a). Interestingly, the term 'fairness' is entirely absent from the binding part of the proposal.
8  Amendment 78 of the European Parliament aims to mitigate this uncertainty in the context of the processing of special categories of personal data to ensure the detection and correction of 'negative bias' in relation to high-risk AI systems by adding to Recital 44 that '[n]egative bias should be understood as bias that create[s] direct or indirect discriminatory effect against a natural person'. See European Parliament. (14 June 2023). Amendments on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) A9-0188/2023, available at https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.html
9  European Commission. Impact Assessment accompanying the Proposal for a Regulation of the European Parliament and of the Council laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts, Brussels, SWD(2021) 84 final (p. 91).
10  See Directive 2000/43/EC of 29 June 2000 implementing the principle of equal treatment between persons irrespective of racial or ethnic origin, OJ L 180/22; Directive 2000/78/EC of 27 November 2000 establishing a general framework for equal treatment in employment and occupation, OJ L 303/16; Directive 2004/113/EC of 13 December 2004 implementing the principle of equal treatment between men and women in the access to and supply of goods and services, OJ L 373/37; Directive 2006/54/EC of the European Parliament and of the Council of 5 July 2006 on the implementation of the principle of equal opportunities and equal treatment of men and women in matters of employment and occupation (recast), OJ L 204/23. In EU primary law, and in particular Art. 21 of the EU Charter of Fundamental Rights, the personal scope of protection is broader and non-exhaustive, but the CJEU has ruled that it could not be relied on to extend the scope of EU secondary law (C-354/13). See Case C-354/13, Judgment of 18 December 2014: Fag og Arbejde (FOA) v Kommunernes Landsforening (KL). Court of Justice of the European Union. EU:C:2014:2463.

Third, to qualify as discrimination, algorithmic bias must fall within the central dichotomy of EU anti-discrimination law. If a protected group or category is treated differently from others, it qualifies as direct discrimination. That would be the case, for example, when an algorithmic system used to screen CVs learns to use candidates' ethnic background as a predictor of lesser performance. Alternatively, bias that creates a particular disadvantage for a protected group without using the protected characteristic as a decision-making factor qualifies as indirect discrimination. That might be the case if, for example, an algorithmic credit scoring system used predictors such as income or employment history, which are facially neutral towards protected groups, but which still have a disadvantageous impact on women. While direct discrimination cannot be justified in principle, indirect discrimination comes with an open-ended justification regime. A prima facie indirectly discriminatory provision, criterion or practice can be objectively justified if it serves a legitimate aim and the means of achieving that aim are appropriate and necessary.12 Mapping algorithmic bias onto the EU anti-discrimination framework yields a complex picture where the harms deriving from algorithmic bias only qualify as legally prohibited discrimination when fulfilling the above three conditions linked to EU anti-discrimination law's personal, material and conceptual scope. Mapping fairness onto the legal framework is also a difficult task. The principle of equality is a contextually moving target, especially when courts are called to conduct a proportionality test to assess justifications. This raises the question of which definitions of fairness satisfy the requirements of the principle of equal treatment and under what conditions (Weerts, Xenidis, Tarissan, Olsen & Pechenizkiy, 2023). The case law of the ECJ shows that equal treatment cannot translate into a one-size-fits-all fairness formula because anti-discrimination law is polysemous and assumes different social and legal functions (Xenidis, 2021b).
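The absence of a one-size-fits-all formula is easy to demonstrate numerically. The sketch below is illustrative only (Python with numpy assumed; the credit model, the groups and all figures are invented): it computes two common fairness measures for the same hypothetical system, namely the selection-rate ratio between groups (reminiscent of the US 'four-fifths' heuristic, which EU law does not adopt) and the gap in false-negative rates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)        # 0 = comparator group, 1 = protected group
creditworthy = rng.random(n) < 0.6   # hypothetical ground truth
# Invented model behaviour: slightly stricter with the protected group.
approved = creditworthy & (rng.random(n) > 0.05 + 0.10 * group)

def selection_rate(g):
    return approved[group == g].mean()

def false_negative_rate(g):
    creditworthy_in_group = (group == g) & creditworthy
    return (~approved[creditworthy_in_group]).mean()

print(f"selection-rate ratio: {selection_rate(1) / selection_rate(0):.2f}")
print(f"false-negative-rate gap: "
      f"{false_negative_rate(1) - false_negative_rate(0):.2%}")
# With these invented numbers the ratio is around 0.9 (it would pass a
# four-fifths-style screen) while the protected group's false-negative
# rate is roughly three times the comparator's.
```

Which of these measures tracks a 'particular disadvantage' in the legal sense is precisely what the directives do not specify, and the answer may differ across contexts and grounds.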

3. THE EU EQUALITY PUZZLE: WHAT GROUPS ARE PROTECTED FROM ALGORITHMIC DISCRIMINATION AND WHEN?

How does EU anti-discrimination law apply to algorithmic discrimination? Answering this question brings to light a series of regulatory, conceptual, doctrinal and procedural gaps and challenges. Not only do the specific mechanics of algorithmic discrimination create uncertainty regarding the interpretation and application of EU anti-discrimination rules, often exacerbating existing tensions, but they also question the frontiers of EU anti-discrimination law. Thus, certain forms of algorithmic discrimination could fall into the cracks of EU anti-discrimination law.

11  As explained below, the protection is not even for all protected grounds across these areas.
12  Note that the qualification of direct and indirect discrimination is different from US law. In EU law, direct discrimination does not require showing intent.

3.1 A Patchy Material Scope

As hinted above, EU anti-discrimination law is a regulatory puzzle that combines provisions of EU primary and secondary law. Article 19 TFEU gives the EU power to adopt legislation prohibiting discrimination on grounds of sex, racial or ethnic origin, religion or belief, disability, age, and sexual orientation. Article 157 TFEU guarantees the equal treatment of men and women at work, especially with regard to pay. In the EU Charter of Fundamental Rights, Article 21 provides for a non-exhaustive list of protected criteria including, but also going beyond, those listed in Article 19 TFEU. In turn, Article 23 of the Charter ensures equality between men and women. In addition to these provisions of primary law, four equality directives prohibit discrimination on grounds of sex, race or ethnic origin, religion or belief, disability, age, and sexual orientation.13 Even though these directives pertain to discrimination in general, their application can be extended to algorithmic discrimination in particular.14 Despite a seemingly broad personal scope, the application of EU law is not equally extensive. With a broad brush, Directive 2000/43 applies to discrimination on grounds of race or ethnic origin in the areas of employment, goods and services, social protection, and education. Directive 2004/113 and Directive 2006/54 offer a similar protection in relation to sex discrimination.15 However, discrimination on grounds of religion or belief, disability, age and sexual orientation is only prohibited in matters related to employment and occupation. This means that an algorithmic system that would exclude, for example, end users above a certain age or with a certain religious affiliation from accessing given services or from purchasing certain goods would not, in principle, be contrary to EU secondary anti-discrimination law as it stands.16 In addition to these gaps in the material scope of EU anti-discrimination law, further exceptions exist, which might negatively impact EU law's grasp on algorithmic discrimination, such as the fact that the content of media and advertising is not covered by the ban on sex discrimination.17 When considering the pervasiveness of algorithmic systems in the market for goods and services, these gaps seriously undermine the robustness of the existing framework. Importantly, however, EU law only foresees minimum requirements and member states are in principle free to adopt a higher level of protection against discrimination.

13  See Directive 2000/78/EC of 27 November 2000 establishing a general framework for equal treatment in employment and occupation, OJ L 303/16; Directive 2004/113/EC of 13 December 2004 implementing the principle of equal treatment between men and women in the access to and supply of goods and services, OJ L 373/37; Directive 2006/54/EC of the European Parliament and of the Council of 5 July 2006 on the implementation of the principle of equal opportunities and equal treatment of men and women in matters of employment and occupation (recast), OJ L 204/23; Directive 2000/43/EC of 29 June 2000 implementing the principle of equal treatment between persons irrespective of racial or ethnic origin, OJ L 180/22.
14  As highlighted in previous work, this 'translation' to the algorithmic context comes with conceptual, doctrinal and procedural challenges (Xenidis & Senden, 2019; Xenidis, 2021a; Gerards & Xenidis, 2021).
15  Except in relation to the content of media or advertising and to education, see below.
16  Of course, individual member states can opt for a more protective legal framework as long as it does not breach the EU treaties. A proposal for evening out the material scope of EU anti-discrimination law across protected grounds has been pending since 2008. See European Commission. Proposal for a Council Directive on implementing the principle of equal treatment between persons irrespective of religion or belief, disability, age or sexual orientation, Brussels, COM(2008) 426 final. In addition, Art. 21 of the EU Charter of Fundamental Rights might apply when member states are implementing EU law in matters falling outside the scope of the equality directives.
17  See Recital 13, Directive 2004/113/EC of 13 December 2004 implementing the principle of equal treatment between men and women in the access to and supply of goods and services, OJ L 373/37.

3.2 Personal Scope: Where to Draw the Boundaries?

When algorithmic systems are used to support decision-making, their primary function is often to discriminate, in the sense of differentiating. Yet, legally speaking, some forms of distinction are prohibited. This reflects a societal consensus over the moral wrong of differentiating between certain social groups in certain contexts. EU secondary law only prohibits a finite number of such distinctions, namely when they are based on sex or gender, race or ethnic origin, disability, religion or belief, age and sexual orientation. Should the law extend to algorithmic distinctions that unfairly and systematically exclude given social groups from accessing valuable social goods? This raises deep-seated normative questions concerning the social function and the mandate of non-discrimination law. For example, should anti-discrimination law apply to algorithmic decision-making systems that rely on behavioural data (such as eating, sleeping, or sports habits) to exclude consumers from given insurance policies or to personalise the prices of such services in exclusionary ways? With the pervasive deployment of algorithmic systems, there is a risk that new patterns of systemic discrimination emerge based on aggregated social sorting performed by predictive analytics, algorithmic profiling and decision-making. This could have grave socio-economic consequences for social groups that are not protected under EU equality law (Gerards & Borgesius, 2022; Wachter, 2022). In addition, some scholars have drawn attention to the fact that 'emergent' forms of algorithmic distinctions might not always correspond to socially salient features (Mann & Matzner, 2019; Leese, 2014). Even where it is clear that such algorithmic distinctions are morally unfair, is it the role of anti-discrimination law to address them? And if so, how? In responding to these questions, there is scope to consider how the open-ended list of protected grounds provided in Article 21 of the Charter, similar to Article 14 of the European Convention on Human Rights, could be used to address algorithmic discrimination beyond the categories protected by EU secondary law. This possibility would nevertheless be limited to the subsidiary space where member states are implementing EU law but in situations falling outside the scope of the equality directives (Kilpatrick, 2014). Another friction that arises when attempting to apply EU anti-discrimination law to algorithmic discrimination relates to intersectionality. Algorithmic profiling powered by big data affects the granularity of the classifications underpinning decision-making. In other words, algorithmic distinctions are very likely to compound numerous data points, potentially at the intersection of several protected groups. This could give rise to so-called intersectional forms of discrimination, i.e. discrimination originating in several inextricably linked vectors of disadvantage. The problem is that the Court of Justice of the European Union has not recognised intersectional discrimination as a prohibited form of discrimination so far.
In Parris, it stated that 'there is […] no new category of discrimination resulting from the combination of more than one [protected] groun[d] […] that may be found to exist where discrimination on the basis of those grounds taken in isolation has not been established'.18 Thus discrimination induced by the use of algorithmic systems conceptually challenges the unidimensional or 'single axis' understanding of discrimination prevalent in EU law.

18  See Case C-443/15. Judgment of 24 November 2016: David L. Parris v. Trinity College Dublin and Others. Court of Justice of the European Union. EU:C:2016:897.

Since intersectional discrimination is already pervasive in society but not legally recognised as such, it also risks being amplified through feedback loops while at the same time still remaining invisible. Indeed, the lack of participation and representation opportunities for intersectionally marginalised groups in society leads to increased algorithmic invisibility for these groups. The Gender Shades study has shown, for example, that face recognition systems display high rates of non- or misrecognition of the faces of women of colour (Buolamwini & Gebru, 2018). To effectively capture algorithmic discrimination, EU law should evolve towards a more complex conceptualisation of discriminatory harms, for instance by extending its scope to intersectional and systemic discrimination.
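Why intersectional harm escapes per-ground analysis can be seen in a toy audit. The sketch below (Python with pandas assumed) uses entirely invented figures, loosely patterned on the Gender Shades setup: the overall error rate looks moderate and each single-axis breakdown dilutes the problem, while the intersectional breakdown reveals where the failure is concentrated.

```python
import pandas as pd

# Invented evaluation results for a hypothetical face-recognition system.
results = pd.DataFrame(
    [  # gender, skin type, number evaluated, number misclassified
        ("male",   "lighter", 1000,   8),
        ("female", "lighter", 1000,  40),
        ("male",   "darker",  1000,  55),
        ("female", "darker",  1000, 320),
    ],
    columns=["gender", "skin", "n", "errors"],
)

results["error_rate"] = results["errors"] / results["n"]
print(results[["gender", "skin", "error_rate"]])

overall = results["errors"].sum() / results["n"].sum()
print(f"overall error rate: {overall:.1%}")  # about 10.6% in this toy data
for axis in ("gender", "skin"):
    per_axis = results.groupby(axis)[["errors", "n"]].sum()
    print(per_axis["errors"] / per_axis["n"])  # each axis peaks at about 18-19%
# Only the intersectional cell (female, darker) shows the 32% failure rate:
# a 'single axis' audit, like a 'single axis' legal test, cannot see it.
```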

4. LEGALLY QUALIFYING ALGORITHMIC HARMS: REVISITING THE DICHOTOMY BETWEEN DIRECT AND INDIRECT DISCRIMINATION

The third type of difficulty encountered when applying EU anti-discrimination law to algorithmic harms is that of shoehorning algorithmic discrimination into the direct/indirect bifurcated framework. The definitions of direct and indirect discrimination provided in the EU equality directives highlight three main legal criteria that in principle serve to distinguish between direct and indirect discrimination: the (absence of) neutrality of a given measure or practice, the existence of a discriminatory 'treatment' vs. discriminatory 'effects', and the presence of group vs. individual harm.19 Applying such distinguishing criteria to algorithmic harms to determine whether they qualify as direct or indirect discrimination raises difficult normative questions. Qualifying algorithmic unfairness as direct or indirect discrimination is crucial because it leads to, respectively, a closed or an open-ended regime of justifications. This regime determines whether and how users of algorithmic technologies can lawfully use a system that is biased against a protected group. Hence, rather than simply responding to a legal technicality, the choice of qualifying algorithmic harms as direct or indirect discrimination directly shapes liability for algorithmic discrimination and thus amounts to deciding how the burdens of inequality are allocated among users of algorithmic technologies, potential victims and society at large.

First, how to qualify the neutrality of algorithmic operations? In other terms, what is a neutral criterion or practice in the context of algorithmic decision-making systems? Or else, in the absence of bias mitigation practices, can data-driven systems ever be conceptualised as neutral towards protected grounds? This is a thorny question because untreated data is very likely to reflect past discrimination, patterns which algorithmic systems are then likely to treat as relevant factors for future predictions. One might even go so far as to call algorithmic discrimination a self-fulfilling prophecy. Hence, these systems can hardly qualify as neutral. At the same time, as argued elsewhere, algorithmic discrimination mostly takes the form of proxy discrimination (Prince & Schwarcz, 2019).

Machine learning algorithms are trained to recognise patterns in large datasets. As a result, even if developers remove labels like sex, race or age, these systems still identify related patterns through correlated variables and can therefore unlawfully discriminate. For instance, an algorithm that used the distance between workers' homes and the workplace as a predictor for job tenure was found to be discriminatory by proxy because it inferred workers' membership of an ethnic group based on zip code data (Williams et al., 2018). In the same vein, we could imagine that sports data collected through the use of apps influence the price paid by end users for loans or insurance. Or the content watched on media platforms such as YouTube or Netflix might reveal one's cultural affiliation and perhaps correlate with one's ethnic background, age and even socio-economic background. In Dekker, the ECJ recognised that when such proxies are inextricably linked to protected characteristics (e.g. pregnancy and sex), they cannot be considered neutral and such proxy discrimination qualifies as direct discrimination.20 However, the Court's approach to what constitutes a proxy 'inextricably linked' to a protected ground is not entirely clear. This was illustrated in Jyske Finans, where the Court found that an applicant's country of birth did not suffice 'in itself, [to] justify a general presumption that that person is a member of a given ethnic group'.21 It is thus difficult to predict whether the Court will treat algorithmic proxy discrimination as direct or indirect on this basis. Moreover, the notion of neutrality is easily manipulated depending on how the comparator group – or in Westen's terms the desirable level of equality in society (Westen, 1982) – is defined, as demonstrated in cases like Achbita, WABE, and VL.22 Deciding whether an algorithmic system can qualify as neutral is key because it impacts the finding of direct or indirect discrimination and thus the corresponding justification routes, with consequences for users' liability. Yet, data-driven decision-making calls for re-assessing the contours of the neutrality criterion.

Second, as the Advocate General recalled in VL, '[t]here is "indirect" discrimination where the difference resides not so much in the treatment as in the effects which it produces'. Yet qualifying algorithmic operations in terms of treatment or effect raises questions about how to qualify the entangled forms of agency that exist in human-machine relationships and sociotechnical systems. Algorithmic discrimination is technologically mediated by machines that can learn to discriminate. In this context, and in light of automation biases, how should we conceive of the agency of the humans in the loop? The notion of direct discrimination would capture algorithmic discrimination as a form of differential treatment consisting of a human decision interpreting an algorithmic recommendation.23

20  Case C-177/88. Judgment of 8 November 1990: Elisabeth Johanna Pacifica Dekker v Stichting Vormingscentrum voor Jong Volwassenen (VJV-Centrum) Plus. Court of Justice of the European Union. EU:C:1990:383.
21  See Case C-668/15. Judgment of 6 April 2017: Jyske Finans A/S v Ligebehandlingsnævnet, acting on behalf of Ismar Huskic. Court of Justice of the European Union. EU:C:2017:278.
22  See Case C-157/15. Judgment of 14 March 2017: Samira Achbita and Centrum voor gelijkheid van kansen en voor racismebestrijding v G4S Secure Solutions NV. Court of Justice of the European Union. EU:C:2017:203; Joined Cases C-804/18 and C-341/19. Judgment of 15 July 2021: IX v WABE eV and MH Müller Handels GmbH v MJ. Court of Justice of the European Union. EU:C:2021:594; Case C-16/19. Judgment of 26 January 2021: VL v Szpital Kliniczny im. dra J. Babińskiego Samodzielny Publiczny Zakład Opieki Zdrowotnej w Krakowie. Court of Justice of the European Union. EU:C:2021:64. For instance, it has been argued that comparing religious and non-religious employees, as opposed to employees whose religious beliefs mandate the wearing of religious garment and employees whose (absence of) religious beliefs do(es) not, yields a different understanding of the neutrality of a practice (Cloots, 2018; Sharpston, 2021).

In this legal construct, machine support would not detract from the integrity of human agency. Human agents would not be able to justify algorithmic discrimination by invoking their lack of intent or awareness that the algorithmic recommendation was biased. Conversely, the notion of indirect discrimination would construct algorithmic discrimination as the effect of multiple conjugated causes, among which figure one or more human practices. While strategically assigning responsibility to the human agent, it casts a looser net around liability by permitting escape routes via justifications. The same kind of dilemma arises when considering the question of causation. In direct discrimination, the notion of treatment 'on grounds of' a protected category and its interpretation by the ECJ have often been said to amount to a causation requirement. In other words, the difference in treatment arises 'because of' a protected ground. How should causation be qualified when machine learning systems operate on the basis of correlations? At first sight, this would speak for conceptualising algorithmic discrimination as indirect discrimination. Yet, the causation requirement has often been relaxed by the ECJ, for instance in its case law on discrimination by association or direct proxy discrimination.24 Again, it appears that both the direct and the indirect discrimination framework can be constructed to fit algorithmic discrimination, but choosing one over the other pertains more to a strategic than to a technical interpretation of the legal framework. In fine, this choice pertains to how legal rules distribute the costs of inequality among applicants, defendants and society. Third, even though the ECJ has departed from this distinction, in principle direct discrimination is related to individual harm and indirect discrimination to group harm. The problem is that algorithmic systems upset this distinction. They do not treat subjects qua individuals but rather based on algorithmic representations inferred from aggregated data and clustering. Hence these systems compound individual and collective harm by letting structural discrimination feed into individualised assessments. Even if a given algorithmic classification is not accurate for a given individual, its inclusion in a given algorithmic cluster will determine the treatment applicable to that person. Conceptualising algorithmic discrimination as individual or group harm is once again a strategic, rather than a technical, choice. If, as has been argued in the literature so far (e.g. Hacker, 2018; Borgesius, 2020; Kelly-Lyth, 2021),25 courts go down the indirect discrimination route to qualify algorithmic discrimination, this will create important challenges. Indirect discrimination raises essential questions such as 'how much' disadvantage amounts to a 'particular disadvantage' prohibited under EU anti-discrimination law and what ground truth to use as a baseline for comparison. It also entails an open pool of justifications and a proportionality test that is difficult to 'translate' in the context of algorithmic systems. Attempting to assess whether algorithmic discrimination can be justified leads to assessing decisions pertaining to technical trade-offs between accuracy and fairness.
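Both the proxy mechanism and the accuracy/fairness trade-off just mentioned can be illustrated in miniature. The sketch below is hypothetical throughout (Python with numpy and scikit-learn assumed; the 'postcode' feature and all parameters are invented): the protected attribute is never given to the model, yet a correlated postcode feature reconstructs it, and removing that proxy narrows the outcome gap at the price of accuracy on the historically biased labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 20_000
minority = rng.random(n) < 0.3
# Residential segregation makes postcode a stand-in for ethnicity.
postcode = np.where(minority, rng.normal(-1, 0.6, n), rng.normal(1, 0.6, n))
tenure = rng.normal(0, 1, n)
# Historical decisions disadvantaged the minority group directly.
label = (tenure - 1.5 * minority + rng.normal(0, 0.5, n)) > 0

def audit(features, name):
    model = LogisticRegression().fit(features, label)
    pred = model.predict(features)
    accuracy = (pred == label).mean()
    gap = pred[~minority].mean() - pred[minority].mean()
    print(f"{name}: accuracy {accuracy:.2f}, selection gap {gap:+.2f}")

# Ethnicity is never a feature; the proxy does the work regardless.
audit(np.column_stack([tenure, postcode]), "with postcode proxy")
audit(tenure.reshape(-1, 1), "proxy removed")
# Dropping the proxy shrinks the group gap but also the accuracy measured
# against the (historically biased) labels: the trade-off a court assessing
# an objective justification would, in effect, be asked to adjudicate.
```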

23  For example, the outputs of the AMS algorithm were framed as 'second opinions' and the system itself as a 'mere support system' while decisions were 'delegated to case workers' (Allhutter et al., 2020).
24  See Case C-83/14. Judgment of 16 July 2015, 'CHEZ Razpredelenie Bulgaria' AD v Komisia za zashtita ot diskriminatsia. Court of Justice of the European Union. EU:C:2015:480.
25  More recently, it has been argued that algorithmic discrimination can amount to direct discrimination (Adams-Prassl, Binns & Kelly-Lyth, 2023).


5. CONCLUSIONS: SOME REFLECTIONS ON ALGORITHMIC EQUALITY

This chapter has shown how algorithmic technologies disrupt the application of existing legal constructs and thereby destabilise the justice arrangements entrenched in the law. Our task as legal scholars is to reflect on the normative implications of different possible interpretations of the legal framework in light of problems such as algorithmic discrimination. Such a reflection demands articulating the normative equilibria underpinning existing legal constructs and revisiting the law 'strategically' to safeguard or restore (and in some cases perhaps even alter) these value frameworks. In particular, national and EU non-discrimination laws need to be applied in a teleological and instrumental manner to guarantee that technological evolutions do not jeopardise fundamental rights. Anti-discrimination law is technology-neutral and its application should not stop at the frontiers of the digital world. Possibilities for legal resilience include harnessing the principles of effectiveness and purposive interpretation. At the same time, reflecting on the strategic nature of legal interpretation and the deep-seated normative questions it raises invites us to look ahead. Thinking about how to redress algorithmic discrimination is an invaluable opportunity to think about the transformative potential of anti-discrimination law. Specifically, fixing algorithmic bias (assuming that is even possible) will not solve the problem of algorithmic discrimination. Instead, preventing discriminatory algorithms from becoming a self-fulfilling prophecy requires transforming the status quo. Legally speaking, this invites us to think about carving out a greater role for positive action measures in EU anti-discrimination law and to reflect on how best to tap into their transformative potential in the algorithmic society.

REFERENCES
Adams-Prassl, J., Binns, R. & Kelly-Lyth, A. (2023). Directly discriminatory algorithms. The Modern Law Review, 86(1), 144–175. Retrieved from https://onlinelibrary.wiley.com/doi/abs/10.1111/1468-2230.12759.
Allhutter, D., Cech, F., Fischer, F., Grill, G. & Mager, A. (2020). Algorithmic profiling of job seekers in Austria: How austerity politics are made effective. Frontiers in Big Data, 3, 1–17. Retrieved from https://doi.org/10.3389/fdata.2020.00005.
Borgesius, F.J.Z. (2020). Strengthening legal protection against discrimination by algorithms and artificial intelligence. The International Journal of Human Rights, 24(10), 1572–1593. Retrieved from https://doi.org/10.1080/13642987.2020.1743976.
Buolamwini, J. & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, Proceedings of Machine Learning Research, 81, 77–91. Retrieved from https://proceedings.mlr.press/v81/buolamwini18a.html.
Cloots, E. (2018). Safe harbour or open sea for corporate headscarf bans? Achbita and Bougnaoui. Common Market Law Review, 55(2), 589–624.
Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Retrieved from https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G.
Friedman, B. & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems (TOIS), 14(3), 330–347.
Gerards, J. & Xenidis, R. (2021). Algorithmic Discrimination in Europe: Challenges and Opportunities for EU Gender Equality and Non-Discrimination Law. Brussels: Publications Office of the European Union. Retrieved from https://op.europa.eu/en/publication-detail/-/publication/082f1dbc-821d-11eb-9ac9-01aa75ed71a1.

Gerards, J. & Borgesius, F.Z. (2022). Protected grounds and the system of non-discrimination law in the context of algorithmic decision-making and artificial intelligence. Colorado Technology Law Journal, 20, 1. Retrieved from https://ctlj.colorado.edu/?p=860.
Green, B. & Chen, Y. (2019). Disparate interactions: An algorithm-in-the-loop analysis of fairness in risk assessments. ACM FAT* '19: Proceedings of the Conference on Fairness, Accountability, and Transparency, 90–99. Retrieved from https://doi.org/10.1145/3287560.3287563.
Hacker, P. (2018). Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Review, 55(4), 1143–1185. Retrieved from https://doi.org/10.54648/cola2018095.
Hellman, D. (2021). Big data and compounding injustice. Virginia Public Law and Legal Theory Research Paper, No. 2021-27. Retrieved from https://ssrn.com/abstract=3840175.
Kayser-Bril, N. (2019). Austria's employment agency rolls out discriminatory algorithm, sees no problem. Algorithm Watch. Retrieved from https://algorithmwatch.org/en/austrias-employment-agency-ams-rolls-out-discriminatory-algorithm/.
Kelly-Lyth, A. (2021). Challenging biased hiring algorithms. Oxford Journal of Legal Studies, 41(4), 899–928. Retrieved from https://doi.org/10.1093/ojls/gqab006.
Kilpatrick, C. (2014). Article 21 – non-discrimination. In S. Peers, T. Hervey, J. Kenner & A. Ward (Eds.). The EU Charter of Fundamental Rights: A Commentary (pp. 579–604). London: Hart Publishing.
Leese, M. (2014). The new profiling: Algorithms, black boxes, and the failure of anti-discriminatory safeguards in the European Union. Security Dialogue, 45(5), 494–511. Retrieved from https://doi.org/10.1177/0967010614544204.
Mann, M. & Matzner, T. (2019). Challenging algorithmic profiling: The limits of data protection and anti-discrimination in responding to emergent discrimination. Big Data Society, 6(2), 1–11. Retrieved from https://doi.org/10.1177/20539517198958.
Mayson, S.G. (2018). Bias In, Bias Out. University of Georgia School of Law Legal Studies Research Paper No. 2018-35. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3257004.
Prince, A.E.R. & Schwarcz, D. (2019). Proxy discrimination in the age of artificial intelligence and big data. Iowa Law Review, 105, 1257–1318. Retrieved from https://ilr.law.uiowa.edu/print/volume-105-issue-3/proxy-discrimination-in-the-age-of-artificial-intelligence-and-big-data.
Sharpston, E. (2021). Shadow Opinion in Joined Cases C-804/18 and C-341/19 IX v WABE e.V and MH Müller Handels GmbH v MJ. Retrieved from http://eulawanalysis.blogspot.com/2021/03/shadow-opinion-of-former-advocate.html.
Turk, V. (2015). When algorithms are sexist. Vice. Retrieved from http://www.vice.com/en_us/article/ezvkee/when-algorithms-are-sexist.
Wachter, S. (2022). The theory of artificial immutability: Protecting algorithmic groups under anti-discrimination law. Tulane Law Review, 97(2), 149–204. Retrieved from https://www.tulanelawreview.org/pub/artificial-immutability.
Weerts, H., Xenidis, R., Tarissan, F., Olsen, H.P. & Pechenizkiy, M. (2023). Algorithmic unfairness through the lens of EU non-discrimination law: Or why the law is not a decision tree. 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23). Retrieved from https://arxiv.org/abs/2305.13938.
Westen, P. (1982). The empty idea of equality. Harvard Law Review, 95(3), 537–596. Retrieved from https://doi.org/10.2307/1340593.
Williams, B.A., Brooks, C.F. & Shmargad, Y. (2018). How algorithms discriminate based on data they lack: Challenges, solutions, and policy implications. Journal of Information Policy, 8, 78–115. Retrieved from https://doi.org/10.5325/jinfopoli.8.2018.0078.
Wulf, J. (2022). Automated Decision-Making Systems and Discrimination: Understanding Causes, Recognizing Cases, Supporting Those Affected. Berlin: Algorithm Watch. Retrieved from https://algorithmwatch.org/en/wp-content/uploads/2022/06/AutoCheck-Guidebook_ADM_Discrimination_EN-AlgorithmWatch_June_2022.pdf.
Xenidis, R. & Senden, L. (2019). EU non-discrimination law in the era of artificial intelligence: Mapping the challenges of algorithmic discrimination. In U. Bernitz, X. Groussot, J. Paju & S.A. de Vries (Eds.). General Principles of EU Law and the EU Digital Order. Alphen aan den Rijn: Wolters Kluwer.

Xenidis, R. (2021a). Tuning EU equality law to algorithmic discrimination: Three pathways to resilience. Maastricht Journal of European and Comparative Law, 27(6), 736–758. Retrieved from https://journals.sagepub.com/doi/full/10.1177/1023263X20982173.
Xenidis, R. (2021b). The polysemy of anti-discrimination law: The interpretation architecture of the Framework Employment Directive at the Court of Justice. Common Market Law Review, 58(6), 1649–1696. Retrieved from https://doi.org/10.54648/cola2021108.

15. International human rights law in the digital age: perspectives from the UN human rights system
Claudia Victoria Ionita and Machiko Kanetake

1. INTRODUCTION

The growth of digital surveillance infrastructures, the spread of disinformation that undermines credible journalism, and the proliferation of algorithmic 'profiling' that reinforces systemic inequality: these are a few of the warning signs of a profound conflict between the development and use of digital technologies and respect for human rights. In her report on 'racial discrimination and emerging digital technologies', E. Tendayi Achiume, the United Nations' Special Rapporteur on racial discrimination, observed that emerging digital technologies 'exacerbate and compound' existing inequities based upon race, ethnicity, and national origin, raising concerns about racial discrimination 'in the design and use of emerging digital technologies'.1 The need to control the spread of disinformation and hate speech has in turn been used by governments as a justification to arbitrarily disrupt access to the internet, which has 'profound adverse impacts' on human rights,2 affecting not only the freedom of expression, but also the enjoyment of economic, social, and cultural rights.3 At the same time, digital technologies are indispensable in documenting human rights violations and promoting accountability for them.4

In addressing the relevance of international human rights law (a branch of international law containing principles, rules, standards, and practices with which to realise the human rights of individuals and groups) in the digital age, it is important to be aware of whose normative interpretation is shaping the field's responses to the challenges associated with the use of digital technologies and networked spaces. Law can be understood differently depending on law-applying actors or other actors who present their views on the interpretation of international human rights law. Such actors are not limited to domestic courts, regional human rights courts, and other international courts and tribunals. The political organs of states, advisory bodies for governments, national human rights institutions, treaty bodies to monitor human

1  UN Human Rights Council. (2020, 18 June). Report of the Special Rapporteur on contemporary forms of racism, racial discrimination, xenophobia, and related intolerance. Racial discrimination and emerging digital technologies: a human rights analysis, UN Doc. A/HRC/44/57, para 4.
2  UN Human Rights Council. (2022, 13 May). Report of the Office of the UN High Commissioner for Human Rights. Internet shutdowns: Trends, causes, legal implications and impacts on a range of human rights, UN Doc. A/HRC/50/55, para 66.
3  UN Human Rights Council. (2022, 13 May). Report of the Office of the UN High Commissioner for Human Rights. Internet shutdowns: Trends, causes, legal implications and impacts on a range of human rights, UN Doc. A/HRC/50/55.
4  E.g. UN Human Rights Council. Report of the Working Group on Enforced or Involuntary Disappearances. Enforced or involuntary disappearance, UN Doc. A/HRC/51/31, para 36.


rights treaties, international organisations, business enterprises, human rights NGOs, journalists, and scholars take part in the development of international human rights norms, although some are more influential than others. It is also important to note that we use 'norms' throughout this chapter to refer not only to existing laws, but also to so-called 'soft law' standards, which states may consider that they ought to abide by, despite the fact that such standards are not formally binding at the international level.

Among a wide range of actors that shape international human rights law's responses to the digital age, this chapter focuses on the role of the United Nations (UN) human rights system. After all, one of the purposes of the United Nations is to promote respect for human rights.5 For the sake of this chapter, such a system includes both bodies that are established under the UN Charter (Charter-based bodies), and committees that monitor the implementation of human rights conventions sponsored by the United Nations (treaty-based bodies). This chapter does not focus on the work of the United Nations' specialised agencies active in the governance of the information society, such as the International Telecommunication Union (ITU) and the UN Educational, Scientific and Cultural Organization (UNESCO). While many other international organisations are active in facilitating the protection of human rights (Benedek, 2019), the United Nations is the most universal treaty-based international organisation in terms of the number of state members, which comes with both potential and challenges for the United Nations' role in promoting human rights.

In order to have a glimpse of the United Nations' contribution, we have gathered information through the website of the Office of the UN High Commissioner for Human Rights (OHCHR), entitled 'Digital Space and Human Rights'.6 More specifically, we examined 43 documents linked to the OHCHR website as of 1 March 2023, filtered using the following pre-selected key words: 'digital spaces', 'artificial intelligence and algorithms', 'assistive devices and technologies', 'big data', 'cybersecurity and data protection', 'digital privacy', 'drones and autonomous weapon systems', 'emerging technologies', 'privacy', 'privacy and surveillance', and 'surveillance'. These 43 documents were published between 2014 and 2022. We also collected and analysed an additional 22 documents published between 2011 and 2022, through official UN websites (un.org, digitallibrary.un.org, and ohchr.org), using key words such as 'digital', 'technology', 'free speech', 'privacy', 'resolution', 'discrimination', 'ICCPR', 'UDHR', 'ICESCR', 'expression', 'online', and 'hate speech'. The types of documents include resolutions, reports, and public statements.

This chapter maps digital spaces (understood in the context of this chapter to refer to online spaces accessible through devices such as computers or smartphones) and human rights as interpreted by the United Nations' human rights mechanisms. This chapter is primarily descriptive in nature. We will start with a brief explanation of the United Nations' human rights system for the sake of our analysis (Section 2). This will be followed by a short narrative on the United Nations' construction of three selected rights: freedom of opinion and expression, equality and non-discrimination, and the right to privacy (Section 3).
There is no doubt that the United Nations engages in the promotion of a number of other human rights affected by digital technologies and networked spaces. Yet these three appeared most predominantly in

5  Charter of the United Nations, 24 October 1945, 1 UNTS XVI, Article 1(3).
6  UN High Commissioner for Human Rights. Digital space and human rights. https://www.ohchr.org/en/topic/digital-space-and-human-rights (last accessed 15 May 2023).

the 43 documents filtered through the United Nations' aforementioned website,7 partly due to the keywords that the OHCHR selected for the sake of the website entitled 'Digital Space and Human Rights'.

In the digital era, one of the common challenges that the United Nations—as well as many other entities—has repeatedly raised is the observance of human rights norms by business entities, inasmuch as international human rights law has traditionally developed to regulate states' organs (Section 4). While the United Nations has been active since around 2011 in promoting human rights in the digital environment, precisely because of its broad membership, the United Nations cannot escape from interstate politics, which limits its ability to be critical of governmental misuse of digital technologies (Section 5).
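As a purely illustrative aside, the kind of keyword tally reported below in footnote 7 could be reproduced along the following lines, assuming the collected documents were saved locally as plain-text files; the folder name and the abbreviated keyword list are hypothetical:

```python
# Hypothetical reconstruction of a keyword tally over locally saved
# UN documents. Folder name and keyword list are invented for the sketch.
from pathlib import Path

KEYWORDS = [
    "right to privacy", "discrimination", "freedom of expression",
    "equality", "right to education", "non-discrimination",
]

counts = {kw: 0 for kw in KEYWORDS}
for doc in Path("un_documents").glob("*.txt"):  # hypothetical folder
    text = doc.read_text(encoding="utf-8").lower()
    for kw in KEYWORDS:
        # Naive substring counting: a real tally would need to handle
        # overlaps (e.g. 'discrimination' also matches inside
        # 'non-discrimination') and hyphenation across line breaks.
        counts[kw] += text.count(kw)

for kw, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"'{kw}': {n}")
```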

2. CHARTER-BASED AND TREATY-BASED BODIES

The United Nations' human rights approaches cannot be understood as a unified endeavour. The United Nations is a complex body involving governmental officials, secretariat officials, designated experts, and civil society organisations. As noted above, the United Nations' human rights mechanisms can be divided into Charter-based bodies and treaty-based bodies. The Charter-based bodies are established through the application of the UN Charter and include the UN General Assembly and its subsidiary organ, the UN Human Rights Council (UNHRC). The UNHRC, which replaced the Commission on Human Rights, consists of 47 Member States elected by the General Assembly on the basis of the number of seats allocated to five regional groups. One of the core mechanisms of the UNHRC is called 'Special Procedures' (Pinto, 2018). The UNHRC may establish Special Procedures for a country-specific mandate or a theme-based mandate. Such a task is carried out by a mandate holder (i.e. Special Rapporteur, Independent Expert, expert members of a Working Group) selected through a competitive process. In contrast to the UNHRC, which is made up of governmental representatives, the mandate holders are supposed to serve in their personal capacities as experts and as such do not represent a state. While the Experts can, during their tenure, take on additional commitments, they must not engage in activities that would create a conflict of interest with their UN duties. While the scope of human rights where Special Procedures apply varies depending on their mandate and the state concerned, the 'bottom layer' is the Universal Declaration of Human Rights (UDHR),8 which serves as a normative yardstick for the work of the Special Procedures (Pinto, 2018, para 49). On top of the UDHR, Special Procedures refer to human rights treaties sponsored by the United Nations, including the International Covenant on Civil and Political

7  According to our keyword search of the 43 documents, the following keywords returned the results indicated in brackets: 'right to privacy' (547), 'discrimination' (313), 'freedom of expression' (234), 'freedom of opinion and expression' (92), 'equality' (89), 'right to education' (53), 'non-discrimination' (35), 'right to life' (29), 'right to health' (18), 'right to an effective remedy' (8), 'right to be forgotten' (5), 'right to work' (5), 'right to an adequate standard of living' (4), 'right to liberty and security' (3), 'right to participate in cultural life' (2), 'freedom of religion' (2), 'right to development' (excluding the one used in the description of the agenda item) (1), 'right to assembly' (1), 'right to social security' (1), 'right to peaceful protest' (1). We must reiterate that the number of keyword occurrences found in the 43 documents does not reflect the level of UN-wide engagement with different human rights in the digital age.
8  Universal Declaration of Human Rights (adopted 10 December 1948) UNGA Res 217 A(III).

Rights (ICCPR)9 and the International Covenant on Economic, Social and Cultural Rights (ICESCR).10

To provide a concrete example of the Charter-based mechanism, the Special Rapporteur on the right to education, Koumbou Boly Barry, issued her report focusing on the impact of the digitalisation of education on the right to education.11 In highlighting the benefits and risks of digitalising education, the Special Rapporteur raised concerns over the growing inequalities, which were exacerbated by the reliance upon digital education.12 She also pointed out international inequalities, in which corporations located in the Global North dominate the provision of services.13 The Special Rapporteur articulated her view that the 'massive imbalance in power' between technology providers and users, due to the former's ability to accumulate data about students and teachers, is 'at odds with the human rights principles of freedom, equality, autonomy and participation'.14 While these reports are non-binding, the work of the Special Rapporteurs has been essential to the advancement of human rights. Their input has been critical in the development and application of international human rights law, as their recommendations have allowed the United Nations to bring global attention to a wide range of human rights issues (Subedi, 2011).

On top of the Charter-based mechanisms, the UN human rights system is understood to include a series of bodies that are established not through the UN Charter, but according to nine UN-sponsored human rights conventions, in order to monitor the implementation of such conventions. There are ten treaty-monitoring bodies (or simply UN treaty bodies)—such as the Human Rights Committee (HRC Committee) and the Committee on the Elimination of Discrimination Against Women (CEDAW Committee)—which consist of elected and independent experts. Broadly speaking, the treaty-monitoring bodies issue three types of documents regarding the respective human rights treaties: General Comments and Recommendations, which are addressed to all state parties; Concluding Observations and Concluding Comments, which are addressed to a particular state party; and Views (and Decisions) and 'Suggestions and Recommendations', which pertain to individual communications and petitions.

To give a specific example, one of the treaty-monitoring bodies, the Committee on the Rights of the Child (CRC Committee), which monitors the implementation of the Convention on the Rights of the Child, adopted General Comment No. 25 focusing on children's rights in relation to digital environments.15 The CRC Committee used the term 'violence in the digital environment', observing that states parties have a 'duty to protect children from infringements of their rights by business enterprises', including 'the right to be protected from all forms of violence in the digital environment'.16 It must be noted that General Comments and Recommendations are designed, not only to facilitate states parties' implementation, but also

9  International Covenant on Civil and Political Rights (1966, 16 December) 999 UNTS 171.
10  International Covenant on Economic, Social and Cultural Rights (1966, 16 December) 993 UNTS 3.
11  UN Human Rights Council. (2022, 19 April). Report of the Special Rapporteur on the right to education, Koumbou Boly Barry. Impact of the digitalization of education on the right to education, UN Doc. A/HRC/50/32.
12  Ibid., paras 52–55.
13  Ibid., paras 60–61.
14  Ibid., para 62.
15  Committee on the Rights of the Child. (2021, 2 March). General comment no. 25 (2021) on children's rights in relation to the digital environment, UN Doc. CRC/C/GC/25.
16  Ibid., para 37.

to stimulate the activities of non-state stakeholders in fostering the realisation of treaty rights (Takata & Hamamoto, 2023). In that sense, General Comment No. 25 has been referred to by a wide range of stakeholders, including, for example, the European Commission's 'European strategy for a better internet for kids (BIK+)'.17

3. THE UNITED NATIONS' ENGAGEMENT WITH SPECIFIC HUMAN RIGHTS IN THE DIGITAL ENVIRONMENT

3.1 The Right to Freedom of Opinion and Expression in the Digital Era

For several decades after the UN General Assembly's adoption of the UDHR, digital technology and human rights were two terms that would rarely sit next to one another. This is partly illustrated by the Geneva Plan of Action, adopted in 2003 by the UN-sponsored World Summit on the Information Society, which merely makes cursory reference to 'rights to privacy, data, and consumer protection'.18 In this sense, the United Nations did not give an 'early warning' of some of the detrimental impacts of digital technologies on human rights.

In the discourse of the United Nations, one of the turning points came with the UNHRC's Resolution 20/8 of 5 July 2012 on the 'promotion, protection and enjoyment of human rights on the internet'.19 This resolution was facilitated by the report of the UN Special Rapporteur on the freedom of opinion and expression, Frank La Rue, which articulated the applicability of international human rights norms to the internet.20 The UNHRC's Resolution 20/8 of 2012 is significant in that it was the first UN resolution to affirm that human rights, in particular the freedom of expression, have a place not only in the physical world, but in the digital one as well. The resolution provides that 'the same rights that people have offline must also be protected online'.21 The resolution was hailed as innovative, a sign that the United Nations was finally able to recognise that human rights extend to the digital world as well (Bildt, 2012). At the same time, the resolution attracted criticism. For instance, Kettemann condemned the document for failing to provide the additional protections necessary to safeguard the freedom of expression in the digital era. For Kettemann, the framing of the right to freedom of expression was archaic: it followed existing free speech provisions to a fault, failing to realise that a new environment requires new safeguards (Kettemann, 2012).

The right to freedom of opinion and expression is provided first and foremost in Article 19 of the ICCPR. Under Article 19(3), such a right can be restricted, including for the protection of national security or public order, or public health or morals. Restrictions must be justified

17  European Commission. (2022, 11 May). A digital decade for children and youth: the new European strategy for a better internet for kids (BIK+), COM (2022) 212 final.
18  World Summit on the Information Society. (2003, 12 December). Geneva 2003–Tunis 2005, Plan of Action, Document WSIS-03/GENEVA/DOC/5-E.
19  UN Human Rights Council. (2012, 16 July). Resolution 20/8, The promotion, protection, and enjoyment of human rights on the Internet, UN Doc. A/HRC/RES/20/8.
20  UN Human Rights Council. (2011, 16 May). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Frank La Rue, UN Doc. A/HRC/17/27.
21  UN Human Rights Council. (2012, 16 July). Resolution 20/8, The promotion, protection, and enjoyment of human rights on the Internet, UN Doc. A/HRC/RES/20/8 (para 1).

according to a three-part test: any restrictions must be provided by law with sufficient precision, pursue one of the legitimate aims, and meet the tests of necessity and proportionality.22 While the HRC Committee adopted General Comment No. 34 on the implementation of Article 19 of the ICCPR in 2011, few of its paragraphs concern the freedom of opinion and expression online.23 While General Comment No. 34 recommends that states ensure that communication technologies and media broadcasters protect freedom of expression, particularly that of journalists, there is no further elaboration on how that could be achieved or on what specific factors might impede individuals' freedom in the digital sphere.24

In 2016, the UN Special Rapporteur on the freedom of opinion and expression, David Kaye, a successor to La Rue, published two alarming reports: one on the 'contemporary challenges to freedom of expression'25 and the other on the 'freedom of expression, states and the private sector in the digital age'.26 According to the Special Rapporteur, national laws adopt overly broad definitions of key terms such as 'national security' and 'hate speech'. This offers an unnecessarily broad discretion to executive authorities to decide what falls within the boundaries of freedom of expression.27 The second report continues the trend of discussing possible authoritarian tendencies, but this time the state is not the sole source of worry: private actors in the form of social media companies have taken centre stage.28 Private industries have tremendous influence in the digital realm, serving as mediators for online communication. As such, private companies should be assessed on how they both support and hinder freedom of speech. The Special Rapporteur observed that private actors should create 'transparent' procedures for assessing the human rights impacts of business activities.29 Such assessments should critically review all of the private actions, including, for instance, the making and enforcing of user policies affecting freedom of speech, the effect of products and services on users' freedom of speech during the development process, and any plans for different prices for, or access to, online content and services.30

On the basis of these cross-cutting reports, the Special Rapporteur on the freedom of opinion and expression has engaged with some specific issues as part of the analysis. Two issues are mentioned here: the surveillance industry, and 'hate speech'. With regard to surveillance, the Special Rapporteur's concerns regarding the role of private companies

22  For details, see, e.g. UN Human Rights Committee. (2011). General comment no. 34, Article 19, Freedoms of opinion and expression, UN Doc. CCPR/C/GC/34, paras 22–36; UN Human Rights Council. (2019, 28 May). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. Surveillance and human rights, UN Doc. A/HRC/41/35, para 24.
23  UN Human Rights Committee. (2011). General comment no. 34, Article 19, Freedoms of opinion and expression, UN Doc. CCPR/C/GC/34.
24  Ibid., paras 39–47.
25  UN General Assembly. (2016, 6 September). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression (Report on contemporary challenges to freedom of expression), UN Doc. A/71/373.
26  UN Human Rights Council. (2016, 11 May). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression (Report on freedom of expression, states and the private sector in the digital age), UN Doc. A/HRC/32/38.
27  UN General Assembly. (2016, 6 September). Report of the Special Rapporteur, UN Doc. A/71/373.
28  UN Human Rights Council. (2016, 11 May). Report of the Special Rapporteur, UN Doc. A/HRC/32/38.
29  Ibid., para 88.
30  Ibid., para 88.

were reiterated in a separate report focusing on surveillance industries.31 In this report, the Special Rapporteur pointed out the thriving industry of digital surveillance tools, which was characterised as operating 'unsupervised and with something close to impunity'.32 Given that existing mechanisms are ill-suited to addressing human rights violations caused by the cross-border transfer of surveillance tools, the Special Rapporteur recommended that states 'impose an immediate moratorium on the export, sale, transfer, use or servicing of privately developed surveillance tools' until the necessary safeguards are in place.33 While such a recommendation was met with scepticism at the governmental level, the UN Special Rapporteur's call for a moratorium was reiterated and endorsed by as many as 156 civil society organisations across the globe, following the Pegasus Project revelations on the sale of spyware (called Pegasus) to conduct surveillance on journalists, political opponents, and other individuals.34 The call for a moratorium was also reiterated by the OHCHR35 and shaped the narrative of the recommendations of the European Parliament's inquiry committee on the use of Pegasus and other spyware.36 As illustrated by the misuse of Pegasus, journalists and their freedom of expression are among the most affected by the development, sale, and use of intrusion software. The Special Rapporteur on the freedom of opinion and expression, Irene Khan, who succeeded Kaye, thus published her report focusing on media freedom and the safety of journalists in the digital age.37

On top of surveillance, another issue that the United Nations has engaged with is the regulation of so-called 'hate speech', whose definition has itself been a source of political controversy. While hate speech itself is nothing novel, the creation of the internet has brought unparalleled consequences to such speech, allowing for the mass online dissemination of hateful content (Dias, 2022). In 2019, the Special Rapporteur on the freedom of opinion and expression published his report on 'online hate speech'.38 The report encourages states to treat hate speech with the same degree of severity in both online and offline scenarios.39 Predictably,

31  UN Human Rights Council. (2019, 28 May). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. Surveillance and human rights, UN Doc. A/HRC/41/35.
32  Ibid., para 6.
33  Ibid., para 66.
34  Amnesty International et al. (2021, 27 July). Joint open letter by civil society organizations and independent experts calling on states to implement an immediate moratorium on the sale, transfer and use of surveillance technology.
35  E.g. UN Human Rights Council. (2022, 4 August). Report of the Office of the United Nations High Commissioner for Human Rights. The right to privacy in the digital age, UN Doc. A/HRC/51/17, para 56(g).
36  Having referred to the UN special rapporteurs' call for an immediate moratorium (in the recital), the European Parliament's inquiry committee listed the conditions that EU member states must fulfil by the end of 2023: PEGA Committee. (2023, 22 May). European Parliament Draft Recommendation to the Council and the Commission following the investigation of alleged contraventions and maladministration in the application of Union law in relation to the use of Pegasus and equivalent surveillance spyware (2023/2500(RSP)), Rapporteur Sophie in 't Veld, B9-0260/2023, recital AQ, and para 28.
37  UN Human Rights Council. (2022, 20 April). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Irene Khan. Reinforcing media freedom and the safety of journalists in the digital age. UN Doc. A/HRC/50/29.
38  UN General Assembly. (2019, 9 October). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. Online hate speech. UN Doc. A/74/486.
39  Ibid.

the spotlight is on social media companies, as the owners of the public fora where hate speech is often exercised. As such, the UN Special Rapporteur provided companies with a long to-do list: adhere to international human rights standards in evaluating how their services interact with human rights, adopt content policies to better target hate speech, offer more explicit definitions of what qualifies as hate speech, and ensure that the communities most affected by hate speech are involved in the process of identifying the tools to address such concerns.40 Apparently, the UN Special Rapporteur's calls were not adequately addressed by social media companies. In January 2023, more than two dozen UN-appointed independent human rights experts urged social media companies to 'urgently address posts and activities that advocate hatred and constitute incitement to discrimination, in line with international standards for freedom of expression'.41

Still, the reality of implementation delivers a painful blow: even if a company were to attempt to strictly self-regulate hate speech, it may face an insufficient workforce to assess possibly problematic content, the ineffectiveness of algorithms in completely removing hate speech, possible challenges raised by national laws, linguistic subtleties, and the sheer number of communities posting discriminatory content, all of which can slow down the elimination of hate speech. One of the United Nations' suggestions has therefore been to prioritise the regulation of certain types of hate speech, thus allowing for the limitation of the right to free speech based on context. This could be done by following the framework of the 'Rabat Plan of Action', which differentiates between types of prohibited expression:

expression that constitutes a criminal offence; expression that is not criminally punishable, but may justify a civil suit or administrative sanctions; expression that does not give rise to criminal, civil or administrative sanctions, but still raises concern in terms of tolerance, civility and respect for the rights of others.42
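Purely as an illustration of how a platform might encode this framework internally (and not as any official schema), the three Rabat tiers quoted above, together with the six contextual factors elaborated in the following paragraph, could be represented as simple data structures; all class and field names are invented:

```python
# Illustrative data structures only: the Rabat Plan's three tiers and
# six contextual factors, with names invented for this sketch.
from dataclasses import dataclass
from enum import Enum, auto

class RabatTier(Enum):
    CRIMINAL_OFFENCE = auto()         # expression that is criminally punishable
    CIVIL_OR_ADMINISTRATIVE = auto()  # may justify a civil suit or sanctions
    LAWFUL_BUT_CONCERNING = auto()    # lawful, yet raising concerns of tolerance

@dataclass
class RabatAssessment:
    context: str            # social and political context of the speech
    speaker_status: str     # public standing of the speaker
    intent: str             # e.g. whether incitement to violence was intended
    content_and_form: str   # what was said, and in what form
    extent: str             # reach of the speech (audience size)
    likelihood: str         # probability of inciting action against a group
    tier: RabatTier         # conclusion of the assessment
```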

Assessment of the seriousness of the hate speech is based on a six-part threshold comprising the following elements: the context in which the speech took place, the public status of the speaker, the intent of the speech (for instance, to incite violence), the exact content and form of the speech, the extent of the speech (based on audience size), and the likelihood of the speech inciting action against a specific group.43

A series of calls addressed to the private sector can further be translated into obligations for states to ensure that social media platforms are strict(er) concerning the content they allow. When it becomes clear that the respective content qualifies as hate speech under human rights standards (while the legal definition of hate speech is disputed, the United Nations understands it as

any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other

40  Ibid.
41  UN. (2023, 6 January). Freedom of speech is not freedom to spread racial hatred on social media: UN experts, https://www.ohchr.org/en/statements/2023/01/freedom-speech-not-freedom-spread-racial-hatred-social-media-un-experts
42  UN Human Rights Council. (2013, 11 January). Report of the United Nations High Commissioner for Human Rights on the expert workshops on the prohibition of incitement to national, racial or religious hatred, UN Doc. A/HRC/22/17/Add.4 (Appendix: Rabat Plan of Action), para 20.
43  Ibid., para 29.

words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor)[,]44

it should be promptly removed before it can spread or become viral (Medzini, 2022). This obligation also stems from the right to information, which mandates access to clear and trustworthy information online. Nevertheless, according to the Special Rapporteur on the freedom of expression, David Kaye, states should refrain from imposing obligations on platforms to remove specific pieces of prohibited content within a short or strict timeframe without prior judicial review, as that could violate freedom of expression.45 The Special Rapporteur on the same theme, Irene Khan, also expressed her concern about a stricter approach to content removal causing over-removal, which can in turn affect the freedom of expression.46 Furthermore, the inconsistent application of content removal policies can create opportunities for authorities to successfully pressure platforms to remove accounts critical of governmental actions.47

3.2 Equality and Non-Discrimination in the Digital Environment

As encapsulated by the words of UN Special Rapporteur E. Tendayi Achiume, which we referred to at the beginning of the chapter, the effect of digital technologies and networked spaces on human rights is by no means the same across different segments of societies. In fact, the Special Rapporteur observed that emerging digital technologies exacerbate existing inequalities based on race, ethnicity, and national origin. In this regard, the UN human rights system has reminded states of the applicability of equality and non-discrimination to the digital sphere. As enunciated by Article 2 of the UDHR and of the ICCPR, states have a duty to respect and ensure that their citizens benefit from the same rights without discrimination based on race, sex, religion, national or social origin, birth, or other status. While the UNHRC's Resolution 20/8 (2012) affirmed the applicability of equality and non-discrimination to online space, it did not offer an updated understanding of this principle in a digital world. In this sense, further normative guidance is provided through the work of the Committee on the Elimination of Racial Discrimination (CERD Committee)—which monitors the implementation of the International Convention on the Elimination of All Forms of Racial Discrimination (1966)—and the UN Special Rapporteur on contemporary forms of racism.

While non-discrimination in the digital environment is frequently addressed by the United Nations in various thematic contexts, we limit ourselves here to the United Nations' engagement with one of the cross-cutting issues: algorithmic biases. While algorithmic decision-making has the advantages of being time- and cost-efficient (Marabelli et al., 2021), it can perpetuate discrimination by reinforcing existing biases. This is the conclusion of a recent report authored by the CERD Committee. The Committee emphasises that increased reliance on algorithmic systems risks deepening rather than reducing

44  UN. (2018, 20 June). UN Strategy and Plan of Action on Hate Speech, p. 2.
45  UN Human Rights Council. (2018, 6 April). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. UN Doc. A/HRC/38/35, para 15.
46  UN Human Rights Council. (2021, 13 April). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Irene Khan. Disinformation and freedom of opinion and expression. UN Doc. A/HRC/47/25, paras 70, 71.
47  Ibid., paras 77, 78, and 79.

discrimination based on race or nationality and subsequently leads to human rights violations.48 For example, law enforcement's usage of historical arrest data can lead to the over-policing of a respective area, resulting in more arrests and a cycle of discrimination. Biases in algorithms can arise either 'accidentally', as a result of incomplete training data (for example, failing to train face recognition software on people of colour), or through reliance on information that reflects historical discrimination (the over-policing example).

Biases in algorithmic decision-making can also circumvent existing legislation by creating new categories for discrimination, as noted in legal scholarship. Algorithms can generate novel grounds for discrimination, such as browser type or IP address. Although these are not characteristics protected under human rights laws, different treatment based on them could still raise questions of unfairness (Gerards & Borgesius, 2022). Take, for example, 'personalised pricing': the practice of offering the same goods and services at different prices to different consumers, based on data regarding their address and past purchasing behaviour. While it raises concerns, especially from a consumer law perspective, this practice remains legal, with no clear prohibitive regulations in sight (Borgesius & Poort, 2017).

The CERD Committee warned that, if left unchecked, biased algorithms can lead to decisions that have a negative impact on certain groups of people even without the programmer's intention to discriminate. One of the solutions suggested by the CERD Committee is transparency. There is a need for more public disclosure of how such algorithms are built and deployed and of whether human rights protection measures are followed in both the creation and implementation of such software.49 This could be achieved by involving legal experts to assess the potential human rights impacts of such technologies before their adoption: a move from an ex-post approach, where the law must work to catch up and mend existing issues, to an ex-ante obligation to build technology that already incorporates human rights standards. While businesses are not bound by international human rights law, many human rights norms have been 'transposed' into domestic laws, which bind private actors. It is thus important for national governments to be active players in safeguarding human rights in the digital age by adopting legislation that mandates, for instance, transparency (e.g. through a company's publicly available reports) on how certain algorithms or software work.

48  Committee on the Elimination of Racial Discrimination. (2020, 17 December). General recommendation no. 36 (2020) on preventing and combating racial profiling by law enforcement officials, UN Doc. CERD/C/GC/36.
49  Ibid., para 61.
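To make the 'personalised pricing' example discussed above concrete, the following sketch shows how a facially neutral pricing rule can differentiate between consumers on unprotected signals that may act as proxies for protected characteristics. Every feature, weight, and threshold here is invented for the purpose of illustration:

```python
# Invented example: a facially neutral pricing rule built on proxy signals.
from dataclasses import dataclass

BASE_PRICE = 100.0

@dataclass
class CustomerSignals:
    postcode: str
    browser: str
    past_purchases: int

def quote(c: CustomerSignals) -> float:
    price = BASE_PRICE
    if c.postcode.startswith("90"):    # hypothetical 'affluent area' proxy
        price *= 1.15
    if c.browser == "Safari":          # device/browser as an income proxy
        price *= 1.05
    if c.past_purchases > 20:          # loyalty used to charge more
        price *= 1.10
    return round(price, 2)

print(quote(CustomerSignals("9012XX", "Safari", 25)))   # prints 132.83
```

Nothing in this rule mentions a protected ground, which is precisely why, as noted above, such practices tend to escape clear prohibitive regulation.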

3.3 The Right to Privacy in the Digital Era

It is fair to say that the advent of digital technology has brought fundamental challenges to the protection of the right to privacy, provided in Article 12 of the UDHR and Article 17 of the ICCPR as protection from unlawful interferences against 'privacy, family, home or correspondence'. To determine if interference with the right to privacy is justifiable, a tripartite test is understood to be applicable to Article 17. Namely, interference is permitted if (a) it is 'authorized by domestic law that is accessible and precise and that conforms to the requirements' of the ICCPR, (b) it pursues a legitimate aim, and (c) it meets the tests of necessity and proportionality.50

It is important to remember that Article 12 of the UDHR and Article 17 of the ICCPR are not the only privacy-related provisions within UN-sponsored human rights treaties. For example, the Convention on the Rights of Persons with Disabilities has its own provisions on the right to privacy as well as on 'data protection'.51

In the digital age, the most notable challenge posed to the respect for privacy is arguably the sheer increase in visibility. Much of how we communicate now leaves a digital trace: call logs, email archives, entire histories of texting, voice messages, and locations can be subjected to the newfound ability of government agencies to analyse and store mass communication data (Murray & Fussey, 2019). In 2018, the UN Special Rapporteur on the freedom of opinion and expression published a report on 'artificial intelligence technologies and implications for the information environment' which primarily focused on how artificial intelligence (AI) threatens the right to privacy.52 The data can be taken out of its original context and repurposed, which can lead to it becoming inaccurate and difficult to update or remove.53 Having depicted 'a troubling picture of how the right to privacy is being steadily undermined in the digital age', the OHCHR, in its August 2022 report, regarded the practices of pervasive surveillance and the resulting erosion of human rights and pluralistic democracies as 'profoundly alarming'.54

The erosion of the right to privacy—as well as responses to protect such a right—involves both private and public actors. Technological developments effectively enable social media companies, among others, to know users—and people connected to them—by tracking the users' online activity, creating a 'profile' based on the data, and then engaging in relentless content and advertisement suggestions (Buts, 2021). While this could lead to increased customer satisfaction with the platform's services (Chen et al., 2021), it raises significant concerns over the protection of the right to privacy. In view of the crucial role of business enterprises, it is not surprising that the recommendations of UN human rights mechanisms are regularly 'addressed' to business enterprises. For example, the UN High Commissioner for Human Rights, in the report concerning the right to privacy in the digital age, recommended that business enterprises '[m]ake all efforts to meet their responsibility to respect all human rights'

50  UN General Assembly. (2014, 23 September). Report of the Special Rapporteur on the promotion and protection of human rights and fundamental freedoms while countering terrorism. UN Doc. A/69/397, para 30.
51  Convention on the Rights of Persons with Disabilities. (2006, 13 December). 2515 UNTS 3, Articles 22 (respect for privacy), 31 (statistics and data collection); UN Human Rights Council. (2022, 28 December). Report of the Office of the United Nations High Commissioner for Human Rights. Statistics and data collection under article 31 of the Convention on the Rights of Persons with Disabilities. UN Doc. A/HRC/49/60.
52  UN General Assembly. (2018, 29 August). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression (Report on Artificial Intelligence technologies and implications for freedom of expression and the information environment), UN Doc. A/73/348.
53  Ibid.
54  UN Human Rights Council. (2022, 4 August). Report of the Office of the United Nations High Commissioner for Human Rights. The right to privacy in the digital age, UN Doc. A/HRC/51/17, para 54.

according to the UN Guiding Principles on Business and Human Rights (UNGPs),55 and to conduct systematic human rights assessment and monitoring of the outputs of AI systems.56

That being said, states remain the primary addressees of the United Nations' efforts to promote the right to privacy. In the digital era, if a state organisation wishes to obtain information about participants in a protest, rather than manually sorting through audio-visual material, it could access en masse communications such as phone calls, messages, emails, or browsing history to reveal those who were at the location of the demonstration (Murray & Fussey, 2019). Such technological power rings an alarm bell over the misuse of such data against citizens who pose no justifiable threat to national security. In a regional context, the European Court of Human Rights (ECtHR) ruled in Big Brother Watch and others v The United Kingdom that bulk surveillance without sufficient oversight mechanisms or safeguards violates the right to privacy.57 At the same time, the ECtHR established in Centrum för Rättvisa v Sweden that the mere existence of such surveillance mechanisms, or of laws enabling them, may not endanger the right to privacy. For surveillance to be harmful to human rights, either the applicant must have been directly impacted, for example by having had their communications targeted, or the domestic legal regime must offer no remedies to victims of surveillance.58 These two ECtHR judgments relating to the activities of intelligence services have been cited by the United Nations' Special Rapporteur on the right to privacy,59 on the basis of their 'potential worldwide impact' on the interpretation of the right to privacy.60 The Special Rapporteur then expressed his support for 'the strict application of the tests of proportionality and necessity in a democratic society', not only in the regional context, but also as a global model.61

UN High Commissioner for Human Rights Michelle Bachelet noted the possibility of the misuse of surveillance technologies when discussing the aforementioned 'Pegasus' case.62 Concerns were raised regarding police usage of the software to obtain bulk phone data of high-profile individuals such as politicians, journalists, or ambassadors in the United Arab Emirates, Saudi Arabia, Spain, Poland, Panama, Netherlands, Mexico, Morocco, India, Israel, Germany, Hungary, Bahrain, Azerbaijan, and Armenia.63 While the protection of personal data may not be an autonomous right, it is necessary in order to enjoy the right to privacy, in the sense of respect for private and family life, home and correspondence.64 Access to private information about individuals can lead to their arrest, intimidation, or assassination, and can

55  United Nations. (2011). 'Guiding principles on business and human rights: Implementing the UN "protect, respect and remedy" framework', HR/PUB/11/04 (2011).
56  UN Human Rights Council. (2021, 13 September). Report of the UN High Commissioner for Human Rights. The right to privacy in the digital age, UN Doc. A/HRC/48/31, para 61.
57  Big Brother Watch and others v The United Kingdom App nos 58170/13, 62322/14 and 24960/15 (ECtHR, 25 May 2021).
58  Centrum för Rättvisa v Sweden App no 35252/08 (ECtHR, 25 May 2021).
59  UN Human Rights Council. (2019, 16 October). Report of the Special Rapporteur on the right to privacy. Right to privacy, UN Doc. A/HRC/40/63, para 29.
60  Ibid., para 30.
61  Ibid., para 31.
62  OHCHR. (2021, July). Use of spyware to surveil journalists and human rights defenders, Statement by UN High Commissioner for Human Rights Michelle Bachelet.
63  Council of Europe. (2022, April). Pegasus Spyware and its impact on human rights. Information Society Department DGI(2022)04.
64  Ibid.

lead to self-censorship out of fear of retaliation, which can prevent human rights activists and journalists from efficiently challenging governmental injustices.65 The consequence of this is an overall unequal protection of privacy, as well as of other rights, at the global level, as the United Nations has signalled in various reports. An optimistic argument can be made that these reports, albeit formally non-binding in themselves, can influence the actions of the UNHRC and subsequently those of states by publicly evaluating how countries have dealt with human rights issues in the digital sphere. As a result, non-binding reports can be the first step in the creation of binding national legal instruments, which could become binding on private actors as well. Furthermore, recommendations can feed into the interpretation of treaty provisions, facilitate the development of customary international law, and thereby create obligations at a global level. However, the apparent limitation of such reports, if they remain soft law sources, is that they are not immediately linked to a mechanism to regulate current actors, prevent possible abuses, or design efficient remedies for those whose rights have been affected by technologies, as was the case in the Pegasus scandal.

4. BUSINESSES' RESPECT FOR INTERNATIONAL HUMAN RIGHTS LAW

While the UN human rights system has produced a number of instruments to promote human rights in the digital sphere, primarily by applying and reinterpreting existing rights, one of the common challenges is the observance of international human rights law by private corporations. The application of international human rights law in the digital environment creates considerable tension within its own framework, in part because the enjoyment of digital rights often depends on the conduct of private companies (Shany, 2023). This signals a shift in balance, where some governments battle for regulatory authority with powerful private actors such as corporations (Smith, 2021). Although social media platforms appear as much-needed public fora that maintain free discourse in society, their legal status as private service providers affords them the freedom to govern according to commercial rather than public interests (Balkin, 2014). As the UNHRC's advisory committee noted, it is 'impossible to understate the role of the private sector' in the promotion and protection of human rights in the digital age.66

Within the UN human rights system we examined, the role of corporations and the need for regulation have been repeatedly mentioned by UN experts and bodies. To recall a few examples mentioned in this chapter, the United Nations' Special Rapporteur on the right to education regarded the 'massive imbalance in power' between technology providers and users as detrimental to freedom and equality.67 The CRC Committee reiterated states' duty to protect

65  OHCHR. (2021, 19 July). Use of spyware to surveil journalists and human rights defenders, Statement by UN High Commissioner for Human Rights Michelle Bachelet.
66  UN Human Rights Council. (2021, 19 May). Report of the Human Rights Council Advisory Committee. Possible impacts, opportunities and challenges of new and emerging digital technologies with regard to the promotion and protection of human rights, UN Doc. A/HRC/47/52, para 56.
67  UN Human Rights Council. (2022, 19 April). Report of the Special Rapporteur on the right to education, Koumbou Boly Barry. Impact of the digitalization of education on the right to education, UN Doc. A/HRC/50/32 (para 62).

children from infringements of their rights by business enterprises.68 The Special Rapporteur on the freedom of expression regarded the digital surveillance industry as operating with 'something close to impunity'.69

Just like many other fields of international law, international human rights law has developed primarily to regulate the conduct of states, as opposed to the conduct of non-state actors. While states have a certain due diligence obligation to protect the rights of individuals against deprivations that are caused by non-state actors, this is not the same as corporations having an obligation under international human rights law. International standards have developed regarding corporations, primarily through formally non-binding international instruments. Particularly notable are the UNGPs,70 which have been described as an 'authoritative' global framework.71 According to the UNGPs, 'all business enterprises' have the 'responsibility to respect human rights' as a 'global standard of expected conduct', regardless of where they operate.72 The UNGPs expect business enterprises to exercise human rights due diligence to identify, prevent, mitigate, and account for the way they address adverse human rights impacts. The formulation of due diligence under the UNGPs is broad enough to be applicable to internet service providers, corporations that offer social media platforms, and a wide range of technology companies that develop, sell, transfer, and use hardware, software, and digital infrastructure. The UN Secretary-General's report also observed that the human rights due diligence requirement under the UNGPs applies to business enterprises active 'in the areas critical for the realization of economic, social and cultural rights such as smart cities, health and education services'. While UNGP-mandated responsibilities are not equivalent to binding obligations, the UNGPs are frequently invoked by the United Nations itself in order to encourage business enterprises to respect all human rights. For example, the Special Rapporteur on the right to privacy, Joseph A. Cannataci, reiterated his recommendation that 'states and non-state parties' implement the UNGPs, together with gender considerations.73 Despite this lack of a binding character, the UNGPs can be incorporated into international and national legal frameworks that create obligations on corporations.
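As a minimal sketch, assuming nothing beyond the UNGPs' own vocabulary, the due-diligence cycle (identify, prevent, mitigate, account) could be modelled as a simple record that a technology company keeps per product; the field names and example entries are hypothetical, not drawn from the UNGPs or any official template:

```python
# Hypothetical internal record mirroring the UNGP due-diligence cycle.
from dataclasses import dataclass, field

@dataclass
class HumanRightsDueDiligence:
    product: str
    impacts_identified: list[str] = field(default_factory=list)   # identify
    prevention_measures: list[str] = field(default_factory=list)  # prevent
    mitigations: list[str] = field(default_factory=list)          # mitigate
    public_reporting: str = ""                                    # account

record = HumanRightsDueDiligence(
    product="content recommendation engine",
    impacts_identified=["amplification of discriminatory content"],
    prevention_measures=["pre-launch bias audit"],
    mitigations=["user appeal and redress channel"],
    public_reporting="annual transparency report",
)
print(record.product, "->", record.impacts_identified)
```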

68  Committee on the Rights of the Child. (2021, 2 March). General comment no. 25 (2021) on children's rights in relation to the digital environment, UN Doc. CRC/C/GC/25, para 37.
69  UN Human Rights Council. (2019, 28 May). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. Surveillance and human rights, UN Doc. A/HRC/41/35, para 6.
70  United Nations. (2011). 'Guiding principles on business and human rights: Implementing the UN "protect, respect and remedy" framework', HR/PUB/11/04.
71  UN Human Rights Office of the High Commissioner. (2021, 17 June). An Authoritative Global Framework on Business and Human Rights Turns 10. Retrieved from https://www.ohchr.org/en/stories/2021/06/authoritative-global-framework-business-and-human-rights-turns-10
72  United Nations. (2011). 'Guiding principles on business and human rights: Implementing the UN "protect, respect and remedy" framework', HR/PUB/11/04, p. 13.
73  UN General Assembly. (2022, 23 July). Report of the Special Rapporteur on the right to privacy, Joseph A. Cannataci. UN Doc. A/76/220, para 101; UN Human Rights Council. (2019, 23 May). Report of the Working Group on the issue of human rights and transnational corporations and other business enterprises. Gender dimensions of the Guiding Principles on Business and Human Rights, UN Doc. A/HRC/41/43.

In the absence of applicable regulations, one may argue that it is realistic to rely on corporate 'self-regulation' in order to facilitate the integration of international human rights norms into corporate decision-making. Business enterprises—especially major social media platforms and big tech companies—may be inclined to provide their own input for the interpretation of international human rights law in the digital sphere. Corporate self-regulation certainly has its benefits. Self-regulation may be perceived as a fast and flexible alternative compared to more traditional government-led regulation (Puppis, 2010). However, voluntary industry-level self-regulation may also be viewed as lacking legitimacy and accountability, and can often be a move designed simply to project an image of compliance while lacking substantive impact. The concern is that when a platform self-regulates, public interests are given less weight and protection, which results in the underachievement of the objectives that the policies are meant to support (Medzini, 2022). Furthermore, the decision-making processes of the company may remain opaque, or altogether inaccessible. As it currently stands, social media companies are 'enigmatic regulators' that develop obscure 'platform law' that fails to fully comply with human rights norms.74 Company-specific rules can reduce consumer protection and the overall exercise of human rights, perhaps more so than government-mandated regulations. Where public authorities fail to adequately protect consumers' interests, consumers may be left without proper access to remedies in cases of abuse (Baldwin et al., 2011). Moreover, the will to self-regulate is not necessarily matched by the capacity to do so. Smaller companies might avoid self-regulation because they lack the financial or technological resources necessary to design, implement, monitor, and enforce self-regulatory rules (Medzini, 2022).

Within the self-regulation debate, Facebook's 'Oversight Board' offers one of the best-known subjects of study. The Board acts as a final instance: it reviews Facebook's decisions to delete potentially problematic user content and either upholds or overturns the removal. Like a supreme court, the Board has 'judges', many of whom are active in legal scholarship or practice (Klonick, 2020). Unlike a supreme court, however, the Board keeps its reasoning opaque, issuing short judgements that are not supported by references to case law or to regulatory instruments other than Facebook's own policies, and providing little information on the appeal process or on case selection (ibid.). Since there is no prior precedent of a social media company creating its own supreme court-like organ, it is difficult to determine its desirability. Certainly, nothing in international human rights law prohibits corporate self-regulation. While self-governance can facilitate larger regulatory aims, there is a concern that such behaviour sets a precedent that allows corporations to remove themselves from the realm of human rights law obligations. Should any company have the right to be both judge and jury of its own actions? The regulatory issue is further complicated by the multiple prongs of human rights: when discussing technology in a legal context, freedom of expression is only one of many rights at stake.

5. LIMITATIONS OF THE UN HUMAN RIGHTS SYSTEM

As suggested by the previous sections, the United Nations' Charter-based bodies and treaty bodies engage with the promotion of human rights in the digital sphere. Overall, the UN

74  UN Human Rights Council. (2018, 6 April). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. UN Doc. A/HRC/38/35, para 1.

bodies tend to emphasise the resilience of the existing catalogues of human rights under international human rights law in the digital era. That being said, the United Nations, as an intergovernmental organisation, cannot escape interstate politics, budgetary constraints, and bureaucratic processes. Moreover, the membership of the UNHRC includes states that are known to have systematically oppressed political opponents, political activists, and journalists. The influence of political and ideological divisions may be most evident at the UNHRC and the UN General Assembly, in which political negotiations are inevitable in order to pass a resolution. For example, the UNHRC's Resolution 44/12 of 16 July 2020 on the freedom of opinion and expression75 was criticised in a study for the European Parliament on the ground that the resolution did not refer to surveillance technologies and failed to address their chilling effect on freedom of expression (Głowacka et al., 2021, p. 5). Besides these constraints and characteristics at the United Nations' level, the effectiveness of the United Nations' recommendations is subject to each state's national legislative and political environment. As highlighted in the previous sections of this chapter, the UN human rights system's recommendations ultimately rely upon states' context-dependent willingness to respond to UN-level critiques and, if needed, to adopt new legislation to prevent human rights abuses in the digital sphere. At the same time, the degree of political influence varies across the bodies within the UN human rights system. In contrast to the UNHRC itself, the mandate holders of Special Procedures (such as Special Rapporteurs) are supposed to be independent and are not expected to engage in complex political negotiations at the United Nations before releasing their findings. Furthermore, unlike UN treaty-monitoring bodies, the mandate holders of the Special Procedures do not need to seek consensus (or sufficient votes) in order to publish their reports. Such flexibility allows Special Rapporteurs to express factual and legal observations that might not have been possible had they been required to seek wider political or legal consensus beforehand. For example, in 2019, the UN Special Rapporteur on the freedom of expression called for an immediate moratorium on the sale and use of surveillance tools—which may not have been possible as an outcome of political consensus at the UNHRC. In the end, the Special Rapporteur's call for a moratorium was echoed by other UN mechanisms, endorsed by a wide range of NGOs, and shaped the narrative of the European Parliament's inquiry, as noted earlier in this chapter. At any rate, the effectiveness of the UN human rights system in promoting human rights in the digital age, like that of many other activities of the United Nations and other international organisations, cannot be assessed on a short-term view. While the recommendations of UN Special Procedures and those of UN treaty bodies may not immediately lead to changes in the conduct of states and non-state actors, their findings can be referred to by civil society organisations, business enterprises, the European Union and other international organisations, and governmental organs in order to strengthen national and international initiatives for protecting rights in the digital era.

75  UN Human Rights Council. (2020, 24 July). Resolution 44/12 adopted by the Human Rights Council on 16 July 2020. Freedom of opinion and expression. UN Doc. A/HRC/RES/44/12.


6. CONCLUSION

It was in 2012 that the UNHRC recognised the applicability of international human rights norms to the digital sphere. Over the years, the United Nations' human rights system has increased its engagement with digital human rights. Among a wide range of mechanisms, the UNHRC's Special Procedures play a unique role, partly because of the relative flexibility of mandate holders, who work in their personal capacity as experts. While the UNHRC's Special Procedures create a venue for advancing normative claims and policy options for facilitating the protection of rights in the digital era, their reports are not intended to have the same normative weight as, for instance, the General Comments and Recommendations adopted by UN treaty-monitoring bodies in the interpretation of human rights. The general findings of UN treaty bodies carry weight in states' and stakeholders' interpretation of the relevant human rights treaties. In fact, the International Court of Justice in Diallo found it necessary to ascribe 'great weight' to the interpretation of the Human Rights Committee in construing the ICCPR.76 Such weight should thus also be given to UN treaty bodies' interpretation of rights in the digital environment—such as the interpretation of children's rights in the digital era under General Comment No. 25 of the CRC Committee (2021). The findings of the UNHRC's Special Procedures and the instruments adopted by UN treaty bodies are generally considered non-binding at the international level, although they are not devoid of legal effect (Kanetake, 2021). Yet their formal non-bindingness does not negate the normative impact of the UN human rights system on the development of international human rights law in the digital environment. The Special Rapporteurs' observations may be relied upon by states, other international organisations, and civil society organisations in fostering policy changes. UN treaty bodies' General Comments and Recommendations can be consulted and referred to by domestic courts, which may seek additional guidance in applying human rights to the digital sphere. Overall, the United Nations—through the initiatives of the General Assembly, UNHRC, Secretariat, and UN treaty bodies, just to name a few—has been demonstrating that existing international human rights law is flexible enough to respond to many, if not all, challenges arising out of digital technologies. At the same time, one of the underlying limitations of international human rights law remains its application to businesses, which is only partially remedied by the non-binding responsibility to respect human rights under the UNGPs applicable to all business enterprises. Furthermore, the United Nations appears reluctant to engage with 'new' rights that have developed uniquely in the digital society—such as the right to be forgotten and the right not to be subject to automated decisions. While the United Nations seems to have faith in the resilience and adaptability of international human rights law in the digital age, the institution's contribution should also be assessed by the degree to which it welcomes the emergence of new paradigms and normative frameworks to respond to the challenges posed by digital technologies.

76  Application Instituting Proceedings filed in the Registry of the Court on 28 December 1998, Ahmadou Sadio Diallo (Republic of Guinea v DRC), para 66.


BIBLIOGRAPHY

Baldwin, R., Cave, M. & Lodge, M. (2011). Understanding regulation: Theory, strategy, and practice. Oxford: Oxford University Press.
Balkin, J.M. (2014). Old-school/new-school speech regulation. Harvard Law Review, 127(8), 2296–2342.
Benedek, W. (2019). 'International organizations and digital human rights' in B. Wagner and M.C. Kettemann (eds), Research Handbook on Human Rights and Digital Technology (pp. 364–375). Cheltenham: Edward Elgar Publishing.
Bildt, C. (2012). Opinion: A victory for the Internet. NYTimes. Retrieved from https://www.nytimes.com/2012/07/06/opinion/carl-bildt-a-victory-for-the-internet.html
Buts, J. (2021). Targeted individuals: Personalised advertising and digital media translation. Translation Spaces, 10(2), 181–201.
Chen, X., Sun, J. & Liu, H. (2021). Balancing web personalization and consumer privacy concerns: Mechanisms of consumer trust and reactance. Journal of Consumer Behaviour, 21(3), 572–582.
Dias, T. (2022). Tackling online hate speech through content moderation: The legal framework under the International Covenant on Civil and Political Rights. Retrieved from https://www.elac.ox.ac.uk/wp-content/uploads/2022/07/Dias-Tackling-Online-Hate-Speech-through-Content-Moderation-Working-paper-1.pdf
Donnelly, J. (2022). Universal human rights in theory and practice (2nd ed.). Ithaca: Cornell University Press.
Gerards, J. & Borgesius, F. (2022). Protected grounds and the system of non-discrimination law in the context of algorithmic decision-making and artificial intelligence. Colorado Technology Law Journal, 20(1), 1–56.
Głowacka, D., Youngs, R., Pintea, A. & Wołosik, E. (2021). Digital technologies as a means of repression and social control. European Parliament. Retrieved from https://www.europarl.europa.eu/thinktank/en/document/EXPO_STU(2021)653636
Hicks, P. et al. (2021). Press briefing: Online content moderation and internet shutdowns. UN Human Rights Office. Retrieved from https://www.ohchr.org/Documents/Press/Press%20briefing_140721.pdf
Kanetake, M. (2021). 'Giving due consideration: a normative pathway between UN human rights treaty-monitoring bodies and domestic courts' in Nico Krisch (ed.), Entangled Legalities Beyond the State (pp. 133–161). Cambridge: Cambridge University Press.
Kettemann, M. (2012). The UN Human Rights Council Resolution on Human Rights on the Internet: Boost or Bust for Online Human Rights Protection. Human Security Perspectives, 1(3), 145–169.
Klonick, K. (2020). The Facebook Oversight Board: Creating an Independent Institution to Adjudicate Online Free Expression. Yale Law Journal, 129, 2418–2499.
Marabelli, M., Newell, S. & Handunge, V. (2021). The lifecycle of algorithmic decision-making systems: Organizational choices and ethical challenges. The Journal of Strategic Information Systems, 30(3), 1–15.
McKernan, B. (2022, February 22). Police use of Pegasus malware not illegal, Israeli inquiry finds. The Guardian. Retrieved from https://www.theguardian.com/world/2022/feb/22/police-use-of-pegasus-malware-not-israeli-inquiry-finds
Medzini, R. (2022). Enhanced self-regulation: The case of Facebook's content governance. New Media & Society, 24(10), 2227–2251.
Murray, D. & Fussey, P. (2019). Bulk surveillance in the digital age: Rethinking the human rights law approach to bulk monitoring of communications data. Israel Law Review, 52(1), 31–60.
O'Flaherty, M. (2012). Freedom of expression: Article 19 of the International Covenant on Civil and Political Rights and the Human Rights Committee's general comment no 34. Human Rights Law Review, 12(4), 627–654.
Pegg, D. & Cutler, S. (2021). What is Pegasus spyware and how does it hack phones? The Guardian. Retrieved from https://www.theguardian.com/news/2021/jul/18/what-is-pegasus-spyware-and-how-does-it-hack-phones
Pinto, M. (2018). 'Special procedures: human rights council' in A. Peters and M. Sólveigardóttir (eds), The Max Planck Encyclopedias of Public International Law. Oxford: Oxford University Press.

Puppis, M. (2010). Media governance: A new concept for the analysis of media policy and regulation. Communication, Culture, and Critique, 3(2), 134–149.
Shany, Y. (2023). Digital rights and the outer limits of international human rights law. German Law Journal, 24(3), 461–472.
Smith, A. (2021). Freedom of expression and social media: How employers and employees can benefit from speech policies rooted in international human rights law. Indiana International & Comparative Law Review, 32(4), 629–657.
Subedi, S.P. (2011). Protection of Human Rights through the Mechanism of UN Special Rapporteurs. Human Rights Quarterly, 33(1), 201–228.
Takata, H. & Hamamoto, S. (2023). 'Human rights, treaty bodies, general comments/recommendations' in A. Peters and M. Sólveigardóttir (eds), The Max Planck Encyclopedias of Public International Law. Oxford: Oxford University Press.
Zuiderveen Borgesius, F.J. & Poort, J. (2017). Online price discrimination and EU data privacy law. Journal of Consumer Policy, 40, 347–366.

16. Legal principles and technology at the intersection of energy, climate, and environmental law

Leonie Reins

1. INTRODUCTION

This contribution proceeds from three distinct but related observations that concern the intersection of energy, climate, and environmental law (ECEL) and technology regulation. The first is that climate change is one of the most difficult challenges of this century: as of December 31, 2021, it had led to 2,306 local climate emergencies being declared in 40 countries around the globe.1 It has further been described as a "super wicked problem" (see Section 2 below). The second premise of this contribution concerns the timing of regulatory intervention in ECE technologies. On the one hand, it can be argued that several energy-, climate-, and environment-related technologies are developing rapidly and that regulators are increasingly struggling to keep pace (see Section 2 below). On the other hand, it can also be stated that "law came before technology" and that it is the law that "pushes" technologies to become competitive. For example, the advances in solar and wind power, and the current push toward electrifying transport, have been largely driven by law or, put more broadly, by government action. Without the Obama administration's substantial post-2008 spending in this area, the current levels of efficiency would not have been achieved;2 similarly, the current expenditure by the Biden administration and the European Union appears to be driving much of the present innovation and adoption.3 These are not market-driven technologies; they are largely "commissioned" by the same lawmakers who regulate them. The third point of departure of the present contribution is the fragmentation of ECEL (Biermann et al., 2010). Energy law, climate law, and environmental law increasingly overlap in practice. Likewise, the legal questions that arise in the three domains are tightly interlocked, which sometimes results in conflict and inconsistency (Woolley, 2021). If these observations are to be integrated, which may present a starting point for overcoming some of the underlying challenges, the organizing notions of ECEL, that is, its core principles, must be re-examined. It is often claimed that each of these subdisciplines has, or should have, its own set of regulatory principles (see Section 4). This contribution is intended as an overview of the different organizing principles, of the interactions between them, and of their implications

1  See Global cedamia. (2023, January 2). Climate Emergency Declaration and Mobilisation in Action. Retrieved from https://www.cedamia.org/global/
2  See The White House. (2016). "FACT SHEET: Obama Administration Announces Clean Energy Savings for All Americans Initiative." Retrieved from https://obamawhitehouse.archives.gov/the-press-office/2016/07/19/fact-sheet-obama-administration-announces-clean-energy-savings-all
3  See The White House. (2023). Joint Statement by President Biden and President von der Leyen. Retrieved from https://www.whitehouse.gov/briefing-room/statements-releases/2023/03/10/joint-statement-by-president-biden-and-president-von-der-leyen-2/


for the regulation of technology. In order to advance the argument, Section 2 elaborates on the three distinct starting points. Section 3 reviews the origins, the characteristics, and the specificities of energy law, climate law, and environmental law. Section 4 discusses the ECEL principles that matter for technology regulation. Section 5 zooms in on a specific principle, sustainable development, and explains its role in technology regulation. Section 6 concludes.

2. CURRENT TRENDS AND CHALLENGES IN CLIMATE, ENERGY, AND ENVIRONMENTAL LAW

2.1 Climate Change as the Challenge of the 21st Century and a "Super Wicked Problem"

In its Sixth Assessment Report, the Intergovernmental Panel on Climate Change (IPCC) reported that global net anthropogenic greenhouse gas (GHG) emissions had continued to rise across all major sectors of the economy between 2010 and 2019.4 The IPCC also found that the regional distribution of global GHG emissions was uneven and that the variance was linked to stages of development and to income levels. However, the variance within income levels was also wide.5 Scientists and decision makers generally agree that climate change and its adverse impacts need to be addressed. However, the exact means by which this objective ought to be attained remain the subject of debate. The Paris Agreement on Climate Change, adopted on December 12, 2015, by the Parties to the United Nations Framework Convention on Climate Change, represents the joint commitment of all nations to a common cause: to undertake ambitious efforts to mitigate climate change, as well as to adapt to its effects globally (Reins, 2021). It codified the 2-to-1.5°C target—all Contracting Parties (CPs) are required to restrict the increase in global temperatures to well below 2°C and to strive to limit it to 1.5°C above preindustrial levels (for an in-depth discussion, see Ghaleigh, 2021). Several countries and jurisdictions, including the European Union, which has published a Climate Law (Regulation) (European Climate Law, 2021), China, and the United States, have adopted targets of net zero emissions by or around mid-century (p. 24).6 However, the current pace of emissions reduction and the Nationally Determined Contributions (NDCs) that the CPs to the Paris Agreement have set mean that this objective is unlikely to be met by 2050. The same is true of the interim target of halving global emissions by 2030.7 Implementing the current NDCs would still leave the world on track for warming of between 2.4 and 2.6°C (p. XVI).8 The Paris Agreement target can only be

4  See Intergovernmental Panel on Climate Change, Working Group III contribution to the Sixth Assessment Report. (2022, March). Climate Change 2022: Mitigation of Climate Change. Retrieved from https://www.ipcc.ch/report/ar6/wg3/
5  See Intergovernmental Panel on Climate Change, Working Group III contribution to the Sixth Assessment Report. (2022, March). Climate Change 2022: Mitigation of Climate Change. Retrieved from https://www.ipcc.ch/report/ar6/wg3/
6  See United Nations Environment Programme. (2022). Emissions Gap Report 2022. Retrieved from http://www.unep.org/resources/emissions-gap-report-2022
7  See United Nations Environment Programme. (2022). Emissions Gap Report 2022. Retrieved from http://www.unep.org/resources/emissions-gap-report-2022
8  See United Nations Environment Programme. (2022). Emissions Gap Report 2022. Retrieved from http://www.unep.org/resources/emissions-gap-report-2022

reached if the existing commitments are expanded or if emission reduction technologies are developed; in an ideal world, these two developments would occur in tandem. Article 4 of the Paris Agreement stipulates that in order to achieve the long-term temperature goal … [the] Parties aim … to undertake rapid reductions … so as to achieve a balance between anthropogenic emissions by sources and removals by sinks of greenhouse gases in the second half of this century.

In line with the principle of state sovereignty, which is included in the Preamble to the United Nations Framework Convention on Climate Change (UNFCCC; see also Section 3 below), the provision does not prescribe specific methods and tools for reaching net zero emissions (Mayer, 2021, p. 112). Nevertheless, the temperature target in Article 2 and its explanation in Article 4 reflect an "implicit recognition of the need for negative emission technologies" (Mayer, 2021, p. 112) such as carbon capture, storage, and utilization (Intergovernmental Panel on Climate Change Working Group III contribution to the Sixth Assessment Report, 2022). Havercroft and Singh Ghaleigh phrased the same point more bluntly: "Remember: No CCS = No 2°C" (Havercroft & Singh Ghaleigh, 2018, p. 31). Furthermore, new technologies might be necessary to balance "continuing emissions by sources in certain sectors" (Mayer, 2021, p. 112). Climate change has been characterized as a super wicked problem, an extension of Rittel and Webber's theory of (simple) wicked problems (Rittel & Webber, 1973). According to Levin and colleagues, super wicked problems have four key features: [1] time is running out; [2] those who cause the problem also seek to provide a solution; [3] the central authority needed to address them is weak or non-existent; and [4] irrational discounting occurs that pushes responses into the future. (Levin et al., 2012)

As far as the temporal dimension of the problem is concerned, the consequences of climate change become more significant and manifest more frequently as time passes—2022 once more saw numerous records for heatwaves, wildfires, and droughts broken. As Levin et al. (2012) put it, "Significant impacts will occur; with each passing year, they become more acute; and if we do not act soon, the risk of harm to human communities and ecosystems, as well as non-linear change and catastrophic events, increases" (Levin et al., 2012). In the case of climate change, those who cause the problem are also, at least in part, those who seek to provide solutions. Few doubt that climate change is human-made. The sectors that contribute most to climate change include energy, transportation, and industry (Reins & Verschuuren, 2022, p. 10). At the same time, both natural and legal persons try to solve the problem. Climate and environmental law are characterized by a collective action problem (see Section 3 below)—the effects manifest locally but demand global solutions. The tragedy of the commons also plagues these areas of law (Fisher, 2017). Governance entails action at levels ranging from the international to the local. There is no centralized environmental authority, and no single international organization is in charge of the matter. At least on the international level, the outputs of climate-change lawmaking procedures exhibit certain environmental-law characteristics, such as a preference for compliance procedures over litigation. This domain has been called "something of a laboratory for international law" in which "highly informal means" are employed (Klabbers, 2017, pp. 279 and 292). Even on more local, that is, national, levels, "Decision makers within public authorities do not control all

the choices required to alleviate pressures on the climate" (Levin et al., 2012). The tendency to push responses into the future is reflected in the time horizons of the climate targets—net zero need only be attained by 2050. The IPCC noted, in this regard, that Global GHG emissions in 2030 associated with the implementation of NDCs would make it likely that warming will exceed 1.5°C during the 21st century. Likely limiting warming to below 2°C would then rely on a rapid acceleration of mitigation efforts after 2030. (Intergovernmental Panel on Climate Change Working Group III contribution to the Sixth Assessment Report, 2022, p. 14)

2.2 Timing in Regulatory Intervention and Path Dependency The second premise of this contribution is related to the timing of regulatory intervention within ECE technologies. On the one hand, it can be observed that energy-, climate-, and environment-related technologies are developing rapidly and that regulators are increasingly struggling to keep pace. Both the technologies in question and their regulation are shaped by significant path dependencies. The Collingridge Dilemma is salient. Collingridge wrote that [a]ttempting to control a technology is difficult, and not rarely impossible, because during its early stages, when it can be controlled, not enough can be known about its harmful social consequences to warrant controlling its development. By the time these consequences are apparent, control has become costly and slow. (Collingridge, 1980, p. 19)

In the ECEL context, one can point to several technologies for which the dilemma holds. Some of those technologies, such as hydrogen or hydraulic fracturing, are, or until recently were, labeled as "emerging" and have already been deployed in some parts of the world. Others, such as solar radiation management and carbon dioxide removal, entail high risks and high rewards. Their mooted rollouts excite controversy in many states at present (Hester & Williams, 2022). As far as these high-risk-high-reward technologies are concerned, there is currently "insufficient, conflicting or confusing data about [their] nature and impact" (Laurie et al., 2012). This lack of information also leads to legal uncertainty. Regulation also lags behind for the former group of technologies, which are already being deployed. In the European Union, for example, the deployment of hydrogen has largely occurred within a framework in which regulatory sandboxes predominate (Mete & Reins, 2020). It took several decades for the unit costs of simpler low-emissions technologies, such as wind, solar, and lithium batteries, to fall, and "tailored policies and comprehensive policies addressing innovation systems have helped overcome the distributional, environmental and social impacts potentially associated with global diffusion of low-emission technologies" (Intergovernmental Panel on Climate Change Working Group III contribution to the Sixth Assessment Report, 2022, p. 11). However, for some technologies at least, the law has "pushed" them onto the market and has created incentives for their deployment. Some of the regulatory initiatives that comprise the European Green Deal can be classified as having such a function, for example, the transition to zero-emission vehicles or developments in energy efficiency.9

9  See European Commission, The European Green Deal, Brussels, December 11, 2019, COM(2019) 640.

Beyond timing and the Collingridge Dilemma, the regulation of ECEL technologies is also path-dependent. Kirk and colleagues wrote of path dependency as the defining feature of situations in which "the option most likely to be chosen is that which most closely resembles existing practice or previous choices" (Kirk et al., 2007). Technology regulation in the energy sector has been widely described as problem-based. Accordingly, energy law, as a discipline, is largely concerned with the regulation of problems (Huhta, forthcoming). At least in the European Union, lawmaking in that domain involves legislative acts and strategies that pertain to individual technologies (such as the CCS Directive and the Hydrogen Strategy) rather than a comprehensive framework that is underlain by principles and animated by common objectives (see also Section 4). Disrupting this path dependency is the key to integrating technology regulation into ECEL.

2.3 The Fragmentation of ECEL

The third point of departure of the present contribution is the fragmentation of ECEL (Biermann et al., 2010). Without regulatory reform and further integrative adjustment, technology regulation will continue to operate in its current framework. Energy law, climate law, and environmental law are increasingly overlapping in practice. Likewise, the legal questions that arise in the three domains are interlocked tightly, which sometimes results in conflict and inconsistency (Woolley, 2021). The law has tended to draw an artificial conceptual distinction between the climate, energy, and the environment. By contrast, the Planetary Boundaries Framework (Rockström et al., 2009) provides an integrated lens of analysis. Five of the nine boundaries have already been transgressed (climate change, biogeochemical flows, land system change, biosphere integrity, and chemical pollution). One (atmospheric aerosol loading) has not yet been quantified, and three remain intact (ocean acidification, stratospheric ozone depletion, and global freshwater use). It is at least arguable that the legal responses to these inter-related challenges ought to be integrated.

3. CLIMATE, ENERGY, AND ENVIRONMENTAL LAW: ORIGINS, CHARACTERISTICS, AND DISTINCTIONS The (academic) debate on the interlinkages, differences, and commonalities of ECEL is mature. This section summarizes the main arguments that have been advanced by scholars such as Hilson (2013), Klass (2013), Nagle (2010), Peel (2008), and Romppanen and Huhta (2023). The exposition is descriptive rather than normative (see also Hilson, 2013), and its focus is on technology regulation. 3.1 Environmental Law Environmental law is arguably the oldest and the most established of the three elements of ECEL. In the main, its concern is with “the conservation and protection of land, water, air, species, and resources for purposes of protecting human health as well as for long-term preservation of environmental, cultural, and aesthetic values” (Klass, 2013). Structurally, the focus is on risk management and the development of regulatory tools for addressing and “limit[ing]

the environmental impacts of an industrialized society" (Klass, 2013). Environmental law, like other domains, has been influenced by nonlegal considerations, chiefly due to progress in the sciences and in economics (Sands, 2018, p. 6). Other social objectives have only begun to attract significant attention in recent years (Sands, 2018, p. 9). Environmental law and sustainable development are thus intimately connected—sustainable development "encompass[es] but [is not] limited to international environmental law" (Sands, 2018, p. 10; sustainable development is discussed further below). Therefore, "reorienting technology and managing risk" is "a critical objective for environment … policies, also reflected in sustainable development" (Sands, 2018, p. 10). Its relationship with technology can further be characterized as a move from regulating technologies as sources of risk toward deploying technologies as part of the solution. The historical development of international environmental law was molded by technological progress. Four distinct periods may be discerned, namely [1] developments in scientific knowledge [19th century–1945], [2] the application of new technologies and an understanding of their impacts [1945–1972 when the Stockholm Declaration was adopted], [3] the change in political consciousness [1972–1992] and [4] the changing structure of the international legal order and institutions [1992–the present day]. (Sands, 2018, p. 20)

Developments in science and technology have thus played a "significant catalytic" role (Sands, 2018, p. 22). Traditionally, technologies have been seen as tools for "operationaliz[ing] statutory standards" (Somsen, 2017, p. 395). The predominant means of effectuating environmental law obligations through the use of technology are emissions standards and the best-available-technique (BAT) standard. In the European Union, BAT is defined as the most effective and advanced stage in the development of activities and their methods of operation which indicates the practical suitability of particular techniques for providing the basis for emission limit values and other permit conditions designed to prevent and, where that is not practicable, to reduce emissions and the impact on the environment as a whole. (Article 3[10])10

In this sense, "environmental law is often dictating particular technological choices" (Fisher, 2017, p. 364) and can produce industry lock-in. The relationship between technology and environmental law has also been argued to be "imaginative" (Fisher, 2017, p. 360). Since most environmental law takes the form of ex-ante regulation (Fisher, 2017, p. 360), the relationship in question revolves around the manner in which "we as a polity envisage the roles that law and technology could play in ensuring environmental quality" (Fisher, 2017, p. 360). On this account, both law and technology are "devices for addressing environmental degradation" (Fisher, 2017, p. 366). Somsen went further and wrote that "unless technologies become the principal target and instrument of environmental policy-making, environmental law will become an irrelevance" (Somsen, 2017, p. 395). He argued that, if there is to be a "renaissance of environmental law" (Somsen, 2017, p. 395), regulators must employ four kinds of technologies "in fulfillment of their environmental obligations" (Somsen, 2017, p. 395), namely (i) surveillance technologies (for the early detection of ecosystem change), (ii) technologies that operationalize conventional statutory standards (the toolbox of environmental law that was described above, with emission standards as

10  See Directive 2010/75/EU of November 24, 2010 on industrial emissions. Retrieved February 27, 2023, from https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:02010L0075-20110106&from=EN

a classical example), (iii) normative technologies (in order to "remove non-compliance from the equation"; Somsen, 2017, p. 395), and (iv) enhancement technologies (such as genetic manipulation, synthetic biology, nanotechnology, climate-engineering, and de-extinction techniques; Somsen, 2017, p. 396). In Somsen's classification, technologies of the first three kinds target humans as subjects of regulation, whereas the fourth type of technology, environmental enhancement, targets the living and the nonliving environment (Somsen, 2017, p. 396).

3.2 Energy Law

Adrian Bradbrook formulated one of the first and most widely cited definitions of energy law. He wrote that energy law is about "the allocation of rights and duties concerning the exploitation of all energy resources between individuals, between individuals and the government, between governments and between states" (Bradbrook, 1996). This definition reflects one of the main attributes of (traditional) energy law, namely its focus on natural resources, upstream and downstream extraction, and production and development. Traditionally, the key objectives of this branch of law have, for the most part, revolved around near-term efficiency and economic growth (Klass, 2013). The three branches of this traditional energy law are (1) electricity generation, transmission, and markets, including the laws governing the production, transportation, and sale of fuels used for electricity generation such as nuclear energy, coal, and natural gas; (2) the laws governing fuels used in transportation such as oil and biofuels; and, more recently (3) renewable energy including wind, solar, hydropower, and geothermal energy. (Klass, 2013)

In its attempts to orchestrate a transition toward a zero-carbon energy system, the European Union has opted for a three-pronged approach that is based on setting targets, creating market conditions that are conducive to their attainment at the lowest possible cost, and adopting sector-specific legislation (Houtman & Reins, 2022). As Klass noted, structurally, energy law is mainly about economics, monopolies, and markets, and it "arose primarily from public utility and competition law" (Klass, 2013). Simply put, environmental law is closer to (human) rights and public international law, whereas energy law is a "branch" of economic law. Given its origins, energy law is sometimes treated not as a self-standing discipline but as a set of borrowings from other fields, such as competition and environmental law, that are adjusted slightly for the purposes of the energy sector (Talus, 2013). On this interpretation, energy law is less autonomous (Heffron & Talus, 2016) than, for example, environmental law. According to Huhta, energy law has five characteristic properties, namely a problem-based approach, multilevel governance and institutional fragmentation, the aforementioned absorption of ideas from other fields, interdisciplinarity, and the prevalence of balancing exercises (Huhta, 2021). Climate and environmental law also exhibit many of these characteristics, as is readily apparent from the present text. However, the problem-based approach and the absence of strong organizing principles distinguish energy law from the other two fields that are under investigation here. Indeed, to date, most discussions of energy law have focused on what Huhta called the "surface level normative material" (Huhta, forthcoming), that is, on legislation that is intended to "address new technologies (sic!), new societal needs and varying political priorities" (Huhta, forthcoming). It has further been argued that currently its main objective is to "keep abreast of all new technology developments" (Fleming, 2019) and

embrace the characteristic of energy law as "reactive to fast-paced developments in the sector and … inherently dynamic" (Huhta, 2021). Energy law and technology regulation are thus interdigitated. The fact that the "elements of the discipline that are shared irrespective of the jurisdiction and do not change even if laws themselves do", that is, the underlying principles, have not yet been studied is itself linked to the discipline's (mostly) problem-based and reactive character. I argue that the underlying principles could and should also shape the way emerging (energy) technologies are regulated. As noted before, fossil energy production is among the key drivers of climate change and of the legal response that it has triggered. The third branch of energy law that was described in the opening paragraph of this section is mainly concerned with facilitating and governing the energy transition. Its focus is on issues such as energy efficiency, energy savings, and reducing demand, and there are obvious links to the mitigation tools that are discussed in the next section.

Climate law revolves around mitigation and adaptation. These two foci cannot be analyzed in isolation (Reins & Verschuuren, 2022). They are mutually reinforcing, in that mitigation facilitates adaptation, but also conflicting, in that some mitigation measures obstruct adaptation and vice versa (Reins & Verschuuren, 2022, p. 6 provide examples of these interlinkages). The IPCC defines mitigation simply as “a human intervention to reduce the sources or enhance the sinks of greenhouse gases” (van Diemen, 2019); adaptation is defined as follows: in human systems, [adaptation is] the process of adjustment to actual or expected climate and its effects, in order to moderate harm or exploit beneficial opportunities. In natural systems, [adaptation is] the process of adjustment to actual climate and its effects; human intervention may facilitate adjustment to expected climate and its effects. (van Diemen, 2019; for more information on the categories of adaptation, refer to Verschuuren, 2013)

262  Research handbook on law and technology The role of technology in both definitions is dual. Technologies such as CO2-emitting installations (power and industrial plants), cars, and such like are significant sources of greenhouse gas emissions. At the same time, technology is essential for mitigation, for example through the greenhouse gas removal technologies that Hester and Williams (2022) discussed, and for adaptation—human intervention is a means of “facilitat[ing] adjustment to expected climate and its effects.” Laws on energy efficiency, transportation, and the energy transition serve as mitigation tools (Reins & Verschuuren, 2022); legislation on floods and stormwater, zoning, and land use facilitates adaptation (Verschuuren, 2013). It is apparent even from the brief presentation of these examples that the boundaries of climate law are porous and that the (academic) discussion is ongoing (Mehling et al., 2020). For example, in Massachusetts v. EPA,11 a landmark US case, it was held that CO2 is a pollutant for the purposes of the Clean Air Act. The implication is that climate-change law addresses “a typical pollution problem” (Nagle, 2010), albeit one that is global in nature (Nagle, 2010), and that there is a clear link between it and traditional environmental law. Nagle (2010) and Shellenberger and Nordhaus (2004) argued the case for “climate exceptionalism” (Nagle, 2010; Shellenberger & Nordhaus, 2004), that is, for the proposition “that the problem presented by climate change is different from the air pollution problems that we have addressed in the past” (Nagle, 2010). Hilson posited that climate unexceptionalism can also be a useful concept (Hilson, 2013), for example in strategic litigation. Climate unexceptionalism entails “framing climate problems using more typical environmental law frames,” say by stressing the local characteristics and impacts of a problem (Hilson, 2013). 3.4 Interim Conclusion This section outlined several interlinkages, differences, and commonalities that pertain to ECEL. Its three elements are linked. They overlap often, yet they also differ in their (legal) origins and foci. For example, “whereas environmental law typically aims to adapt to ecological needs, energy law often adapts to technological development” (Huhta, 2021). The influence of the overlaps on technology regulation is readily discernible. For example, writing on climate and environmental law, Hilson noted that “action on climate change inevitably produces numerous ‘co-benefits’ in terms of other environmental issues” (Hilson, 2013). Furthermore, according to Huhta, “addressing climate change in the energy sector requires structural changes in the entire value chain of energy, ranging from the energy sources that are explored and produced to how and when energy is consumed” (Huhta, 2021). Technology has played a key role in the historical development of all three areas of ECEL. Its influence has been double-edged: technology is both the main contributor to pollution and climate change and a key mitigation and adaptation tool. The next section investigates the relationship between the three disciplines by focusing on organizing notions, that is, on matters of legal principle.

11  Massachusetts v. Environmental Protection Agency, 549 US 497 (2007).


4. LEGAL PRINCIPLES IN ENERGY, ENVIRONMENTAL, AND CLIMATE LAW AND TECHNOLOGY REGULATION

Principles provide general direction to policymakers. They are more abstract than rules and usually cannot be invoked separately, with the exception of those that are contained in legal instruments such as the Charter of Fundamental Rights (see further Fleming, 2019). Since the disciplines that constitute ECEL differ in origin and focus, the legal principles that underlie them also have different standing in law. Whereas the principles of climate and environmental law are largely codified and have emerged from diplomatic processes, bottom-up, the suggested principles of energy law are conceptualized top-down. Regardless of the differences in origin and standing, the inexhaustive taxonomy of ECEL principles in Table 16.1 reveals evidence of considerable overlap and cross-fertilization in practice. This section does not discuss in detail the legal status, interpretation, and meaning of each of these principles, or the tools by which each is implemented (see, among others, Carlarne et al., 2016; Heffron et al., 2018; Sands, 2018), but instead offers general reflections on their content and on their implications for the regulation of technology. Precaution and prevention are often deemed the most important principles of technology regulation (Stirling, 2017; Wiener, 2016, p. 170). However, at least in my view, sustainable development, which is also treated as being of the utmost import in all three subdisciplines, is essential to a holistic assessment of technology regulation. I return to this matter in Section 5. The following subsections preface that discussion with a general account of the principles in question as they pertain to technology regulation.12

4.1 Principles of Environmental Law

The principles of environmental law emerged from international environmental diplomacy. They developed organically over a long period of time. Some, such as the precautionary principle (Vorsorgeprinzip), were national at first (Huhta, forthcoming). Many such principles were codified in the Stockholm Declaration of 1972 and in the Rio Declaration of 1992 (Sands, 2018, p. 29ff). The principles in Table 16.1 that are marked with an asterisk are deemed "general principles … of international environmental law, as reflected in treaties, binding acts of international organizations, state practice, judicial decisions and soft law commitments" (Sands, 2018, p. 197). They are "potentially applicable to all members of the international community across the range of activities that they carry out or authorize" (Sands, 2018, p. 198) and, therefore, also to the development and use of technologies. In the light of the specificities of environmental law that were outlined previously, however, it is difficult to "establish the parameters or the precise international legal status of each principle" (Sands, 2018, p. 198) despite codification. The key principles in (environmental) technology regulation are sustainable development, precaution and prevention, the polluter-pays principle, and the principle that pollution should be rectified at its source.
These principles are usually applied through classical command-and-control tools, such as licensing procedures, prohibitions, environmental impact assessments, environmental quality standards, BAT requirements,

12  The order in which these principles are discussed is not entirely the same as in the previous section on the origins of the disciplines. That is because the principles of climate change law are codified in international law, whereas the energy principles have only been suggested.

Table 16.1  Inexhaustive taxonomy of legal principles in environmental, climate, and energy law

Environmental Law: Sustainable Development; Sovereignty Over Natural Resources; Precautionary and Prevention Principle; Intergenerational and Intragenerational Equity; Integration Principle; Polluter-Pays Principle; Rectification at Source Principle; Cooperation Principle; Principle of Common but Differentiated Responsibilities and Respective Capabilities; Access to Information, Public Participation in Decision-Making, and Access to Justice; Duty Not to Cause (Transboundary) Environmental Harm; Peaceful Resolution of Disputes.

Climate Law: Intergenerational and Intragenerational Equity; Principle of Common but Differentiated Responsibilities and Respective Capabilities; Precaution; Sustainable Development; Common Concern of Humankind; Cooperation and Knowledge Transfer; Principle of Cost-Effectiveness; Accountability and Transparency; Priority to Special Situation and Needs of Developing Countries Principle; Nonregression; Capacity-Building and Transfer of Technologies.

Energy Law (a): Natural Resources Sovereignty; Access to Modern Energy Services; Energy Justice; Prudent, Rational, and Sustainable Use of Natural Resources; Protection of the Environment, Human Health, and Combatting Climate Change; Energy Security and Reliability; Resilience; (Solidarity).

Notes: (a) These principles are scholar-made and do not enjoy the same standing and backing by legal tradition as the principles of environmental and climate law.
Source: Author's compilation based on the Rio Declaration on Environment and Development, 1992; the United Nations Framework Convention on Climate Change, 1992; Carlarne et al., 2016; Heffron et al., 2018; and Sands, 2018.

and emissions standards (see the previous section), as well as through administrative orders and criminal sanctions. The principles are also implemented through procedural rules. This tendency is particularly pronounced in the context of the precautionary principle. The rules in question pertain to information, participation, and access-to-justice mechanisms. Market-based instruments, such as liability legislation, taxes, and subsidies, are also used widely. The

Legal principles and the technology of energy, climate, and environmental law  265 principles were designed to guide “retrospective ecological conservation and improvement” (Somsen, 2017, p. 397), and they have been criticized for being unsuitable for technologydriven environmental enhancement (Somsen, 2017, p. 397). These technologies at least also have implications for human rights, which are traditionally omitted from the more ecological environmental-law principles (Somsen, 2017, p. 397; note, however, that the CBDR principle [see below] is also a principle of environmental law). 4.2 Principles of Climate Law Article 3 of the UNFCCC includes a list of principles that are intended to “guide the Parties when fulfilling their obligations” (Atapattu, 2016, p. 247). According to Atapattu, this is a “unique feature” (Atapattu, 2016, p. 247). However, she did not explain why those principles are unique. The principles of climate law, similarly to those of environmental law, are the result of diplomatic efforts and are codified in international instruments such as the UNFCCC. Unlike environmental-law principles, climate-law ones account for human rights, notably through the notion of intragenerational and intergenerational equity. The principles of “Common but Differentiated Responsibilities and Respective Capabilities” (CBDRRC) and “Common Concern of Humankind” are among the manifestations of that link. The CBDRRC principle is at the core of climate-change law (Redgwell, 2016, p. 186). The notion of “respective capabilities” was introduced into the UNFCCC in order to account for different contributions, both historical and contemporary, to climate change (Redgwell, 2016, p. 186), as well as for variation in “technological and financial resources” (Rajamani, 2006, p. 9). The principle is closely connected to sustainable development (Rajamani, 2006, p. 252). Intragenerational and intergenerational equity have two implications for technology. The first is that there is a right to secure access to technology, the realization of which ought to reflect “the development needs of the developing countries” (Rajamani, 2006, p. 252). That right is also recognized in the context of the CBDRRC principle. The second implication is that the needs of future generations should be considered in the development of ECEL technologies. The common concern of humankind is said to be the basis for other climatelaw principles (Soltau, 2016, p. 211). In brief, the principle posits that, “by virtue of their significance and the need for collective action to protect them, [the aspects of the global environment] have been designated as a common concern of humanity” (Soltau, 2016, p. 204). Climate change is recognized as a common concern of humankind.13 Technologies that could “conceivably implicate” this principle in the future are classified as climate-engineering technologies (Soltau, 2016, p. 211). 4.3 Principles of Energy Law As a discipline, energy law is often described as “new, young, immature or emergent” (Huhta, forthcoming). Codified or customary principles of (international) law are scarce. In order to strengthen energy law as a self-standing field, in 2018, a group of scholars developed several normative principles. They drew inspiration from the more established principles of environmental law (Heffron et al., 2018). The material scope and the legal standing of these scholar-made principles have been the subject of academic debate because they were not 13 

13  UNGA Res 43/53 (December 6, 1988) UN Doc A/RES/43/53.

Furthermore, since they have not been codified in (international) law, they are also arguably not universal. It is noteworthy that in the European Union, energy solidarity has become the first explicit energy law-related legal principle, in the traditional sense of that term, to emerge from the case law (Huhta & Reins, forthcoming). It was not among the scholar-made principles that were published in 2018. If the 2018 principles are accepted as the foundations of energy law, then Table 16.1 displays the organizing notions of that discipline. The mention of the principle "Protection of the Environment, Human Health, and Combatting Climate Change" is striking. Arguably, it incorporates environmental and climate-change principles. The authors justified its inclusion by reference to the physical link between energy and the environment (Heffron et al., 2018). Therefore, trade-offs need to be identified and struck (Heffron et al., 2018): "energy law and policy and environmental law and policy cannot be treated as distinct areas of regulation," and climate change can only be addressed if these inter-relationships are recognized (Heffron et al., 2018).

4.4 Interim Conclusion

The key difference between principles of environmental law and principles of climate law, on the one hand, and principles of energy law, on the other, is that the former were developed in accordance with state practice and within the sphere of international law, whereas the latter are scholar-made. This is, of course, a reflection of the fact that there is, to date, no overarching and comprehensive international legal instrument that deals with the regulation of energy. This difference notwithstanding, there are clear linkages between the principles in the three fields, as discussed above. The fact that the principles are intertwined plays a critical role in addressing the question of how they influence the regulation of ECEL, as discussed in the next section.

5. LINKING SUSTAINABLE DEVELOPMENT AND TECHNOLOGY

The principle of sustainable development pervades all three of the elements of ECEL.14 Indeed, it is one of the overarching principles of international law in general, and it is present in numerous international treaties. It has been called "an emerging area of international law in its own right" (Schrijver, 2017) and a "type of norm in its own right" (Cordonier Segger, 2004). However, in technology regulation, it has largely been overlooked. First defined in the Brundtland Report (World Commission on Environment and Development, 1987), its aim is to reconcile economic, social, and environmental development. Having been included in Principles 3 and 4 of the Rio Declaration,15 it is strongly linked to intragenerational and intergenerational equity.

14  For energy law, the formulation is slightly different ("Prudent, Rational and Sustainable Use of Natural Resources"). The explanation, however, refers back to the UNFCCC principles, the Rio Declaration, and the Sustainable Development Goals (Heffron et al., 2018). Therefore, it may be assumed that the distinction is semantic.

The interpretation of the principle has evolved over time (Fleming et al., 2021). Its relevance to technology might not be immediately apparent. Technology can both threaten and strengthen sustainable development. The Brundtland Report describes new technologies as

a mainspring of economic growth … and while this technology offers the potential for slowing the dangerously rapid consumption of finite resources, it also entails high risks, including new forms of pollution and the introduction to the planet of new variations of life forms that could change evolutionary pathways. (World Commission on Environment and Development, 1987, p. 14)

Ashford and Hall, writing on the Brundtland Report, argued that technology, or more precisely the "idea of limitations imposed by the state of technology and social organization on the environment's ability to meet present and future needs," is one of the two key concepts that define sustainable development (the other being "needs"; Ashford & Hall, 2018, p. 32). The Sustainable Development Goals reflect the links between technology and sustainable development more explicitly. The 17 non-binding goals are the result of a recent attempt to strengthen the implementation of the principle. Several goals are related, directly or indirectly, to the use of technology. The table that follows (Table 16.2) lists the instances in which "technology," "technological capacity," or "technical assistance" are mentioned expressly as targets that are relevant to the pursuit of an SDG (for reasons of simplicity, the levels of the indicators are omitted). There are direct references to technology, technological development, technological capacity, and technical assistance in 12 of the 17 SDGs. However, the manner in which technology is seen as being relevant to the attainment of the respective SDGs differs widely. Four broad types of references can be identified in Table 16.2. First, some of the references to technology are vague and open to interpretation. For example, in SDG 1, "No Poverty," the target indicates that "all men and women" should have "access to … appropriate new technology." The nature of this technology is left unspecified. The reference to technology is thus aspirational in purpose, and access to technology is treated as a measure of progress toward the elimination of poverty. Second, in other SDGs that refer directly to the aforementioned terms, the link to technology is operational: technology is seen not so much as an indicator of progress but as a means. For example, in the context of SDG 2, "Zero Hunger," "technology development and plant and livestock gene banks" are said to be needed "to enhance agricultural productive capacity," which would contribute to the eradication of hunger by 2030. The link between technology and the SDG is direct, and the relevant technologies are needed so that the objective in question can be achieved. The same is also true of SDG 4, "Quality Education"; of SDG 5, "Gender Equality"; of SDG 9, "Industry, Innovation and Infrastructure"; and of SDG 17, "Partnerships for the Goals" (target 17.8). All of these SDGs refer to "information and communications technology" as a means to achieve these respective objectives. The role of technology is facilitative: technologies that are generally available in the global North should also be available in the global South because they are conducive to the attainment of particular SDGs. The reference to "increase[ing] access to information and communications technology and striv[ing] to provide universal and affordable access to the Internet in least developed countries" supplies a simple and concrete example: access to technology here is an intermediate step in the pursuit of SDG 9, "Industry, Innovation and Infrastructure."

15  Principle 3: The right to development must be fulfilled so as to equitably meet developmental and environmental needs of present and future generations. Principle 4: In order to achieve sustainable development, environmental protection shall constitute an integral part of the development process and cannot be considered in isolation from it.

Table 16.2  Direct references to (the use of) technology, technological capacity, and technical assistance in the SDGs

SDG 1: No Poverty
Target 1.4. By 2030, ensure that all men and women, in particular the poor and the vulnerable, have equal rights to economic resources, as well as access to basic services, ownership and control over land and other forms of property, inheritance, natural resources, appropriate new technology and financial services, including microfinance.

SDG 2: Zero Hunger
Target 2.a. Increase investment, including through enhanced international cooperation, in rural infrastructure, agricultural research and extension services, technology development and plant and livestock gene banks in order to enhance agricultural productive capacity in developing countries, in particular least developed countries.

SDG 4: Quality Education
Target 4.b. By 2020, substantially expand globally the number of scholarships available to developing countries, in particular least developed countries, small island developing States and African countries, for enrollment in higher education, including vocational training and information and communications technology, technical, engineering and scientific programs, in developed countries and other developing countries.

SDG 5: Gender Equality
Target 5.b. Enhance the use of enabling technology, in particular information and communications technology, to promote the empowerment of women.

SDG 6: Clean Water and Sanitation
Target 6.a. By 2030, expand international cooperation and capacity-building support to developing countries in water- and sanitation-related activities and programs, including water harvesting, desalination, water efficiency, wastewater treatment, recycling and reuse technologies.

SDG 7: Affordable and Clean Energy
Target 7.a. By 2030, enhance international cooperation to facilitate access to clean energy research and technology, including renewable energy, energy efficiency and advanced and cleaner fossil-fuel technology, and promote investment in energy infrastructure and clean energy technology.
Target 7.b. By 2030, expand infrastructure and upgrade technology for supplying modern and sustainable energy services for all in developing countries, in particular least developed countries, small island developing States, and land-locked developing countries, in accordance with their respective programs of support.

SDG 8: Decent Work and Economic Growth
Target 8.2. Achieve higher levels of economic productivity through diversification, technological upgrading and innovation, including through a focus on high-value added and labor-intensive sectors.
Target 8.a. Increase Aid for Trade support for developing countries, in particular least developed countries, including through the Enhanced Integrated Framework for Trade-Related Technical Assistance to Least Developed Countries.

SDG 9: Industry, Innovation and Infrastructure
Target 9.4. By 2030, upgrade infrastructure and retrofit industries to make them sustainable, with increased resource-use efficiency and greater adoption of clean and environmentally sound technologies and industrial processes, with all countries taking action in accordance with their respective capabilities.
Target 9.5. Enhance scientific research, upgrade the technological capabilities of industrial sectors in all countries, in particular developing countries, including, by 2030, encouraging innovation and substantially increasing the number of research and development workers per 1 million people and public and private research and development spending.
Target 9.a. Facilitate sustainable and resilient infrastructure development in developing countries through enhanced financial, technological and technical support to African countries, least developed countries, land-locked developing countries and small island developing States.
Target 9.b. Support domestic technology development, research and innovation in developing countries, including by ensuring a conducive policy environment for, inter alia, industrial diversification and value addition to commodities.
Target 9.c. Significantly increase access to information and communications technology and strive to provide universal and affordable access to the Internet in least developed countries by 2020.

SDG 11: Sustainable Cities and Communities
Target 11.c. Support least developed countries, including through financial and technical assistance, in building sustainable and resilient buildings utilizing local materials.

SDG 12: Responsible Consumption and Production
Target 12.a. Support developing countries to strengthen their scientific and technological capacity to move toward more sustainable patterns of consumption and production.

SDG 14: Life Below Water
Target 14.a. Increase scientific knowledge, develop research capacity and transfer marine technology, taking into account the Intergovernmental Oceanographic Commission Criteria and Guidelines on the Transfer of Marine Technology, in order to improve ocean health and to enhance the contribution of marine biodiversity to the development of developing countries, in particular, small island developing States and least developed countries.

SDG 17: Partnerships for the Goals
Target 17.6. Enhance North-South, South-South and triangular regional and international cooperation on and access to science, technology and innovation and enhance knowledge sharing on mutually agreed terms, including through improved coordination among existing mechanisms, in particular at the United Nations level, and through a global technology facilitation mechanism.
Target 17.7. Promote the development, transfer, dissemination and diffusion of environmentally sound technologies to developing countries on favorable terms, including on concessional and preferential terms, as mutually agreed.
Target 17.8. Fully operationalize the technology bank and science, technology and innovation capacity-building mechanism for least developed countries by 2017 and enhance the use of enabling technology, in particular information and communications technology.
Target 17.16. Enhance the Global Partnership for Sustainable Development, complemented by multi-stakeholder partnerships that mobilize and share knowledge, expertise, technology and financial resources, to support the achievement of the Sustainable Development Goals in all countries, in particular developing countries.

Source: Author's compilation.

Third, several SDGs refer to specific (novel) technologies. This is most notably the case of SDG 6, "Clean Water and Sanitation"; of SDG 7, "Affordable and Clean Energy"; and of SDG 14, "Life Below Water." These goals contain mentions of "water harvesting, desalination, water efficiency, wastewater treatment, recycling and reuse technologies," "clean energy technology," and "transfer of marine technology," respectively. International cooperation in the development and use of those technologies should be enhanced. That enhancement should ideally be accompanied by technology transfers to developing countries. Specific technologies that can contribute to the realization of particular SDGs are identified, and it is recommended that the ability of developing countries to access them be expanded. Fourth, many of the references in Table 16.2 signal that a transfer of technology, the accumulation or development of technical capacities, or the "upgrading of technological capabilities of industrial sectors" would be conducive to the attainment of a given SDG. This is true of, inter alia, SDG 8, "Decent Work and Economic Growth"; SDG 9, "Industry, Innovation and Infrastructure"; SDG 11, "Sustainable Cities and Communities"; SDG 12, "Responsible Consumption and Production"; and SDG 17, "Partnerships for the Goals."

These references, which can generally be subsumed under the term "technical assistance," are closely tied to the principle of cooperation, to knowledge transfers and the building of capacity, and to technology transfer, all of which, as shown previously, feature prominently in climate and environmental law. Evidently, the third category of references to technology is most intimately linked to ECEL. This said, the other categories also matter. SDG 6, SDG 7, and SDG 14 contain specific references to ECEL-related technologies for clean water and sanitation, energy, and the marine environment. Accordingly and unsurprisingly, the relationship between technology, in the broadest sense of that term, and the attainment of the SDGs has a wide scope. Importantly, there may be a close nexus between the availability and the implementability of highly specific ECEL technologies and the attainment of particular SDGs.

6. CONCLUSION

The pace of technological development, the overlaps between the problems of climate law, energy law, and environmental law, and the occasional conflicts between the resultant legal solutions (Woolley, 2021) require an integrated approach to technological regulation. If the challenges that were outlined in the introduction to this chapter are to be overcome, a renewed focus on the organizational notions and concepts of ECEL, that is, on its underlying principles, is needed. The taxonomy of principles that was presented in this chapter shows that, despite the substantial differences in legal origin and standing, there is much substantive overlap. ECEL principles can function as the much-needed "cement" in the regulation of technology that is deployed at the intersection of the fields of environmental protection, climate change, and energy. Whereas the regulation of these fields has developed in relative isolation, using (common) principles that can be distilled from each of them may significantly enhance the quality of the regulation of novel technologies. Sustainable development has been identified as particularly important for the regulation of (emerging) technologies in ECEL, as evidenced, among other things, by the prominence of technology in the SDGs. Fisher's remark about environmental law holds true in all three domains:

it becomes apparent that choices can be made about the direction and nature of both environmental law and technology. Those choices are not simply choices about how we choose to imagine the world and how we choose to live in it. (Fisher, 2017, p. 376)

REFERENCES

Ashford, N.A. & Hall, R.P. (2018). Technology, Globalization, and Sustainable Development: Transforming the Industrial State. London: Routledge. Retrieved from https://doi.org/10.4324/9780429468056.
Atapattu, S. (2016). Climate Change, International Environmental Law Principles, and the North-South Divide. Transnational Law and Contemporary Problems, 26(2), 247–262.
Biermann, F., Pattberg, P. & Zelli, F. (2010). Global Climate Governance After 2012: Architecture, Agency and Adaptation. In M. Hulme & H. Neufeldt (Eds.). Making Climate Change Work for Us: European Perspectives on Adaptation and Mitigation Strategies (pp. 263–290). Cambridge, UK: Cambridge University Press. Retrieved from https://lucris.lub.lu.se/ws/portalfiles/portal/36135614/GlobalClimateGovernanceAfter2012_c10_p263_290.pdf.

Bradbrook, A.J. (1996). Energy Law as an Academic Discipline. Journal of Energy & Natural Resources Law, 14(2), 193–217. Retrieved from https://doi.org/10.1080/02646811.1996.11433062.
Carlarne, C.P., Gray, K.R. & Tarasofsky, R. (Eds.). (2016). The Oxford Handbook of International Climate Change Law. Oxford: Oxford University Press.
Collingridge, D. (1980). The Social Control of Technology. New York: St. Martin's Press.
Cordonier Segger, M.C. & Khalfan, A. (2004). Sustainable Development Law: Principles, Practices, and Prospects. Oxford: Oxford University Press.
Fisher, E. (2017). Imagining Technology and Environmental Law. In R. Brownsword, E. Scotford & K. Yeung (Eds.). The Oxford Handbook of Law, Regulation and Technology (pp. 360–378). Oxford: Oxford University Press. Retrieved from https://doi.org/10.1093/oxfordhb/9780199680832.013.14.
Fleming, R. (2019). The 'Trias': A New Methodology for Energy Law. European Energy and Environmental Law Review, 28(5), 164–175. Retrieved from https://doi.org/10.54648/EELR2019018.
Fleming, R., Huhta, K. & Reins, L. (2021). What is Sustainable Energy Democracy in Law? In R. Fleming, K. Huhta & L. Reins (Eds.). Sustainable Energy Democracy and the Law (pp. 3–27). Leiden: Brill Nijhoff. Retrieved from https://doi.org/10.1163/9789004465442_002.
Ghaleigh, N.S. (2021). Article 2 Aims, Objectives and Principles. In G. van Calster & L. Reins (Eds.). The Paris Agreement on Climate Change – A Commentary (pp. 73–103). Cheltenham: Edward Elgar Publishing. Retrieved from https://www.elgaronline.com/display/book/9781788979191/book-part-9781788979191-11.xml.
Havercroft, I. & Singh Ghaleigh, N. (2018). Geological Factors for Legislation to Enable and Regulate Storage of Carbon Dioxide in the Deep Surface. In I. Havercroft, R. Macrory & R. Stewart (Eds.). Carbon Capture and Storage: Emerging Legal and Regulatory Issues (pp. 5–32). London: Bloomsbury Publishing.
Heffron, R.J., Rønne, A., Tomain, J.P., Bradbrook, A. & Talus, K. (2018). A treatise for energy law. The Journal of World Energy Law & Business, 11(1), 34–48. Retrieved from https://doi.org/10.1093/jwelb/jwx039.
Heffron, R.J. & Talus, K. (2016). The evolution of energy law and energy jurisprudence: Insights for energy analysts and researchers. Energy Research & Social Science, 19, 1–10. Retrieved from https://doi.org/10.1016/j.erss.2016.05.004.
Hester, T. & Williams, K. (2022). Greenhouse Gas Removal. In L. Reins & J. Verschuuren (Eds.). Research Handbook on Climate Change Mitigation Law (pp. 502–526). Cheltenham: Edward Elgar Publishing. Retrieved from https://www.elgaronline.com/display/book/9781839101595/book-part-9781839101595-28.xml.
Hilson, C. (2013). It's All About Climate Change, Stupid! Exploring the Relationship Between Environmental Law and Climate Law. Journal of Environmental Law, 25, 359–370. Retrieved from https://doi.org/10.1093/jel/eqt019.
Houtman, A. & Reins, L. (2022). The Energy Transition in the EU: Targets, Market Regulation and Law. In G. Wood, V. Onyango, K. Yenneti & M. Liakopoulou (Eds.). The Palgrave Handbook of Zero Carbon Energy Systems and Energy Transitions (pp. 1–26). London: Palgrave Macmillan. Retrieved from https://doi.org/10.1007/978-3-030-74380-2_2-1.
Huhta, K. (2021). The coming of age of energy jurisprudence. Journal of Energy & Natural Resources Law, 39(2), 199–212. Retrieved from https://doi.org/10.1080/02646811.2020.1810958.
Huhta, K. (2023). The Disciplinary Foundations of Energy Law. In K. Talus (Ed.). A Research Agenda for International Energy Law. Cheltenham: Edward Elgar (forthcoming).
Kirk, E.A., Reeves, A.D. & Blackstock, K.L. (2007). Path Dependency and the Implementation of Environmental Regulation. Environment and Planning C: Government and Policy, 25(2), 250–268. Retrieved from https://doi.org/10.1068/c0512j.
Klabbers, J. (2017). International Law (2nd ed.). Cambridge: Cambridge University Press. Retrieved from https://doi.org/10.1017/9781316493717.
Klass, A.B. (2013). Climate Change and the Convergence of Environmental and Energy Law. Fordham Environmental Law Review, 24(2), 180–204.
Laurie, G., Harmon, S.H. & Arzuaga, F. (2012). Foresighting Futures: Law, New Technologies, and the Challenges of Regulating for Uncertainty. Law, Innovation and Technology, 4(1), 1–33. Retrieved from https://doi.org/10.5235/175799612800650626.

Levin, K., Cashore, B., Bernstein, S. & Auld, G. (2012). Overcoming the tragedy of super wicked problems: Constraining our future selves to ameliorate global climate change. Policy Sciences, 45(2), 123–152. Retrieved from https://doi.org/10.1007/s11077-012-9151-0.
Mayer, B. (2021). Article 4 Mitigation. In G. van Calster & L. Reins (Eds.). The Paris Agreement on Climate Change – A Commentary (pp. 109–132). Cheltenham: Edward Elgar Publishing. Retrieved from https://www.elgaronline.com/display/book/9781788979191/book-part-9781788979191-13.xml.
Mehling, M., van Asselt, H., Kulovesi, K. & Morgera, E. (2020). Teaching Climate Law: Trends, Methods and Outlook. Journal of Environmental Law, 32(3), 417–440. Retrieved from https://doi.org/10.1093/jel/eqz036.
Mete, G. & Reins, L. (2020). Governing New Technologies in the Energy Transition – The Hydrogen Strategy to the Rescue? Carbon & Climate Law Review, 14(3), 210–231. Retrieved from https://doi.org/10.21552/cclr/2020/3/9.
Nagle, J.C. (2010). Climate Exceptionalism. Environmental Law, 40(1), 53–88.
Peel, J. (2008). Climate change law: The emergence of a new legal discipline. Melbourne University Law Review, 32(3), 922–979. Retrieved from https://doi.org/10.3316/agis_archive.20092685.
Rajamani, L. (2006). Differential Treatment in International Environmental Law. Oxford: Oxford University Press. Retrieved from https://doi.org/10.1093/acprof:oso/9780199280704.003.0001.
Redgwell, C. (2016). Principles and Emerging Norms in International Law: Intra- and Inter-generational Equity. In C.P. Carlarne, K.R. Gray & R. Tarasofsky (Eds.). The Oxford Handbook of International Climate Change Law (1st ed., pp. 184–201). Oxford: Oxford University Press. Retrieved from https://doi.org/10.1093/law/9780199684601.003.0008.
Reins, L. (2021). Introduction. In G. van Calster & L. Reins (Eds.). The Paris Agreement on Climate Change – A Commentary (pp. 109–132). Cheltenham: Edward Elgar Publishing. Retrieved from https://www.elgaronline.com/display/book/9781788979191/book-part-9781788979191-13.xml.
Reins, L. & Verschuuren, J. (2022). Climate Change Mitigation and the Role of Law. In L. Reins & J. Verschuuren (Eds.). Research Handbook on Climate Change Mitigation Law (pp. 2–16). Cheltenham: Edward Elgar Publishing. Retrieved from https://www.elgaronline.com/display/book/9781839101595/book-part-9781839101595-6.xml.
Rittel, H.W.J. & Webber, M.M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4(2), 155–169. Retrieved from https://doi.org/10.1007/BF01405730.
Rockström, J., Steffen, W., Noone, K., Persson, Å., Chapin, F.S., Lambin, E.F., Lenton, T.M., Scheffer, M., Folke, C., Schellnhuber, H.J., Nykvist, B., de Wit, C.A., Hughes, T., van der Leeuw, S., Rodhe, H., Sörlin, S., Snyder, P.K., Costanza, R., Svedin, U. & Foley, J.A. (2009). A safe operating space for humanity. Nature, 461(7263). Retrieved from https://doi.org/10.1038/461472a.
Romppanen, S. & Huhta, K. (2023). The interface between EU climate and energy law. Maastricht Journal of European and Comparative Law, 1–18. Retrieved from https://doi.org/10.1177/1023263X231159976.
Sands, P. (2018). Principles of International Environmental Law (4th ed.). Cambridge: Cambridge University Press.
Schrijver, N. (2017). Sustainable Development Principles in the Decisions of International Courts and Tribunals. London: Routledge.
Shellenberger, M. & Nordhaus, T. (2004). The Death of Environmentalism. Retrieved from https://s3.us-east-2.amazonaws.com/uploads.thebreakthrough.org/legacy/images/Death_of_Environmentalism.pdf.
Soltau, F. (2016). Common Concern of Humankind. In C.P. Carlarne, K.R. Gray & R. Tarasofsky (Eds.). The Oxford Handbook of International Climate Change Law (1st ed., pp. 202–212). Oxford: Oxford University Press. Retrieved from https://doi.org/10.1093/law/9780199684601.003.0008.
Somsen, H. (2017). From Improvement Towards Enhancement: A Regenesis of EU Environmental Law at the Dawn of the Anthropocene. In R. Brownsword, E. Scotford & K. Yeung (Eds.). The Oxford Handbook of Law, Regulation and Technology (pp. 379–403). Oxford: Oxford University Press. Retrieved from https://doi.org/10.1093/oxfordhb/9780199680832.013.42.
Stirling, A. (2017). Precaution in the Governance of Technology. In R. Brownsword, E. Scotford & K. Yeung (Eds.). The Oxford Handbook of Law, Regulation and Technology (pp. 645–669). Oxford: Oxford University Press. Retrieved from https://doi.org/10.1093/oxfordhb/9780199680832.013.50.
Talus, K. (2013). EU Energy Law and Policy: A Critical Account. Oxford: Oxford University Press.

van Asselt, H. & Zelli, F. (2018). International Governance: Polycentric Governing by and beyond the UNFCCC. In A. Jordan, D. Huitema, H. van Asselt & J. Forster (Eds.). Governing Climate Change (1st ed., pp. 29–46). Cambridge: Cambridge University Press. Retrieved from https://doi.org/10.1017/9781108284646.003.
van Diemen, R. (2019). Annex I Glossary. In J. Shukla & E. Skea (Eds.). Climate Change and Land: An IPCC Special Report on Climate Change, Desertification, Land Degradation, Sustainable Land Management, Food Security, and Greenhouse Gas Fluxes in Terrestrial Ecosystems. IPCC. Retrieved from https://www.ipcc.ch/site/assets/uploads/2019/11/11_Annex-I-Glossary.pdf.
Verschuuren, J. (2013). Introduction. In J. Verschuuren (Ed.). Research Handbook on Climate Change Adaptation Law (pp. 1–15). Cheltenham: Edward Elgar Publishing.
Wiener, J. (2016). Precaution and Climate Change. In K.R. Gray, R. Tarasofsky & C.P. Carlarne (Eds.). The Oxford Handbook of International Climate Change Law (1st ed., pp. 163–184). Oxford: Oxford University Press. Retrieved from https://doi.org/10.1093/law/9780199684601.003.0008.
Woolley, O. (2021). Reflection 6: Regime Inconsistency. In A. Zahar & B. Mayer (Eds.). Debating Climate Law (pp. 398–411). Cambridge: Cambridge University Press. Retrieved from https://www.cambridge.org/core/books/debating-climate-law/reflection-6-regime-inconsistency/4B2CF50159B80E2683FA4526435B9E17.
World Commission on Environment and Development. (1987). Our Common Future. Oxford: Oxford University Press. Retrieved from https://sustainabledevelopment.un.org/content/documents/5987our-common-future.pdf.
Zelli, F. (2011). The fragmentation of the global climate governance architecture. WIREs Climate Change, 2(2), 255–270. Retrieved from https://doi.org/10.1002/wcc.104.

PART III PERSPECTIVES

17. Afro-centric law and technology discourse

Caroline B. Ncube and Thabiso R. Phiri1

1. INTRODUCTION

In this chapter we canvass Afro-centric anglophone discourse on law and technology to ascertain whether any distinct patterns can be discerned in terms of key themes and their impact, if any, on law and policymaking. The first aspect considers the topics of these scholarly enquiries, the research questions being posed and the arguments developed in the literature. The second aspect embodies an attempt to evaluate the impact of this scholarship, to ascertain whether it has had any regulatory and policy impacts. An additional line of enquiry could have been an examination of this scholarship's impact on other scholars, for instance, the extent to which they cite it. However, that analysis would have required a broader literature review that fell beyond the scope of the present enquiry. Engagement with these aspects informs our view that these voices2 add important views to the global law and technology discourse because they engage several themes and perspectives which would otherwise be overlooked. These are set out in the third section of this chapter. We adopted a first-person narrative style in this chapter, as we consider it to be appropriate for a reflection on 'speaking' and 'being heard'. Inextricably linked with the metaphor of voices being heard is the idea of who is seen or recognised in this area. Ultimately, it is about perceptions of expertise and the importance given to areas of enquiry. We acknowledge and embrace that we represent the demographic least seen and heard in this space and, in this chapter, seek to amplify voices like ours. We use 'law and technology' in a broad sense in this chapter; it is therefore not about the technologies used nor about the specific subject fields canvassed. We leave the latter to the subject-specific chapters in this volume. The chapter proceeds in the following further sections: research methodology (Section 2), literature review (Section 3), thematic focus (Section 4), discussion: evaluation and future directions (Section 5) and conclusion (Section 6).

1  This work was carried out under the auspices of the SARChI Chair: Intellectual Property, Innovation and Development, University of Cape Town. This work is based on research supported in part by the National Research Foundation (NRF) of South Africa (Grant numbers: 115716 and 132090). Any opinion, finding and conclusion or recommendation expressed in this material is that of the author and the NRF does not accept any liability in this regard. This work also constitutes part of the Open African Innovation Research (Open AIR) partnership's thematic research. Open AIR work is carried out with financial support from the International Development Research Centre, Canada, the Social Sciences and Humanities Research Council of Canada, and Queen Elizabeth Scholarships – Advanced Scholars (QES-AS) through Community Foundations of Canada (CFC). The views expressed herein do not necessarily represent those of Open AIR's funders.
2  'Voices' and 'speaking' are used here to denote the scholars or commentators active in this space and their key positions and arguments as articulated in their writings.



2. RESEARCH METHODOLOGY

As indicated above, our purpose in this chapter is to share our meta-reflections on the work of authors writing in English about law and technology from an Afro-centric perspective. This choice of language was dictated by our language proficiency limitations. We acknowledge that there is significant literature in other languages. The geographic focus was influenced by our own location and our subjective sense that law and technology literature does not primarily focus on Africa. The first step in our enquiry was to search for the various publications on the topic, as broadly construed, to understand various writers' thoughts, analysis and interpretation of issues around law and technology. We used desktop research methodology. The keywords used in the literature search were: 'law and technology', 'technology', 'innovation', 'law'. Since not all innovation is disruptive, we decided not to use disruption as a keyword and rather to use innovation as the search term, as that would render broader results. We did not conduct a time-bound search but rather simply searched for these keywords. We employed a scoping study approach, under which we do not claim to have conducted an exhaustive literature review; rather, our aim was to achieve these two ends identified by Arksey and O'Malley (2005, p. 21):

To examine the extent, range and nature of research activity: this type of rapid review might not describe research findings in any detail but is a useful way of mapping fields of study where it is difficult to visualize the range of material that might be available … To identify research gaps in the existing literature: this type of scoping study takes the process of dissemination one step further by drawing conclusions from existing literature regarding the overall state of research activity …

Therefore, the search results are merely representative and are not exhaustive. Further, our attempt to scope the literature does not involve any substantive evaluation of the quality of the literature or the arguments made therein, which is beyond the scope of this work. Once the results of the searches were accumulated, we adopted a thematic analysis to identify themes and categorise the voices of authors into themes. The thematic analysis we employed is grounded in Clarke and Braun's (2013, p. 87) seminal six-stage process, which entails the following steps: (a) data collection; (b) 'generating initial codes'; (c) 'searching for themes'; (d) 'reviewing themes'; (e) 'defining and naming themes' and (f) formulating our views as presented in this chapter or, as Clarke and Braun put it, 'producing the report'. (A purely illustrative sketch of how the mechanical portions of these stages could be operationalised follows below.)
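The six stages above describe a manual, qualitative process; the chapter reports no computational implementation. Purely as an illustration, the following minimal Python sketch shows how the mechanical portions of such a scoping study, namely the keyword filtering and the generation of initial codes, could be operationalised. The record structure, the code words and the code-to-theme mapping are hypothetical stand-ins rather than the instruments actually used; only the search terms are taken from this section.

from collections import defaultdict
from dataclasses import dataclass

# Search terms reported in Section 2 of this chapter.
KEYWORDS = {"law and technology", "technology", "innovation", "law"}

# Hypothetical code words mapped to the themes named in Section 4 below.
THEME_MAP = {
    "digital divide": "Development, social justice and human rights",
    "decolonial": "Neo-colonialism and decolonial views",
    "blockchain": "The regulation of technology will save or sink us",
    "data protection": "Privacy and data governance",
    "bias": "AI, bias and ethical considerations",
    "automation": "Automation and its impact on employment",
    "gender": "Technology and gender",
}

@dataclass
class Record:
    """One publication gathered during the desktop search."""
    title: str
    abstract: str

def matches_search(record: Record) -> bool:
    # Stage (a): retain a publication if any search keyword appears in it.
    text = (record.title + " " + record.abstract).lower()
    return any(keyword in text for keyword in KEYWORDS)

def initial_codes(record: Record) -> set:
    # Stage (b): attach an initial code wherever a code word appears.
    text = (record.title + " " + record.abstract).lower()
    return {code for code in THEME_MAP if code in text}

def group_by_theme(records: list) -> dict:
    # Stages (c)-(e) are approximated here by a fixed code-to-theme lookup,
    # so that the resulting grouping can be reviewed and renamed by hand.
    themes = defaultdict(list)
    for record in records:
        if matches_search(record):
            for code in initial_codes(record):
                themes[THEME_MAP[code]].append(record)
    return dict(themes)

The later stages of thematic analysis, reviewing, defining and naming themes, rest on interpretive judgment that no such script can replicate; the sketch merely makes the pipeline of a scoping study concrete.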

3. LITERATURE REVIEW

There is a vast amount of literature on law and technology, a significant portion of which is found in journal articles in law and technology-specific journals and in general law reviews, sometimes as part of special issues. Two examples will suffice to make the point: Griffith Law Review published its fourth issue of 2011 under the theme 'The Laws of Technology and the Technology of Law' and the University of New South Wales Law Journal published its issue 44(3) in 2021 with the theme 'Big Technology and the Law'. Our review of journal articles showed that most of the scholarship that journals publish is from the Americas and Europe. The subject matter of journal articles is rich and diverse, having moved well past the earlier debates of whether law and technology is worthy of being a separate or distinct area of law (Easterbrook, 1996; Lessig, 2003).

Such movement is not illustrative of agreement on the point, as some contestation still exists, but the point being made is that the focus of scholarship has moved to discussion of the various issues that arise, as canvassed in the following section. Some of the scholarship has focused on whether the law can cope or contend with technological advancement (Moses, 2011). There are several comprehensive monographs on law and technology (Bremmer, 2007; Brownsword & Goodwin, 2012; Brownsword & Yeung, 2008; Hildebrandt, 2016; Kohl & Charlesworth, 2016; Lloyd, 2020; Murray, 2019). Similarly, there are numerous edited volumes on the subject (Borghi & Brownsword, 2022; Brownsword, Scotford & Yeung, 2017; Chirwa & Ncube, 2023; Papadopoulos & Snail ka Mtuze, 2022), as there are chapters in general legal texts. There are also some very technology-specific legal texts such as those on the Fourth Industrial Revolution (Mazibuko-Makena & Kraemer-Mbula, 2021), artificial intelligence (De Bruyne & Vanleenhove, 2021; Martin-Bariteau & Scassa, 2021; Ncube et al., 2023) and the Internet of Things (de la Nota, 2022). This literature has been comprehensively reviewed several times (Brownsword, Scotford & Yeung, 2017a; Tranter, 2011; Gifford, 2007), making it possible to build our own analysis on the sound work that has already been done. Most of the literature is from the United States, Europe and Asia-Pacific, with limited scholarship in the area from the rest of the world. Pałka & Brożek's chapter in this volume engages with the research methodologies emanating from the Americas and Europe, as the main sources of literature in this area. Having identified the locale of voices, our focus then turned to an examination of what the latter, smaller proportion of the literature, specifically that which focuses on Africa, espouses. This subset of literature consisted of journal articles, reports, conference papers, books and book chapters, theses, policy papers, issue briefs and blog posts that were analysed. We identified the following topics as being those most frequently canvassed in the period 2005–2022, with exponential growth in the latter part of this period: artificial intelligence (AI), the Fourth Industrial Revolution (4IR), information and communication technology (ICT), cloud computing, automation, three-dimensional (3D) printing, machine learning, robotics, biotechnology, smart city technology, and virtual and augmented reality. This growth in the literature on these very specific technological aspects reflects the rise of these technologies in the period. Our literature review established that this focus largely mirrors the focus of literature on other parts of the world. The key themes in this literature are the following: (1) human rights, (2) social justice, (3) human development, (4) accountability and ethics vis-à-vis automated decision-making, (5) policymaking and regulatory approaches and (6) subject-field-specific discussions such as intellectual property, civil procedure, the law of evidence and the various other areas discussed in the chapters comprising this Research Handbook. These are very broad categories; for instance, aspects discussed under the umbrella of human rights include privacy, freedom of expression and other fundamental rights.
The human development category includes (a) the digital divide, (b) technological innovation and its impact on employment as well as (c) how it decreases or exacerbates social, economic and cultural inequalities. These inequalities are founded on several grounds such as gender and the bifurcation between formal and informal sectors and urban and rural settings. Out of this broad literature and in keeping with our stated focus on the above-identified themes as canvassed by scholarship from Africa, the next section more fully sets out the positions advanced in the literature.


4. THEMATIC FOCUS

This section sets out the main research questions probed by the literature on law and technology with an African focus. These questions were distilled from the surveyed literature and show that some enquiries are more prominent in this region due to its unique context. Each of the following themes emerges from this context, as explained below. As noted above, the socio-economic disadvantage of the continent differentiates it from the rest of the globe, and so Afro-centric law and tech discussions are coloured by that circumstance. Giving such literature a platform exposes law and tech discourse to a depth and perspective it would otherwise lack.

4.1 Development, Social Justice and Human Rights

A key focus of the literature emanating from Africa is an understanding of the developmental aspects. This is often grounded in preliminary discussions of the infrastructural and technological setting, which entails an appreciation of the prevailing digital divide between Africa and the rest of the world and within the continent itself, for instance between states and internally within states. Jepkoech and Anyembe (2019, pp. 3–4) and Aruleba and Jere (2022) note the significant challenges posed by technological incompetence, poor infrastructure, outdated or unavailable government policies and regulations, and inadequate skills. To this list, other scholars add a lack of diffusion of technology into the informal sector, which covers most of the economically active population (Ayentimi & Burgess, 2019, p. 643; Butcher et al., 2021; Gadzala, 2018, 2018a). Cumulatively, these factors put the relevance of the Fourth Industrial Revolution (4IR) for Africa into question. The continent is not ready to adopt big data technologies, and their unequal uptake and diffusion will increase the digital divide in Africa (Joubert et al., 2019). Umezuruike and Ngugi (2020) identified challenges that hinder the adoption of big data in the education system in sub-Saharan Africa and reached a similar conclusion: there has to be a tailored approach to the use of big data that benefits Africa's educational and developmental goals. Other scholars advancing similar arguments include Barakabitze et al. (2019), Blom et al. (2016), Cloete (2017), Jhurree (2005), Fomunyam (2020) and Moorosi et al. (2017). Marivate, Aghoghovwia, Ismail, Mahomed-Asmail and Steenhuisen (2021) probe how the 4IR will impact future academics and researchers. Other beneficial uses of technology are in relation to agriculture (Ponelis & Holmner, 2015; Access Partnership, 2018, pp. 9–10; Kim et al., 2020; Ly, 2021) and trade (Bankole et al., 2013), and how 3D printing could revolutionise food security (Fasogbon & Adebo, 2022) and social entrepreneurship (Schonwetter & Van Wiele, 2020). The adoption of smart city technology has also been considered for purposes of improving service delivery and economic and social conditions in Africa (Achieng et al., 2021; Echendu & Okafor, 2021; Woherem & Odedra, 2017). Another question surfaced in the literature is the extent to which technology can be harnessed to address climate change and, by doing so, secure environmental sustainability (Rutenberg et al., 2021). Ndemo and Weiss (2017) argue that the introduction of digital technologies has a significant impact on inequality and poverty and that this needs to be watched carefully as digital transformation progresses.
Specifically, they argue that to better understand how digital technologies will help transform societies, actions in economic, organisational, political, social and cultural environments provide a framework for analysing the evolutionary processes that digital technologies prompt or fail to trigger in African societies (Ndemo & Weiss, 2017, p. 341).

Chirwa and Ncube's edited volume (2023, pp. 1–2) argues that the internet, human rights and development in Africa are inextricably linked. This volume considers developmental aspects, human rights, international law, African regional and sub-regional law, commercial law, dispute resolution, intellectual property, criminal law and the law of evidence. The breadth of this engagement is anchored in a developmental framework, which asks how, if at all, such regulation impacts the attainment of national and continental developmental goals. Other scholars have considered this developmental framing with specific reference to AI (Smith & Neupane, 2018; Comninos et al., 2019). Similarly, Adams et al.'s (2021) Human rights and the Fourth Industrial Revolution in South Africa is an in-depth critique of the impact of 4IR on fundamental human rights and freedoms. Its chapters are entitled 'Development, Unemployment and Inequality', 'Data Governance: Privacy and Cybersecurity', 'Predictive Policing, Surveillance and Digital Justice', 'Bias, Discrimination and the Digital Divide', 'Internet Rights and Responsibilities' and 'The Way Forward: The Role of the South African Human Rights Commission'.

4.2 Of Neo-Colonialism and Decolonial Views

Several scholars contend that if the regulation of technology is not nuanced and appropriately calibrated, it will simply be another wave of neocolonial regulation. For instance, Marda (2019) posits that governance systems, safeguards and conversations around artificial intelligence are initiated in the Global North but the deployment and experimentation occur in the Global South, arguing that such an approach is dangerous as it is removed from the realities of the Global South. Therefore, the failure to question local needs and nuance in artificial intelligence issues amounts to modern colonialism and racism. It is proposed that there be a local, nuanced, granular, bottom-up understanding of issues at play when researchers and policymakers consider ways of dealing with artificial intelligence. Focusing on the fintech space, Campbell-Verduyn and Giumelli (2022, p. 536) seek to dispel the 'growing hype around blockchain as a (de)colonial technology' by contending that 'blockchain-based fintech are facilitating, rather than displacing, exclusionary practices of sanctioning underpinning the existing finance/security infrastructure'. Similarly, Everisto's (2021) The Fourth Industrial Revolution and the Recolonisation of Africa: The Coloniality of Data cautions against uncritical engagement with technologies and inappropriately calibrated regulatory frameworks. A significant aspect of decolonial views is associated with deliberate and critical engagement with gender aspects and feminist approaches (Peña & Varon, 2019).

4.3 The Regulation of Technology Will Save or Sink Us

Some literature on technology in Africa is premised on the promise of advancement that it offers, such as Ndung'u and Signe's (2019) consideration of whether 'the Fourth Industrial Revolution and digitisation will transform Africa into a global powerhouse'.
They note that the 4IR has substantial transformative potential for encouraging economic growth and structural transformation, fighting poverty and inequality, and increasing financial services and investment, and they realistically consider whether Africa will be left behind in the 4IR, as historically happened with past industrial revolutions. They conclude that the potential can be harnessed if African governments are proactive in positioning their economies, for instance by fixing the labour-skills mismatch and developing physical and digital infrastructure.

Similarly, others have considered the transformative potential of other technologies such as robotics (Opio, n.d.) and blockchain's benefits of easy access to markets, improvement of the business environment, facilitation of cross-border transactions and low costs of money remittances (Jepkoech & Anyembe, 2019, pp. 2–3). Romanello (2021) considers how blockchain technology can be used to solve problems and improve the quality of lives of people in Africa owing to its decentralised and transparent nature. Arguing that it can be used to reconfigure the way resources and services are allocated, secured and transferred, Romanello gives specific examples such as controlling corruption and documenting refugees. These are key elements of the African Union's Agenda 2063: The Africa We Want,3 as reflected in Aspirations 3 and 4, which aspire to a corruption-free Africa, and to a peaceful and secure continent where improved human security and social inclusion prevail, respectively. Scholars such as Gilwald (2019) caution against being caught up in the hype of the 4IR to the extent of overlooking the technological and developmental backlog that still confronts the continent. Building on the case made for the immense potential of these technologies, some scholars then argue that technology will save us if it is properly regulated, which requires that the legal implications be carefully considered in the construction of a tailor-made regulatory approach for African jurisdictions (Butcher et al., 2021; Wilhelm, 2019). Several others have written on the regulation of the blockchain and cryptocurrencies. The United Nations Economic Commission for Africa's (ECA) (2017) Blockchain Technology in Africa Draft Report4 explores blockchain technology and makes policy recommendations on using it in Africa for economic transformation. The study explores how the technology may be adopted across multiple sectors, including finance, education, land management, law, governance and health. Stolp, Perumall and Self (2018) consider such blockchain and cryptocurrency regulation in 20 African countries. The study outlines the regulatory environment, highlights whether any formal legal action has been taken and analyses the reception of the two technologies, mainly by central banks, in the assessed countries. A comparative assessment of how the regulators approach the two types of technology is also undertaken. Similarly, the importance of appropriately regulating cloud computing and the Internet of Things to secure societal benefit has been canvassed (Abubakar et al., 2014; International Telecommunication Union, 2012; Saint & Garba, 2016). The subtext of this theme is that appropriate regulation will enable technology to save us. In sharp contrast, if African states do not regulate appropriately, technology will sink us. For instance, Access Now5 reports that the internet shutdown in Tigray, Ethiopia, which had, by then, persisted for 787 days, is 'one of the longest uninterrupted internet shutdowns to have taken place during active conflict'. Further, it notes that internet shutdowns in election periods and around protests are quite common (p. 12). Arewa's Disrupting Africa: Technology, Law, and Development (2021) engages with the Nigerian state's efforts to quash or sanction demonstrations through the regulation of technologies, such as banning Twitter and cryptocurrencies.

3  See Agenda 2063: The Africa We Want. (2015). African Union Commission (pp. 1–20).
Retrieved from https://au.int/sites/default/files/documents/36204-doc-agenda2063_popular_version_en.pdf
4  See Blockchain Technology in Africa: Draft Report. (2017). United Nations Economic Commission for Africa (pp. 1–33). Retrieved from https://archive.uneca.org/sites/default/files/images/blockchain_technology_in_africa_draft_report_19-nov-2017-final_edited.pdf
5  See Access Now. (2022). Weapons of Control, Shields of Impunity: Internet Shutdowns in 2022 (p. 15). Retrieved from https://www.accessnow.org/cms/assets/uploads/2023/03/2022-KIO-Report-new.pdf

Such actions negatively affect the economy and, by extension, development, in addition to their direct impact on the freedoms of assembly and expression. Similarly, Ethiopia's internet shutdowns trample upon fundamental human rights, including freedom of expression and access to information (Ayalew, 2019). The following section on impact argues that African case law pertaining to internet shutdowns and Twitter bans has added to the global conceptualisation of the protection of the right to freedom of expression.

4.4 Privacy: Health Data, Biometric Data and Data Governance

The body of work on privacy discusses data in general, health data, biometric voter data and data governance. Abraha (2017), Bryant (2021), Makulilo (2020), Mutuku (2020) and Van der Merwe (2014) broadly examine whether African data protection laws are appropriately drafted, implemented and enforced. Onuoha (2019) conducts this analysis with specific reference to AI. Van der Merwe finds that the jurisdictional hurdles posed by the Internet require a rethinking of traditional legal solutions to keep up with rapid changes in information technology (Van der Merwe, 2014, p. 320). Van der Merwe also concludes that a multi-stakeholder collaboration among lawyers, the government and experts from information technology industries is required for effective data privacy law reform (Van der Merwe, 2014, p. 320). Abraha (2017) finds that Ethiopia's informal regulatory approach through internet filtering and surveillance of content lacks institutional transparency (Abraha, 2017, p. 302) and is disorganised and pervasive (Abraha, 2017, p. 303), leading to 'vague and privacy-unfriendly laws' (Abraha, 2017, p. 304) that create uncertainty in the law. Makulilo (2020) argues that Tanzania's current legal landscape does not sufficiently balance data privacy rights and 'the law enforcement quest for personal data' (Makulilo, 2020, p. 275). Some of the general literature on privacy focuses on the role of the African Union (AU) and its binding normative instrument, the Convention on Cyber Security and Personal Data Protection (the Malabo Convention),6 which regulates privacy but is not yet in force. An example is Abdulrauf's critique of the AU's performance in fostering cooperation to protect personal information, which argues that the AU is inactive and ought to be strengthened and, specifically, more proactive in promoting an 'internal process of norm-acceptance' by African states (Abdulrauf, 2020, p. 103). Mutuku (2020) posits that data protection laws in Africa, which are modelled on the European Union's General Data Protection Regulation (the GDPR),7 must be supported 'with regulations that are fit for the contexts in which they are being enforced' instead of being transplanted without nuance (Mutuku, 2020, p. 26). The use of ICT in health relies on health data acquisition to facilitate primary healthcare provision, for instance through managing patients remotely, thereby eliminating transportation costs (Adeola & Evans, 2019, p. 71; Holst et al., 2020; Olu et al., 2019). In relation to privacy and health data, Townsend (2022) considers how health data flows throughout Africa 'may be facilitated in the interests of public health during times of pandemic'.

6  African Union, African Union Convention on Cyber Security and Personal Data Protection, 27 June 2014 (not yet in force).
Retrieved from https://au.int/en/treaties/african-union-convention-cyber-security-and-personal-data-protection
7  Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (OJ L 119, 4 May 2016, pp. 1–88).

She finds that data sharing during a pandemic presents challenges, which for instance 'lie in the disparity between the evolving norms both internationally and regionally' (Townsend, 2022, p. 20). Donnelly (2022) discusses the legal framework (both statutory and common law) for technology in health in South Africa and argues that it needs to be developed to ensure proper oversight, that health practitioner-specific regulations need to be updated to factor in technological advancements, and that the framework needs to align with national digital strategies. First, Donnelly proposes that the regulatory framework for the oversight of software as a medical device needs to be updated to develop frameworks for adequately regulating the use of such new technologies. According to the University of the Witwatersrand, Johannesburg,8 Sekalala, Ndemo and Andanda, in a research project titled 'There is No App for This! Regulating the Migration of Health Apps in Sub-Saharan Africa', intend to examine how the collection and migration of the health data of people in developing countries can be better regulated to safeguard their privacy. The project is country-specific, focusing on Kenya, Uganda and South Africa. In most states, technology is deployed in the administration of elections. One such use is through biometric voter registration. Makulilo (2017) questions whether Tanzania's prevailing legal and regulatory frameworks on the protection of personal information adequately protect voters' personal information acquired during biometric voter registration. He concludes that they do not: although Tanzania's Elections Act of 2004 prohibits the disclosure of information about a voter, there is no specific data protection legislation in Tanzania, leaving a voter's personal data inadequately protected (Makulilo, 2017, pp. 210, 212). Marda (2019, p. 13) argues that data governance models must fit the contexts within which they will operate, some of which have underlying structural inequalities, particularly in developing countries. Alemany and Gurumurthy (2019) and Yonazi et al. (2012) echo the same views. Ndemo and Thegeya (2022, p. 2), however, argue that Africa requires a continental strategy as a starting point for formulating national data governance frameworks to avoid improper use of data generated in Africa. Policy briefs have also discussed other topics in relation to data governance, including financial data, open data regimes and cyber security (Ncube & Hlomani, 2022; Ndemo & Mkalama, 2022). Marivate (2022) highlights the practical impact of data governance on data science.

4.5 AI, Bias and Ethical Considerations

Scholars have considered the ethical challenges presented by AI and identified bias as one of the key areas of concern in the African context, as is the case globally (Alemany & Gurumurthy, 2019; Ormond, 2020, 2022; Peña & Varon, 2019; Vernon, 2019). They contend that this is because AI algorithms are trained on data that is collected mainly in the West and from China and does not reflect the African experience (Ormond, 2020, p. 6, 2022, p. 5). To avoid bias, they argue for the adoption of a policy that considers the appropriateness or desirability of AI in certain social domains and for the introduction of legislation that encourages the fair collection of the data necessary for training AI algorithms (Ormond, 2020, pp. 9–10, 2022, p. 10).

8  See University of the Witwatersrand, Johannesburg.
(2022, 23 February). Tackling how data from your Health App is used, General news section. Retrieved from https://www​.wits​.ac​.za​/news​/latest​ -news​/general​-news​/2022​/2022​- 02​/tackling​-how​-data​-from​-your​-health​-app​-is​-used​.html

States are often more focused on 'the hyperbolic discourses that accompany the opportunity rhetoric' of AI decision-making and are less alert to the attendant risks such as error, bias or discrimination (Alemany & Gurumurthy, 2019, p. 92). Accordingly, new regulatory approaches must focus on how to determine liability and accountability for AI decision-making, and on establishing mechanisms for redress (Alemany & Gurumurthy, 2019, p. 92).

4.6 Automation and Its Impact on Employment in Africa

Allard (2015), Suri and Udry (2022), Calitz, Poisat and Cullen (2017), Famubode (2018), Le Roux (2018), Millington (2017), Milner and Yayboke (2021), Ndung'u and Signe (2020), Vernon (2019), and Zeufack et al. (2021) have considered the implications of new technologies for jobs in developing countries. In terms of the quality of human capital required, Allard (2015, p. 142) finds that 'higher education could accelerate the rate of technology catch-up in Africa and boost per-capita incomes'. Nonetheless, Millington (2017, p. 7) finds that digitisation will widen income inequality, with the less educated and less well-off being most vulnerable to technological changes in the labour market. To overcome the problem, Le Roux (2018, p. 509), Famubode (2018, pp. 5–6) and Milner and Yayboke (2021, p. 13) argue that education systems in developing countries, particularly in sub-Saharan Africa, must adapt to technological changes to develop the education and skill sets required by the new job market introduced by automation. Calitz et al. (2017, pp. 7, 8), Vernon (2019) and Zeufack et al. (2021, p. 61) argue that digital skills are a key component of human capital if the benefits of technology in developing countries are to be realised. Ndung'u and Signe (2020, p. 65) find that African governments are reluctant to support technologies that threaten low-skilled jobs, which are most common in Africa, and this has the effect of 'constraining participation in the 4IR to economies with relevant skills'. Milner and Yayboke (2021, p. 37), however, find that jobs will not disappear overnight in developing countries since 'the infrastructure may not yet support radical technological adoption of some of these evolutionary industries'.

Africa has the highest share of employment in agriculture at about 50 per cent (Suri & Udry, 2022, p. 33). Suri and Udry (2022, pp. 34–35) find that technological changes like the 'mechanisation of farm activities' improve labour and agricultural productivity, which in turn leads to the reduction of poverty, which remains prevalent in Africa. Zeufack et al. (2021, p. 54) ask what the future of work in Africa will be as a result of automation. They argue that the 'future landscape of jobs in Sub-Saharan Africa is likely positive' since most African economies are largely informal and retaining their largely low-skilled workers would be preferable to investing in automation.

4.7 Technology and Gender

Adera et al. (2014), Melhem et al. (2009), Olatokun (2008) and Wakunuma (2013) have considered the role of gender differences in accessing and benefiting from technology. Olatokun finds that gender issues must be incorporated in national ICT policies in Africa so that women too may access and benefit from technology.
Policy changes would be necessary to overcome the existing social and cultural structures that have resisted gender equality, a resistance that extends to women's access to technology (Adera et al., 2014, p. 238; Melhem et al., 2009, p. 2; Wakunuma, 2013, p. 134). Other scholars who have considered the interface between technology and gender are Ahmed (2021) and Gwagwa et al. (2020). As noted above, gender considerations have also been discussed in the context of decoloniality and feminist approaches (Peña & Varon, 2019).

5. DISCUSSION: EVALUATION, FUTURE DIRECTION AND IMPACT

The literature on law and tech in Africa is well-established and rich in its coverage. The leading general texts on this area, Papadopoulos and Snail ka Mtuze (2022) and van der Merwe et al. (2022), are now in their fourth and third editions, respectively, and the other issue-specific literature is also well-established, as shown above. The questions that have been posed in the literature have been highlighted in the thematic focus section above. They can be categorised into (a) emulation evaluation, (b) context-specific regulatory and policy questions, (c) theoretical literature, and (d) forward-looking literature. The emulation evaluations hold up a global or international standard and ask whether a state's laws meet that standard. In other words, how good is that state at emulating these standards? For example, privacy scholarship often evaluates whether a given state's laws meet GDPR standards. There is value in these studies when the international standard being held up is not contentious or can readily be transplanted to other contexts. However, where the standard is one that requires nuancing or calibration for local contexts, emulation evaluation needs to be carefully done so that it does not make a case for inappropriate transplantations. This is more appropriately done by scholarship that asks context-specific regulatory and policy questions. It is enhanced by scholarship that has a sound theoretical basis, for instance, scholarship grounded in development or perceived through a decolonial lens, of the type discussed above. Finally, because law and technology is a dynamic field driven by technological advancements, scholarship that is forward-looking in its consideration of emerging technology is quite valuable. The focus of the literature has indeed been driven by technological advancements, as shown, for example, by the growth in texts addressing the 4IR in the last decade. Accordingly, there has been an impressive growth in research output on law and technology in the last decade.

We anticipate future growth in the literature due to recently completed and continuing postgraduate research on law and technology (for example, Adams, 2021) and the establishment of several IT and law research centres in law schools on the continent, such as the Intaka Centre for Law and Technology at the University of Cape Town in South Africa9 and the Centre for Intellectual Property and Information Technology Law (CIPIT) at Strathmore University in Kenya.10 Other centres that are not focused solely on law and regulation will no doubt also house some of the law and technology research. An example is the recently launched AI Institute of South Africa (AIISA) (Michalsons, 2022). Another example is the Distributed Artificial Intelligence Research Institute11 founded by Timnit Gebru, who is well known for

9  https://lawtechlab.africa/about/
10  https://cipit.strathmore.edu/
11  See Distributed Artificial Intelligence Research Institute. (n.d.). About. Retrieved from https://www.dair-institute.org/about

her views on bias in AI, which culminated in the termination of her employment at Google.12 There are also networks and think tanks that drive much of the scholarship, for example, the African Law and Technology Network,13 the African Economic Research Consortium (AERC)14 and Research ICT Africa (RIA).15 AERC recently completed a research project on data governance which engaged several aspects of it on the continent, as has been outlined above. Similarly, RIA is involved in a series of research projects on topics including the African Data Policy framework, the African Observatory on Responsible Artificial Intelligence and the Africa Just AI Project.16 A significant angle that is starting to be interrogated is the regulation of technology and digital trade in the context of the African Continental Free Trade Area (AfCFTA).17 The negotiation of a Protocol on Digital Trade under the AfCFTA Agreement is underway and this requires reflection on how best to regulate the use of technology in this context (Banga et al., 2021).

In summary, as the discussion above shows, the literature has centred on the technologies that were available and emerging at the time of publication. For instance, attention turned to 4IR technologies in the second decade of the 21st century when these technologies were becoming prominent. Prior to that, the focus was on predecessor technologies. An evaluation of the impact of the above-summarised scholarship would entail tracing its influence on policy, regulation and case law. These links are not always readily made, and it is only in case law that a scholar may expressly be quoted. Further, scholarship is often ahead of the curve, and it takes a significant amount of time before it impacts policy and regulation. However, in some instances policymakers and regulators call for comments and presentations by scholars as part of their policymaking and legislative processes, and in that context scholarship is readily linked to those processes. A few examples will make the point. On the international norm-setting level, the World Intellectual Property Organisation18 has over the last several years convened conversations on AI and IP policies where scholars, practitioners and other stakeholders have made both written and oral submissions on what they consider to be the most appropriate regulatory approach. Several states have undertaken a similar exercise, such as the United Kingdom's public consultations on AI and IP in 2021.19 Another example is the South African Presidential Commission on the Fourth Industrial Revolution's (2020) Summary Report & Recommendations.20 As mentioned above, the African Union is currently

12  Distributed Artificial Intelligence Research Institute. (n.d.). About. Retrieved from https://www.dair-institute.org/about
13  https://alt-network.com/
14  https://aercafrica.org/about-us/
15  See Research ICT Africa. (n.d.). Current Projects. Retrieved from https://researchictafrica.net/category/current-ria-projects/
16  See Research ICT Africa. (n.d.). Current Projects. Retrieved from https://researchictafrica.net/category/current-ria-projects/
17  See https://au-afcfta.org/
18  See World Intellectual Property Organisation. (n.d.). AI and IP Policy: The WIPO Conversation. Retrieved from https://www.wipo.int/about-ip/en/frontier_technologies/ai_and_ip_policy.html
19  See Gov.uk. (2021, 23 March). Consultation outcome: Government response to call for views on artificial intelligence and intellectual property; Gov.uk. (2022, 28 June). Consultation outcome: Artificial Intelligence and Intellectual Property: copyright and patents.
20  See Summary Report & Recommendations. (2020). South African Presidential Commission on the Fourth Industrial Revolution (pp. 121–347). Retrieved from https://www.ellipsis.co.za/wp-content/uploads/2020/10/201023-Report-of-the-Presidential-Commission-on-the-Fourth-Industrial-Revolution.pdf

considering an African Data Policy framework, upon which some literature has focused. Whilst the global debate on whether AI can be considered an inventor under patent law proceeds, the South African patent office has granted a patent for Stephen Thaler's DABUS inventions. This development has attracted scholarly attention as it goes against the grain of what is solidifying as the approach elsewhere (Oriakhogba, 2021a, 2021b). Internet bans, an issue that implicates human rights, have been canvassed by scholars as set out above and have also been pronounced upon by courts. The Community Court of Justice of the Economic Community of West African States (ECOWAS) held that Nigeria's seven-month-long Twitter ban was unlawful because it 'violated the applicant's right to the enjoyment of freedom of expression as the suspension of Twitter was not sanctioned by any law or order of a competent court'.21 Nigeria was ordered 'to guarantee non-repetition of the ban on Twitter' (para 102) and to ensure that its domestic laws and policies comply with Article 9 of the African Charter on Human and Peoples' Rights (ACHPR)22 and Article 19 of the International Covenant on Civil and Political Rights (ICCPR).23 The court declined the applicant's request for reparations as it found that loss or damage were not proven (para 98). This case is a valuable addition to case law in its discussion of the freedom of expression as provided for in both the ACHPR and ICCPR. Its relevance was underscored by the participation of three amicus curiae in the matter, namely, Access Now, the Electronic Frontier Foundation, and the Open Net Association. In 2020, the same court held that Togo's internet ban was unlawful, for the same reasons.24 Togo was also ordered to guarantee non-repetition of the ban, to ensure that its domestic laws complied with international human rights instruments and to pay each of the applicants 2 million CFA as compensation for the violation of their right to freedom of expression. The Togolese matter also attracted numerous amicus curiae from around the world. Both cases were cited, with approval, in the 2022 Annual Report of the United Nations High Commissioner for Human Rights entitled Internet Shutdowns: Trends, Causes, Legal Implications and Impacts on a Range of Human Rights.25 This shows that African case law is positively influencing the global understanding of human rights.

Some of the scholarship we surveyed for this chapter, such as policy briefs, was targeted at policymakers and regulators. However, the bulk of it was conceptualised to contribute to academic debates or to add to the body of knowledge as conventional academic literature such as books and journal articles. Another subset of the literature consisted of conference papers,
21  The Registered Trustees of the Socio-economic Rights & Accountability Project (SERAP) and others v. Federal Republic of Nigeria Suit No: ECW/CCJ/APP/23; 24; 26&29/21, Judgment No: ECW/CCJ/JUD/40/22 para 76.
22  Organization of African Unity (OAU), African Charter on Human and Peoples' Rights (Banjul Charter), 27 June 1981, CAB/LEG/67/3 rev. 5, 21 I.L.M. 58 (1982), available at: https://www.refworld.org/docid/3ae6b3630.html
23  UN General Assembly, International Covenant on Civil and Political Rights, 16 December 1966, United Nations, Treaty Series, vol. 999, p. 171.
24  Amnesty International Togo and others v the Togolese Republic Suit No: ECW/CCJ/APP/61/18, Judgment No: ECW/CCJ/JUD/09/20.
25  See Report of the Office of the United Nations High Commissioner for Human Rights (2022). Internet Shutdowns: Trends, Causes, Legal Implications and Impacts on a Range of Human Rights (A/HRC/50/55) (pp. 13–14). Retrieved from https://www.ohchr.org/en/press-releases/2022/06/internet-shutdowns-un-report-details-dramatic-impact-peoples-lives-and-human

which typically represent scholars conversing with each other, after which the work is usually converted into a conventional academic text. There have been many law and tech conferences on the continent, which can be taken to predict further literature. One major determinant of impact is visibility, so the discoverability of voices speaking on law and technology in Africa is a significant consideration. Work that is published open-access has a greater chance of being read and making an impact; hence, it is usually the preferred outlet for researchers. Indeed, most publicly funded research has an open-access mandate. For example, South Africa's National Research Foundation, which funds a significant amount of research, has an open-access mandate. We postulate that African scholars seeking to reach an African readership are unlikely to publish in paywalled books and journals to which their intended readership has no access. We cannot proffer definitive reasons for why law and technology literature from Africa is not widely known or read extensively elsewhere, because we did not canvass or survey readers, but we can postulate that part of the answer may be that Africa-specific enquiries do not interest a broad readership. Considering the wealth of perspectives and themes we have set out above, this is lamentable. This chapter is our contribution to highlighting the wealth of this scholarship.

6. CONCLUSION

In closing, we reiterate that there are indeed voices speaking on law and technology in Africa, a region not typically associated with the field. Due to its political and socio-economic context and developmental imperatives, there is a strong thematic focus on development, social justice and human rights. The historical legacy of colonialism has sparked engagement with neo-colonialism and decoloniality with an emphasis on feminist perspectives. It is worth noting that there are several prominent female scholars writing on law and technology as it pertains to Africa from these theoretical perspectives, including Arewa (2021) and Adams et al. (2021). The main theoretical thrust is that regulation of technology needs to be carefully nuanced for African contexts to ensure that historical extractive patterns are severed and that legal frameworks do not exacerbate inequality or deepen the digital divide. For instance, Arewa and Fakolade (2022, p. 297) rightly argue that Africa cannot simply transplant the regulatory schemes for drones and automation from elsewhere, using Nigeria's adoption of 'highly bureaucratic' and costly processes as a case in point. In contrast, they show that Kenya has nuanced its approach by differentiating between types of drones. Legal positions taken in more developed contexts are not appropriate for Africa or other developing contexts. Accordingly, the impact of technology on employment specifically, and its social, economic and cultural impact more generally, has been probed. This includes a consideration of its impact on the informal sector and the consideration of gender and technology. Human rights considerations are key, considering how autocratic governments have sought to shut down the internet or prohibit the use of social media sites and cryptocurrencies in a bid to clamp down on political dissent. The range of topics and research questions posed by the literature is admirable in its breadth and depth. Topics such as bias and ethics in relation to AI, privacy and health data, privacy and biometric voter registration, and data governance have been considered. A significant angle that is starting to be interrogated is the regulation of technology and digital trade in the context of the AfCFTA, in view of the negotiation of a Protocol on Digital Trade under the AfCFTA Agreement.

Due to the strong growth of think tanks, networks and research centres, within and outside universities, that focus on technology generally and on law and advanced technologies specifically, it is reasonable to predict a steady increase in the literature in the future. A scenario-building exercise by the OpenAIR partnership predicted that wireless engagement was a probable trajectory for Africa. This would be an Africa where 'African enterprise is interconnected with the global service-oriented economy, young business leaders form a vocal middle class, and citizens hold governments accountable. Except for … uneducated or under-resourced individuals who cannot conform to homogenous technical, legal and socioeconomic standards' (Elahi et al., 2013, p. 77). The scholarship discussed in this chapter grapples with all aspects of this scenario in a bid to amplify the former and to minimise the latter. Perhaps we can summarise it by saying that we seek equitable wireless engagement for Africa.

REFERENCES

Abdulrauf, L.A. (2021). Giving 'teeth' to the African Union towards advancing compliance with data privacy norms. Information & Communications Technology Law, 30(2), 87–107. Retrieved from https://doi.org/10.1080/13600834.2021.1849953.
Abraha, H.H. (2017). Examining approaches to internet regulation in Ethiopia. Information & Communications Technology Law, 26(3), 293–311. Retrieved from https://doi.org/10.1080/13600834.2017.1374057.
Abubakar, A.D., Bass, J.M. & Allison, I. (2014). Cloud computing: Adoption issues for sub-Saharan African SMEs. Electronic Journal of Information Systems on Developing Countries, 62(1), 1–17. Retrieved from https://doi.org/10.1002/j.1681-4835.2014.tb00439.x.
Achieng, M., Ogundaini, O., Makola, D. & Iyamu, T. (2021). The African perspective of a smart city: Conceptualisation of context and relevance. IST-Africa 2021 Conference Proceedings, pp. 1–9. Retrieved from https://www.researchgate.net/publication/354604057_The_African_Perspective_of_a_Smart_City_Conceptualisation_of_Context_and_Relevance.
Adams, R., Gastrow, M., Olorunju, N., Gaffley, M., Ramkissoon, Y., Van Der Berg, S., Adams, F. & Thipanyane, T. (2021). Human Rights and the Fourth Industrial Revolution in South Africa. HSRC Press. Retrieved from https://www.hsrcpress.ac.za/books/human-rights-and-the-fourth-industrial-revolution-in-south-africa.
Adeola, O. & Evans, O. (2019). Digital health: ICT and health in Africa. Actual Problems of Economics, 10(208), 66–83. Retrieved from https://www.researchgate.net/publication/331407717_Digital_health_ICT_and_health_in_Africa.
Adera, E.O., Waema, T.M., May, J., Mascarenhas, O. & Diga, K. (2014). ICT Pathways to Poverty Reduction: Empirical Evidence from East and Southern Africa. Rugby, United Kingdom: Practical Action Publishing. Retrieved from https://idl-bnc-idrc.dspacedirect.org/bitstream/handle/10625/52420/IDL-52420.pdf.
Ahmed, S. (2021, June 21–23). A gender perspective on the use of artificial intelligence in the African fintech ecosystem: Case studies from South Africa, Kenya, Nigeria, and Ghana. Paper presented at International Telecommunications Society (ITS) 23rd Biennial Conference – Digital Societies and Industrial Transformations: Policies, Markets, and Technologies in a Post-Covid World. Retrieved from https://researchictafrica.net/wp/wp-content/uploads/2021/07/Final-Revised-AI4D-Gender-ITS-conference-paper.pdf.
Alemany, C. & Gurumurthy, A. (2019). Governance of data and artificial intelligence. In Spotlight on Sustainable Development 2019: Reshaping governance for sustainability – Transforming institutions – shifting power – strengthening rights (pp. 86–95). Global Civil Society Report on the 2030 Agenda and the SDGs. Retrieved from https://www.2030spotlight.org/sites/default/files/spot2019/Spotlight_Innenteil_2019_web_gesamt.pdf.
Allard, G. (2015). Science and technology capacity in Africa: A new index. Journal of African Studies and Development, 7(6), 137–147. Retrieved from https://wiki.lib.sun.ac.za/images/0/01/Article1434799458_Allard.pdf.

Aruleba, K. & Jere, N. (2022). Exploring digital transforming challenges in rural areas of South Africa through a systematic review of empirical studies. Scientific African, 16, 1–13. Retrieved from https://doi.org/10.1016/j.sciaf.2022.e01190.
Anyetimi, D.T. & Burgess, J. (2019). Is the fourth industrial revolution relevant to sub-Sahara Africa? Technology Analysis & Strategic Management, 31(6), 641–652. Retrieved from https://doi.org/10.1080/09537325.2018.1542129.
Arewa, O.B. (2021). Disrupting Africa: Technology, Law, and Development. Cambridge, United Kingdom: Cambridge University Press.
Arewa, O.B. & Fakolade, A. (2022). Law and the Regulation of New Technologies in Africa. In M. Ndulo & C. Emezien (Eds.). Routledge Handbook on African Law 2021 (pp. 293–308). Abingdon: Routledge.
Arksey, H. & O'Malley, L. (2005). Scoping studies: towards a methodological framework. International Journal of Social Research Methodology, 8(1), 19–32. Retrieved from https://doi.org/10.1080/1364557032000119616.
Artificial Intelligence for Africa: An Opportunity for Growth, Development, and Democratisation. (2018). Access Partnership (pp. 1–46). Retrieved from https://www.up.ac.za/media/shared/7/ZP_Files/ai-for-africa.zp165664.pdf.
Ayalew, Y.E. (2019). The Internet shutdown muzzle(s) freedom of expression in Ethiopia: competing narratives. Information & Communications Technology Law, 28(2), 208–224. Retrieved from https://doi.org/10.1080/13600834.2019.1619906.
Bankole, F.O., Osei-Bryson, K. & Brown, I. (2013). The impact of information and communications technology infrastructure and complementary factors on intra-African trade. Information Technology for Development, 21(1), 12–28. Retrieved from http://dx.doi.org/10.1080/02681102.2013.832128.
Barakabitze, A.A., Lazaro, A.W., Ainea, N., Mkwizu, M.H., Maziku, H., Matofali, A.X., Iddi, A. & Sanga, C. (2019). Transforming African Education Systems in Science, Technology, Engineering, and Mathematics (STEM) Using ICTs: Challenges and Opportunities. Education Research International (pp. 1–29). Retrieved from https://doi.org/10.1155/2019/6946809.
Benyera, E. (2021). The Fourth Industrial Revolution and the Recolonisation of Africa: The Coloniality of Data. Abingdon: Routledge. Retrieved from https://library.oapen.org/handle/20.500.12657/48368.
Blom, A., Lan, G. & Adil, M. (2016). Sub-Saharan African Science, Technology, Engineering, and Mathematics Research: A Decade of Development. World Bank Study. Washington, DC. Retrieved from http://hdl.handle.net/10986/23142.
Borghi, M. & Brownsword, R. (Eds.). (2022). Law, Regulation and Governance in the Information Society: Informational Rights and Informational Wrongs. Abingdon: Routledge.
Braun, V. & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. Retrieved from https://doi.org/10.1191/1478088706qp063oa.
Brownsword, R. & Yeung, K. (Eds.). (2008). Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes. London, United Kingdom: Bloomsbury Publishing.
Brownsword, R., Scotford, E. & Yeung, K. (2017). Law, Regulation, and Technology: The Field, Frame, and Focal Questions. In R. Brownsword, E. Scotford & K. Yeung (Eds.). The Oxford Handbook of Law, Regulation and Technology, Oxford Handbooks (pp. 3–38). Oxford, United Kingdom: Oxford University Press.
Brownsword, R., Scotford, E. & Yeung, K. (Eds.). (2017). The Oxford Handbook of Law, Regulation and Technology, Oxford Handbooks. Oxford, United Kingdom: Oxford University Press.
Brownsword, R. & Goodwin, M. (2012). Law and the Technologies of the Twenty-First Century: Text and Materials. Cambridge, United Kingdom: Cambridge University Press.
Brenner, S. (2007). Law in an Era of Smart Technology. New York, NY: Oxford University Press. Retrieved from https://iuristebi.files.wordpress.com/2011/07/law-in-an-era-of-smart-technology.pdf.
Bryant, J. (2021). Africa in the information age: Challenges, opportunities, and strategies for data protection and digital rights. Stanford Technology Law Review, 24(2), 389–439. Retrieved from https://law.stanford.edu/wp-content/uploads/2021/05/BryantAfricaInTheInformationAge.pdf.
Butcher, N., Wilson-Strydom, M. & Baijnath, M. (2021). Artificial Intelligence Capacity in Sub-Saharan Africa – Compendium Report. Artificial Intelligence Report for Africa. Retrieved from https://idl-bnc-idrc.dspacedirect.org/bitstream/handle/10625/59999/27ea1089-760f-4136-b637-16367161edcc.pdf?sequence=1&isAllowed=y.

Calitz, A.P., Poisat, P. & Cullen, M. (2017). The future African workplace: The use of collaborative robots in manufacturing. SA Journal of Human Resource Management, 15, 1–11. Retrieved from https://doi.org/10.4102/sajhrm.v15i0.901.
Campbell-Verduyn, M. & Giumelli, F. (2022). Enrolling into exclusion: African decolonial ambitions in an evolving finance/security infrastructure. Journal of Cultural Economy, 15(4), 524–543. Retrieved from https://doi.org/10.1080/17530350.2022.2028655.
Chirwa, D. & Ncube, C.B. (Eds.). (2023). The Internet, Development, Human Rights and the Law in Africa. Abingdon: Routledge.
Clarke, V. & Braun, V. (2013). Teaching thematic analysis: Overcoming challenges and developing strategies for effective learning. Psychologist, 26(2), 1–13. Retrieved from https://uwe-repository.worktribe.com/output/937596/teaching-thematic-analysis-overcoming-challenges-and-developing-strategies-for-effective-learning.
Cloete, A.L. (2017). Technology and education: Challenges & opportunities. Theological Studies, 73(4), 1–7. Retrieved from https://doi.org/10.4102/hts.v73i4.4589.
Comninos, A., Muller, E.S. & Mutung'u, G. (2019). Artificial intelligence for sustainable human development. In A. Finlay & L. Nordstrom (Eds.). Artificial Intelligence: Human Rights, Social Justice and Development. Global Information Society Watch 2019 (pp. 47–52). Retrieved from https://giswatch.org/sites/default/files/gisw2019_web_intro_0.pdf.
De Bruyne, J. & Vanleenhove, C. (Eds.). (2022). Artificial Intelligence and the Law. Cambridge, United Kingdom: Intersentia.
Donnelly, D. (2022). First do no harm: Legal principles regulating the future of artificial intelligence in health care in South Africa. Potchefstroom Electronic Law Journal, 25(1), 1–43. Retrieved from https://doi.org/10.17159/1727-3781/2022/v25i0a11118.
Easterbrook, F.H. (1996). Cyberspace and the Law of the Horse. University of Chicago Legal Forum (pp. 207–216). Retrieved from https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=2147&context=journal_articles.
Echendu, A.J. & Okafor, P.C.C. (2021). Smart city technology: a potential solution to Africa's growing population and rapid urbanization. Development Studies Research, 8(1), 82–93. Retrieved from https://doi.org/10.1080/21665095.2021.1894963.
Elahi, S. et al. (2013). Knowledge and Innovation in Africa: Scenarios for the Future. Retrieved from https://openair.africa/wp-content/uploads/2013/01/Knowledge-Innovation-Africa-Scenarios-for-Future.pdf.
Famubode, V. (2018). Rising Automation in Sub-Saharan Africa: Harnessing Its Opportunities through Public Policy. Retrieved from https://ssrn.com/abstract=3154359.
Fasogbon, B.M. & Adebo, O.A. (2022). A bibliometric analysis of 3D food printing research: A global and African perspective. Future Foods, 6, 1–13. Retrieved from https://doi.org/10.1016/j.fufo.2022.100175.
Fomunyam, K.G. (2020). Theorising machine learning as an alternative pathway for higher education in Africa. International Journal of Education and Practice, 8(2), 268–277. Retrieved from https://archive.conscientiabeam.com/index.php/61/article/view/640/953.
Gadzala, A. (2018). 3D Printing: Shaping Africa's Future. Atlantic Council, Africa Centre (pp. 1–14). Retrieved from https://www.atlanticcouncil.org/wp-content/uploads/2019/08/3D_Printing_Africa_WEB.pdf.
Gadzala, A. (2018a). Coming to Life: Artificial Intelligence in Africa. Atlantic Council, Africa Centre (pp. 1–10). Retrieved from https://www.atlanticcouncil.org/wp-content/uploads/2019/09/Coming-to-Life-Artificial-Intelligence-in-Africa.pdf.
Gifford, D.J. (2007). Law and technology: Interactions and relationships. Minnesota Journal of Law, Science & Technology, 8, 571–587. Retrieved from https://scholarship.law.umn.edu/faculty_articles/324.
Gilwald, A. (2019). South Africa is Caught in the Global Hype of the Fourth Industrial Revolution. Retrieved from https://theconversation.com/south-africa-is-caught-in-the-global-hype-of-the-fourth-industrial-revolution-121189.

Gwagwa, A., Kraemer-Mbula, E., Rizk, N., Rutenberg, I. & De Beer, J. (2020). Artificial intelligence (AI) deployments in Africa: Benefits, challenges and policy dimensions. The African Journal of Information and Communication, 26, 1–28. Retrieved from https://doi.org/10.23962/10539/30361.
Hildebrandt, M. (2016). Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology. Cheltenham, United Kingdom: Edward Elgar.
Holst, C., Sukums, F., Radovanovic, D., Ngowi, B., Noll, J. & Winkler, A.S. (2020). Sub-Saharan Africa – the new breeding ground for global digital health. The Lancet Digital Health, 2(4), e160–e162. Retrieved from https://doi.org/10.1016/S2589-7500(20)30027-3.
Jepkoech, J. & Shibwabo, C.A. (2019). Implementation of blockchain technology in Africa. European Journal of Computer Science and Information Technology, 7(4), 1–4. Retrieved from https://www.eajournals.org/wp-content/uploads/Implementation-of-Blockchain-Technology-in-Africa.pdf.
Jhurree, V. (2005). Technology integration in education in developing countries: Guidelines to policy makers. International Education Journal, 6(4), 467–483. Retrieved from https://files.eric.ed.gov/fulltext/EJ855000.pdf.
Joubert, A., Murawski, M. & Bick, M. (2019). Big Data Readiness Index – Africa in the Age of Analytics. 18th Conference on e-Business, e-Services and e-Society (I3E) (pp. 101–112). Trondheim, Norway: Hal Open Science. Retrieved from https://hal.inria.fr/hal-02510093/document.
Kim, J., Shah, P., Gaskell, J.C., Prasann, A. & Luthra, A. (2020). Scaling Up Disruptive Agricultural Technologies in Africa. International Development in Focus. Washington, DC: World Bank. Retrieved from https://openknowledge.worldbank.org/bitstream/handle/10986/33961/9781464815225.pdf.
Kohl, U. & Charlesworth, A. (2016). Information Technology Law. London: Routledge.
Le Roux, D. (2018). Automation and employment: The case of South Africa. African Journal of Science, Technology, Innovation and Development, 10(4), 507–517. Retrieved from https://doi.org/10.1080/20421338.2018.1478482.
Lessig, L. (2003). Law regulating code regulating law. Loyola University Chicago Law Journal, 35(1), 1–14. Retrieved from https://dash.harvard.edu/bitstream/handle/1/12912675/Law%20Regulating%20Code%20Regulating%20Law.pdf.
Lloyd, I.J. (2020). Information Technology Law. Oxford, United Kingdom: Oxford University Press.
Ly, R. (2021). Machine Learning Challenges and Opportunities in the African Agricultural Sector – A General Perspective. Retrieved from https://www.researchgate.net/publication/353209012_Machine_Learning_Challenges_and_Opportunities_in_the_African_Agricultural_Sector_--_A_General_Perspective.
Makulilo, A. (2017). Rebooting democracy? Political data mining and biometric voter registration in Africa. Information & Communications Technology Law, 26(2), 198–212. Retrieved from https://doi.org/10.1080/13600834.2017.1321097.
Makulilo, B.A. (2020). Analysis of the regime of systematic government access to private sector data in Tanzania. Information & Communications Technology Law, 29(2), 250–278. Retrieved from https://doi.org/10.1080/13600834.2020.1741156.
Marda, V. (2019). Introduction. In A. Finlay & L. Nordstrom (Eds.). Artificial Intelligence: Human Rights, Social Justice and Development. Global Information Society Watch 2019 (pp. 9–13). Retrieved from https://giswatch.org/sites/default/files/gisw2019_web_intro_0.pdf.
Marivate, V., Aghoghovwia, P., Ismail, Y., Mahomed-Asmail, F. & Steenhuisen, S.L. (2021). The Fourth Industrial Revolution – what does it mean to our future faculty? South African Journal of Science, 117(5/6), 1–3. Retrieved from https://doi.org/10.17159/sajs.2021/10702.
Marivate, V. (2022). More Than Just a Policy – Day to Day Effects of Data Governance on the Data Scientist. African Research Consortium. Retrieved from http://publication.aercafricalibrary.org/handle/123456789/3322.
Martin-Bariteau, F. & Scassa, T. (Eds.). (2021). Artificial Intelligence and the Law in Canada. Toronto, Canada: LexisNexis.
Mazibuko-Makena, Z. & Kraemer-Mbula, E. (Eds.). (2021). Project MUSE – Leap 4.0. African Perspectives on the Fourth Industrial Revolution. Johannesburg, South Africa: Mapungubwe Institute for Strategic Reflection.

Melhem, S., Morrell, C. & Tandon, N. (2009). Information and Communication Technologies for Women's Socioeconomic Empowerment. World Bank Working Paper No. 176. Washington, DC. Retrieved from https://openknowledge.worldbank.org/handle/10986/5935.
Michalsons. (2022, November 30). The Department of Communications and Digital Technologies (DCDT), TUT and UJ will be launching the Artificial Intelligence Institute of South Africa on 30 November 2022 at the Johannesburg Business School. Retrieved from https://www.michalsons.com/focus-areas/artificial-intelligence-law/ai-institute-of-south-africa-aiisa.
Moorosi, N., Thinyane, M. & Marivate, V. (2017). A Critical and Systemic Consideration of Data for Sustainable Development in Africa. In J. Choudrie, M. Islam, F. Wahid, J. Bass & J. Priyatma (Eds.). Information and Communication Technologies for Development, 2017. IFIP Advances in Information and Communication Technology (Vol. 504, pp. 232–241). Cham: Springer. Retrieved from https://doi.org/10.1007/978-3-319-59111-7_20.
Moses, L.B. (2011). Agents of change: How the law 'copes' with technological change. Griffith Law Review, 20(4), 763–794. Retrieved from https://doi.org/10.1080/10383441.2011.10854720.
Millington, K.A. (2017). How changes in technology and automation will affect the labour market in Africa. Helpdesk Report, K4D. Retrieved from https://gsdrc.org/wp-content/uploads/2017/10/Impact-of-automation-on-jobs-in-Africa.pdf.
Milner, A. & Yayboke, E. (Eds.). (2019, May). Beyond Technology: The Fourth Industrial Revolution in the Developing World. A Report for the CSIS Project on Prosperity and Development. Center for Strategic & International Studies. Retrieved from https://csis-website-prod.s3.amazonaws.com/s3fs-public/publication/190520_Runde%20et%20al_FourthIndustrialRevolution_WEB.pdf.
Murray, A. (2019). Information Technology Law: The Law and Society. Oxford, United Kingdom: Oxford University Press.
Mutuka, L. (2020). Big data in Africa: Mapping emerging use cases of big data and related technologies. Local Development Research Institute. Retrieved from https://www.developlocal.org/wp-content/uploads/2020/07/BD4D-report-FINAL-b.pdf.
Ncube, C.B. & Hlomani, H. (2022). Data regulation in Africa: Free flow of data, open data regimes and cyber security. African Research Consortium. Retrieved from http://publication.aercafricalibrary.org/handle/123456789/3321.
Ncube, C.B. et al. (2023). Artificial Intelligence and the Law in Africa. Cape Town: LexisNexis (forthcoming).
Ndemo, B. & Weiss, T. (2017). Making sense of Africa's emerging digital transformation and its many futures. Africa Journal of Management, 3(3–4), 328–347. Retrieved from https://doi.org/10.1080/23322373.2017.1400260.
Ndemo, B. & Mkalama, B. (2022). Digitalization and financial data governance in Africa: Challenges and opportunities. African Research Consortium. Retrieved from http://publication.aercafricalibrary.org/handle/123456789/3373?show=full.
Ndemo, B. & Thegeya, A. (2022). A Data Governance Framework for Africa. African Research Consortium. Retrieved from http://publication.aercafricalibrary.org/bitstream/handle/123456789/3318/DG001.pdf?sequence=1&isAllowed=y.
Ndung'u, N. & Signe, L. (2020). The Fourth Industrial Revolution and digitization will transform Africa into a global powerhouse. In B.S. Coulibaly & C. Golubski (Eds.). Foresight Africa: Top Priorities for the Continent 2020–2030 (pp. 61–66). Africa Growth Initiative at Brookings. Retrieved from https://www.brookings.edu/wp-content/uploads/2020/01/ForesightAfrica2020_20200110.pdf.
Nowell, L.S., Norris, J.M., White, D.E. & Moules, N.J. (2017). Thematic analysis: Striving to meet the trustworthiness criteria. International Journal of Qualitative Methods, 16(1), 1–13. Retrieved from https://www.researchgate.net/publication/320188032_Thematic_Analysis_Striving_to_Meet_the_Trustworthiness_Criteria.
Olatokun, W.M. (2008). Gender and national ICT policy in Africa: Issues, strategies, and policy options. Information & Communication Technology – Africa, 2. Retrieved from https://repository.upenn.edu/cgi/viewcontent.cgi?article=1008&context=ictafrica.
Olu, O., Muneene, D., Bataringaya, J.E., Nahimana, M., Ba, H., Turgeon, Y., Karamagi, H.C. & Dovlo, D. (2019). How can digital health technologies contribute to sustainable attainment of universal health coverage in Africa? A perspective. Frontiers in Public Health, 7, 1–7. Retrieved from https://doi.org/10.3389/fpubh.2019.00341.

Onuoha, R. (2019). AI in Africa: Regional data protection and privacy policy harmonisation. In A. Finlay & L. Nordstrom (Eds.). Artificial Intelligence: Human Rights, Social Justice and Development. Global Information Society Watch 2019 (pp. 59–62). Retrieved from https://giswatch.org/sites/default/files/gisw2019_web_intro_0.pdf.
Opio, P.J. (n.d.). Robotics and the Transformation of Economic Dynamics (A Focus on African Experiments) (pp. 1–24). Retrieved from https://www.academyforlife.va/content/dam/pav/documenti%20pdf/2019/Assemblea2019/TestiRelatoriPubblicati/FT%20Opio.pdf.
Ormond, E. (2020). The ghost in the machine: The ethical risks of AI. The Thinker, 83(1), 4–11. Retrieved from https://journals.uj.ac.za/index.php/The_Thinker/article/view/220.
Ormond, E. (2022). Global to Local: South African Perspectives on AI Ethics Risks. Retrieved from https://ssrn.com/abstract=4240356.
Oriakhogba, D.O. (2021a). What if DABUS came to Africa? Visiting AI inventorship and ownership of patent from the Nigerian perspective. Business Law Review, 42(2), 89.
Oriakhogba, D.O. (2021b). DABUS gains territory in South Africa and Australia: Revisiting the AI-inventorship question. South African Journal of Intellectual Property Law, 9, 87–108.
Papadopoulos, S. & Snail ka Mtuze, S. (Eds.). (2022). Cyberlaw @ SA: The Law of the Internet in South Africa (4th ed.). Pretoria, South Africa: Van Schaik Publishers.
Peña, P. & Varon, J. (2019). Decolonising AI: A transfeminist approach to data and social justice. In A. Finlay & L. Nordstrom (Eds.). Artificial Intelligence: Human Rights, Social Justice and Development. Global Information Society Watch 2019 (pp. 28–32). Retrieved from https://giswatch.org/sites/default/files/gisw2019_web_intro_0.pdf.
Ponelis, S.R. & Holmner, M.A. (2015). ICT in Africa: Enabling a better life for all. Information Technology for Development, 21(1), 1–11. Retrieved from https://doi.org/10.1080/02681102.2014.985521.
Powell, C. & Schonwetter, T. (2019). Africa, the internet and human rights. In M. Susi (Ed.). Human Rights, Digital Society and the Law: A Research Companion. London, United Kingdom: Routledge, Taylor & Francis Group.
Romanello, M. (2021). Blockchain technology in Africa: problems and perspectives. Brazilian Journal of Development, 7(7), 74359–74377. Retrieved from https://brazilianjournals.com/ojs/index.php/BRJD/article/download/33485/pdf.
Rutenberg, I., Gwagwa, A. & Omino, M. (2021). Use and impact of artificial intelligence on climate change adaptation. In L. Filho, N. Oguge, D. Ayal, L. Adeleke & I. Da Silva (Eds.). African Handbook of Climate Change Adaptation. Cham: Springer. Retrieved from https://link.springer.com/referenceworkentry/10.1007/978-3-030-45106-6_80#citeas.
Saint, M. & Garba, A. (2016). Technology and policy for the internet of things in Africa. TPRC 44: The 44th Research Conference on Communication, Information and Internet Policy 2016. Retrieved from http://dx.doi.org/10.2139/ssrn.2757220.
Schonwetter, T. & Van Wiele, B. (2020). Social entrepreneurs' use of Fab Labs and 3D printing in South Africa and Kenya. The African Journal of Information and Communication, 26, 1–24. Retrieved from http://dx.doi.org/10.23962/10539/30356.
Smith, M.L. & Neupane, S. (2018). Artificial Intelligence and Human Development: Towards a Research Agenda. International Development Research Centre. Retrieved from https://idl-bnc-idrc.dspacedirect.org/handle/10625/56949.
Stolp, J., Perumall, A. & Selfe, E. (2018). Blockchain and Cryptocurrency in Africa: A comparative summary of the reception and regulation of Blockchain and Cryptocurrency in Africa. Baker & McKenzie (pp. 1–23). Retrieved from https://www.bakermckenzie.com/-/media/files/insight/publications/2019/02/report_blockchainandcryptocurrencyreg_feb2019.pdf.
Suri, T. & Udry, C. (2022). Agricultural technology in Africa. Journal of Economic Perspectives, 36(1), 33–56. Retrieved from https://pubs.aeaweb.org/doi/pdf/10.1257/jep.36.1.33.
Townsend, B. (2022). The lawful sharing of health research data in South Africa and beyond. Information & Communications Technology Law, 31(1), 17–34. Retrieved from https://doi.org/10.1080/13600834.2021.1918905.
Tranter, K. (2011). The law and technology enterprise: Uncovering the template to legal scholarship on technology. Law, Innovation and Technology, 3(1), 31–83. Retrieved from https://doi.org/10.5235/175799611796399830.

Umezuruike, C. & Ngugi, H.N. (2020). Imminent challenges of adoption of big data in educational systems in sub-Saharan Africa nations. International Journal of Recent Technology and Engineering, 8(5), 4544–4550. Retrieved from https://www.ijrte.org/wp-content/uploads/papers/v8i5/E6885018520.pdf.
Van der Merwe, D., Roos, A., Erlank, W., Eiselen, S., Nel, S., Mabeka, Q. & Pistorius, T. (2022). Information and Communications Technology Law (3rd ed.). Durban, South Africa: LexisNexis.
Van der Merwe, D. (2014). A comparative overview of the (sometimes uneasy) relationship between digital information and certain legal fields in South Africa and Uganda. Potchefstroom Electronic Law Journal, 17(1), 297–326. Retrieved from http://dx.doi.org/10.4314/pelj.v17i1.07.
Vernon, D. (2019). Robotics and artificial intelligence in Africa. IEEE Robotics & Automation Magazine (pp. 131–135). Retrieved from http://www.vernon.eu/publications/19_Vernon_RAM.pdf.
Wakunuma, K.J. (2013). Mobiles for development in Africa: Are we in danger of losing sight of the bigger picture? Feminist Africa, 18, 131–139. Retrieved from https://feministafrica.net/wp-content/uploads/2019/10/standpoints_mobiles_for_development_in_africa.pdf.
Wilhelm, A. (2019). Blockchain technology and the development of African economies: Promises, opportunities, and the legal issues at stake. Law in Africa, 22(1), 3–42. Retrieved from https://doi.org/10.5771/2363-6270-2019-1-3.
Woherem, E.E. & Odedra-Straub, M. (2017). Potentials and challenges of developing smart cities in Africa. Circulation in Computer Science, 2(4), 27–39. Retrieved from https://www.evanswoherem.com/wp-content/uploads/2019/03/The-Potentials-and-Challenges-of-Developing-Smart-Cities-in-Africa.pdf.
Yonazi, E., Kelly, T., Halewood, N. & Blackman, C. (2012). The Transformational Use of Information and Communication Technologies in Africa. Overview prepared for consideration by the African Union Ministers in charge of Communication and Information Technologies, Summit in Khartoum. Retrieved from https://www.afdb.org/fileadmin/uploads/afdb/Documents/Publications/The_Transformational_Use_of_Information_and_Communication_Technologies_in_Africa.pdf.
Zeufack, A.G., Calderon, C., Kambou, G., Kubota, M., Korman, V., Canales, C.C. & Aviomoh, H.E. (2021). Africa's Pulse, No. 23. Washington, DC: World Bank. Retrieved from https://openknowledge.worldbank.org/bitstream/handle/10986/35342/9781464817144.pdf.

18. Incorporating digital development perspectives in international trade law
Binit Agarwal and Neha Mishra

1. INTRODUCTION

Since the advent of the World Wide Web in the early 1990s, the digital revolution has consistently promised development dividends for countries across the world. For instance, experts have highlighted various opportunities that the Fourth Industrial Revolution can bring for developing countries (Ndung'u & Signé, 2022). Writing back in 2005, Boas et al. contemplated that digitalisation of the economy could have "sweeping" consequences for the developing world, including its impact on the "welfare of the average citizen" (Boas et al., 2005). At the same time, they also recognised the other side of this emerging reality: a divide between the developed world, which had already started to grasp the possibilities and pitfalls of digital technologies, and the developing world, which, while hoping to use digital technologies, had neither the infrastructure nor the societal structures to reap developmental dividends from them (Boas et al., 2005). Almost two decades after Boas et al. wrote their article, the development dividends promised by the digital revolution are yet to be realised meaningfully, even though digital innovations have upended the way the world works. This digital divide is further exacerbated as digital technologies are monopolised by a handful of countries (Murthy et al., 2021).

In that regard, international trade law and policy has arguably also failed to contribute to reducing this divide. Instead of promoting digital development and reducing digital inequality, as was set out in the conception of the Work Programme on Electronic Commerce in 1998,1 a prolonged stalemate in e-commerce negotiations at the World Trade Organization ("WTO") has worsened the situation and failed to address the concerns of various stakeholders, including small businesses and revenue-strapped governments (Basu, 2021; Reuters, 2017). One of the key factors behind the stalemate at the WTO is the deep-rooted unease between the digitally developed and less developed countries. This chapter examines why this unease exists, how it exacerbates the digital divide, and then proposes policy options that enable digital development in a globally interconnected world. The chapter proceeds as follows: first, we contextualise the current state of digital development (or the lack of it) and explore the various reasons behind it; second, we examine the political consequences of the absence of digital development, in so far as they affect international trade policies; third, we analyse certain relevant provisions in existing international trade agreements and their policy consequences; and finally, we advance our proposals for reforming international trade law to achieve more meaningful outcomes for digital development.

1  General Council, Work Programme on Electronic Commerce, WTO Doc. WT/L/274 (adopted 25 September 1998).



2. CONTEXTUALISING DIGITAL DEVELOPMENT: WHERE DO WE STAND?

Digital development has been asymmetric to date. Yet, developing countries also use digital technologies as tools for addressing their developmental problems. For example, digital government and digital public services have long become a reality in many developing countries (Pardo, 2000; World Bank, 2002). Several case studies indicate the role of digital technologies in developing countries: for instance, India has instituted electronic public procurement for fair and transparent government procurement (ET Government, 2021); Karnataka (a state in India) has digitised land records to address property disputes (Singh & Ahuja, 2006); and Vietnam has digitised various business licensing systems to improve the ease of doing business in the country (Blunt et al., 2017). Similarly, digital technologies hold the potential to bring more affordability and accessibility to sectors such as healthcare (Abernethy, 2022) and education (Wellings & Levine, 2009). Digital innovations have also transformed the financial sector; some examples from the developing world include m-Pesa in Africa (Ndung'u, 2017), UPI in India (USAid & mSTAR, 2019), and e-wallets in China and Southeast Asia (Miro, 2022; Tyler Jackson & Roest, 2017).

Despite these advancements, digital development is far from its true potential, as shown by countless metrics (Henry, 2019, 1–4).2 The most glaring indicator is access to the internet: at least 37% of people globally have never used the internet in their lives, with an overwhelming majority of these (96%) residing in the developing world.3 Furthermore, as per the ITU, of the remaining 63% who have accessed the internet, "many hundreds of millions may only get the chance to go online infrequently, via shared devices, or using connectivity speeds that markedly limit the usefulness of their connection".4 The most vulnerable groups have even weaker access: two-thirds of school children, for instance, who were meant to access education online during the COVID-19 pandemic, have no access to the internet.5 In low-income countries, only 1 in 20 children had access to the internet, while in developed countries 9 in 20 did.6

The gap in internet accessibility between the developing and developed world has translated into a widening economic gap. Take, for instance, the gap between Nigeria and the United Kingdom, countries with similar populations. While Nigeria has a GDP of $70 billion, the digital sector in the United Kingdom alone contributes an equivalent amount to its GDP (Lavery et al., 2018). The absence of digital development and the ever-widening digital divide form a circular trap: developing countries lack the infrastructure, technology, wealth, skills, and

2  See United Nations. (2021). With Almost Half of World's Population Still Offline, Digital Divide Risks Becoming "New Face of Inequality", Deputy Secretary-General Warns General Assembly [Press release]. Retrieved from https://press.un.org/en/2021/dsgsm1579.doc.htm
3  See ITU News. (2021). Facts and Figures 2021: 2.9 billion people still offline. ITU Hub. Retrieved from https://www.itu.int/hub/2021/11/facts-and-figures-2021-2-9-billion-people-still-offline/
4  See ITU News. (2021). Facts and Figures 2021: 2.9 billion people still offline. ITU Hub. Retrieved from https://www.itu.int/hub/2021/11/facts-and-figures-2021-2-9-billion-people-still-offline/
5  See UNICEF. (2020). Two thirds of the world's school-age children have no internet access at home, new UNICEF-ITU report says [Press release]. Retrieved from https://www.unicef.org/press-releases/two-thirds-worlds-school-age-children-have-no-internet-access-home-new-unicef-itu
6  See UNICEF. (2020). Two thirds of the world's school-age children have no internet access at home, new UNICEF-ITU report says [Press release]. Retrieved from https://www.unicef.org/press-releases/two-thirds-worlds-school-age-children-have-no-internet-access-home-new-unicef-itu

knowledge to access the full suite of digital possibilities; and because they are unable to access the full gamut of digital technologies, they continue to lose the ability to build the required infrastructure and develop a knowledge economy (Mubarak et al., 2020; Karar, 2019). The same asymmetry exists with respect to data-driven technologies: since a greater volume of higher quality data is collected by companies from developed countries, their data products are better and, therefore, their ability to succeed in global markets is greater than that of companies from developing countries (Weber, 2017, p. 417).

As a result of the yawning gap in digital trade, digitally advanced countries have accelerated their pace of technological growth and monopolised global markets, while others have been left out. Take, for instance, the US stronghold over the knowledge economy: of the $388 billion in global R&D receipts for patents (accounting for the total sums paid across the globe in the form of patent use royalties), $114 billion (nearly a third) is received by the United States (World Bank, 2022). In 2020, China and Turkey were the sole developing countries in a list of the 20 biggest spenders on R&D (World Bank, 2022).

China, in particular, is an interesting anomaly. While China identifies itself as a developing country (a claim often contested by Western countries in bodies such as the WTO), it is also a highly advanced technological and economic power. China's aggregate economic strength, its centralised approach to digital and data governance, and its huge population base have collectively given it the capacity to become digitally self-reliant, and Chinese technology giants now compete globally with US-based technology companies, particularly in developing countries in Asia, Africa, and Latin America (Woetzel et al., 2017). Yet, China simultaneously faces several economic challenges and has a per capita income similar to many developing countries (Benoit & Tu, 2020). The emerging economic rivalry between China and the liberal Western world (particularly the United States) is partially driven by the incompatibility of the Chinese model of digital and data regulation with the Western liberal values of a free and open internet (an idea that, however, remains contested given the recent actions of several Western countries, including the United States' history of mass digital surveillance) (Hara & Hall, 2021). Owing to this deep division between China and the United States, both sides have attempted to monopolise the markets of developing countries to consolidate their position, including by buying out the allegiance of smaller countries in need of funds for infrastructure (Yeoh & Chen, 2022).

Ultimately, this rivalry adversely impacts developing countries. As an example, developing countries in Africa have agreed to adopt Chinese technologies, including invasive facial recognition technologies, which would result in mass transfers of data to China, in return for China's willingness to invest in their digital infrastructure (Chutel, 2018; van der Made, 2021; Cilliers, 2022). Therefore, while this superpower competition may lead to the development of some basic digital infrastructure, it will come at the cost of absolute government control and domination by foreign technology firms. This would be another important factor exacerbating the global digital divide, in addition to creating other human rights harms in many developing countries.
The global data divide (which we consider an inevitable consequence of the global digital divide) relates to the inability of developing countries to meaningfully collect and use their data to promote domestic development. In many cases, this divide is a consequence of the limited data capabilities of domestic companies and the inability of individuals to access their data and protect it from being indiscriminately exploited by both the private sector and governments. This divide is unsurprising, given that the majority of big technology companies are concentrated in a handful of developed countries (Ponciano, 2022). Its consequences became obvious during the COVID-19 pandemic: while developing countries spiralled into a deep recession and many defaulted on their national debts (Blake & Wadhwa, 2020), developed countries could stay afloat thanks, among other things, to remote work, digital services, and online education. During the pandemic, digital MNCs from developed countries and major developing countries made record profits and achieved unforeseen market capitalisations, while businesses in developing countries floundered for lack of the capacity to collect, process, and curate data.7 For instance, Microsoft saw a 47% rise in profits in the financial year 2020–2021; Apple's profits doubled; YouTube's revenues doubled; Alphabet's revenues increased from $166 billion in 2020 to $278 billion (a 67% rise; see the rough calculation at the end of this section); and Meta's profits increased from $20 billion in 2019–2020 to $33 billion in 2020–2021 (BBC, 2021; Macrotrends, 2022a; Macrotrends, 2022b). Overall, the ten largest global technology firms generated revenues worth $2 trillion, constituting 2.35% of global GDP.8 Global data flows are also concentrated in these economic centres: 23% of all data flows pass through China and 12% through the United States (Tsunashima, 2020).
The data divide can thus be bifurcated into two prongs: first, most people in developing countries are unable to access digital-data-based services (which means no remote work, no online healthcare or education, no digital entertainment, no e-commerce, no digital public services, and so on, limiting their economic possibilities); second, those in developing countries who can access digital services do so on platforms owned by foreign MNCs (and with limited ability to develop local, customised digital solutions), thus limiting the employment, revenue, and economic opportunities that a competitive domestic data industry would bring. This economic reality has given a fillip to the decades-old (and previously receding) politics of protectionism. There is a surge in policies across the globe that fragment global data flows, keep foreign digital companies out of domestic markets, enact inward-looking data rules, mandate data localisation, and impose discriminatory tariffs on data services (Steck, 2020).
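As a rough arithmetic check on the headline figures above (the global GDP denominator of roughly $85 trillion for 2021 is our own illustrative assumption; it is not stated in the sources cited):

\[
\frac{278 - 166}{166} \approx 0.675 \approx 67\%, \qquad \frac{\$2\ \text{trillion}}{\$85\ \text{trillion}} \approx 0.0235 \approx 2.35\%.
\]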

3. POLITICAL FALLOUTS OF THE DIGITAL DIVIDE AND UNDERDEVELOPMENT

The lack of digital development and the concentration of the digital economy in selected regions of the world has two immediate political fallouts: first, the emergence of data regulatory realms, with the European Union, the United States, and China monopolising the regulatory possibilities; and second, a deepening politics of anti-neocolonialism, accompanied by reactionary and inward-looking policies.

7  See UNCTAD. (2021a). How COVID-19 triggered the digital and e-commerce turning point. Retrieved from https://unctad.org/news/how-covid-19-triggered-digital-and-e-commerce-turning-point
8  See Statista. (2021). Digital Economy Compass. Retrieved from https://www.statista.com/study/105653/digital-economy-compass/

3.1 Emergence of Data Realms

Given the economic might of the United States, China, and the European Union, it is unsurprising that these three players have given rise to the three "predominant" data regulatory spheres in the world today (Aaronson & Leblond, 2018). The reasons are obvious: these countries act as rule-setters (and, to varying degrees, as standard-setters), and the majority of emerging digital economies are under immense pressure to side with one of these "data realms". However, this is akin to a Hobson's choice: on the one hand, most developing countries do not yet have the regulatory structures or technical capacity to determine the data policies appropriate to their developmental needs; on the other hand, they are under continued economic and geopolitical pressure to ally with one of these realms in order to trade with them, or risk exclusion from supply chains (Aaronson & Leblond, 2018).
This Hobson's choice is reflected in how several developing countries have implemented data protection frameworks mimicking the European Union's General Data Protection Regulation (GDPR), a phenomenon often characterised as the "Brussels Effect". As argued by Bradford, the sheer size of the European market and its regulatory homogeneity have empowered it to force its regulatory principles on the rest of the world and emerge as the predominant global rule-setter (Bradford, 2012). Non-compliance with the GDPR can mean complete disruption of data and digital services exports to the European Union, given the adequacy requirements under the GDPR (i.e. data transfers to non-EU countries are allowed only if those countries have a comparable or, to be precise, "essentially equivalent" data protection framework) (Mannion, 2020). However, adopting a GDPR-like framework requires immense regulatory capacity, which developed-country governments and firms can muster far more easily. For instance, companies based in developed countries are more likely to have the resources to comply with the GDPR's legal requirements that apply in the absence of an adequacy decision (such as standard contractual clauses). In turn, firms from developing countries face higher barriers to trade in digital services with the European Union. Firms from the United Kingdom, for instance, are expected to spend between £1 billion and £1.6 billion annually to comply with the adequacy requirements, an infeasible cost for most companies in developing countries (McCann et al., 2020). Studies have reported that most small businesses (even in richer countries) have struggled to implement standard contractual clauses for data transfers after the implementation of the GDPR (Gal & Oshrit, 2020). Another study indicates the severe adverse impact of the GDPR on African e-commerce firms (Mannion, 2020).
These data realms also curtail the emergence of the appropriate socio-legal institutions and public awareness necessary to create an effective and practical regulatory setup in developing countries. Many countries, for instance, have unique criminal and civil law institutions that account for local needs. While certain broad norms or legal principles may be instructive in the development of legal regimes such as data protection, developing countries are under pressure to adopt highly detailed data regulations based on often alien regimes such as the EU's GDPR (Murthy & Medine, 2020). This is impractical, since notions of data, privacy, cybersecurity, and surveillance all differ across countries (Murthy & Medine, 2020). For example, while an EU citizen may be concerned about how their data is processed and used by Spotify or Netflix, a citizen of Ivory Coast might be more concerned about getting access to the internet and being able to use it to secure better skills, opportunities, and access to public services.

The forced importation by developing countries of rules from a prevailing digital power can be counterproductive to various public interests. Civil society bodies in several developing countries are concerned that privacy and data protection laws are being drafted to help advance the cause of state surveillance while ensuring that private companies have no choice but to hand over data to the government (Mukherjee, 2022; Joshi, 2022). It has also been argued that, where citizens are not aware of their privacy rights and their importance, such laws make it easier to secure public approval for surveillance tools like facial recognition technology by creating a false sense of security and enabling deceptive consent forms (Selinger & Hartzog, 2019).

3.2 Trust Deficit and the Politics of Anti-Neocolonialism

There is some truth to Marx's statement that the history of all previous societies is a history of class struggle (Marx & Engels, 1848). The same holds for today's digital world. As data and data-driven technologies become concentrated in a handful of countries, concerns about excessive dependence and the unequal distribution of economic benefits are emerging (Ciuriak, 2018). These concerns have given rise to a new form of class struggle and a politically loaded argument about neocolonialism. Across various policy circles, stakeholders have argued that the domination and exploitation of developing countries in the digital realm is akin to the early stages of colonialism, where raw resources flowed out of the subject nation (data being the new raw resource) and finished products flowed back, leading to a drain of wealth (Kwet, 2019; Young, 2019; Coleman, 2019).
While it remains debatable how countries can best achieve equitable digital development, the concerns around digital neocolonialism cannot be entirely dismissed, for two reasons: first, these concerns are propelling real political consequences in the form of inward-looking policies that deeply fragment the global digital economy; and second, developing countries often do not have sufficient resources to compete fairly with digitally advanced countries. For instance, even though developed countries coerce their developing partners to liberalise their digital markets, there is negligible transfer of technology and capacity support to enable this transition. All in all, a wide trust deficit exists between these two groups of countries.
The first important consequence of reactionary policies is an obvious threat to an interconnected digital market. One significant policy intervention is data localisation, which mandates companies to store and/or process data within domestic territory (González et al., 2022). A survey shows that 62 countries impose varying degrees of data localisation measures (Cory & Dascoli, 2021). While such measures are more popular, more frequent, and stricter in developing countries, developed countries implement them as well (González et al., 2022). Australia, for instance, imposed localisation requirements on healthcare data in 2012.9 Many EU countries also impose such measures for public administration data (Kathuria et al., 2019, pp. 16–19). Many developing countries now argue that localisation would allow them to guarantee the security of information, ensure compliance with domestic regulatory obligations, and enable more efficient and meaningful use of domestic data (Burman & Sharma, 2021).
Further, data localisation is seen as important to the creation of basic data management infrastructure (e.g. data centres, data processors, etc.) (Kathuria et al., 2019). An op-ed in an Indian media outlet reads, "Indian Cloud Data Centres will make or break Digital India".10 A Malaysian government press release claims, "The establishment of data centre hubs and cloud service providers will continue to be the driving force behind the rise of the country's digital economy".11 Data localisation is thus emerging as a supposed shortcut to digital infrastructure, digital economy growth, and jobs.
The imposition of data localisation measures, however, interferes with the basic end-to-end architecture of the global internet, breaking it into fragments and ultimately slowing the flow of data and, with it, data-driven trade.12 Trade restrictions lead to a loss of efficiency and negative externalities in the form of increased costs of trade, thereby causing economic losses (University of Minnesota, 2016, 17.3). For instance, one estimate indicates that data localisation and related policies cost the Indian economy around $700 million for every 1% decline in data flows (Kathuria et al., 2019). Data localisation also undermines scale efficiencies by forcing companies to build numerous data centres, which in turn raises the prices of digital services for end consumers (Cory, 2019). Small businesses would thus find it costlier to go online and access cloud services (Cory, 2017; Ankeny, 2016).
The above arguments, however, do not mean that developing countries lack legitimate concerns motivating data localisation measures. In addition to the risk of economic dependency, cybersecurity remains a major concern. Given the rise of cyberwarfare and its use by digitally advanced countries like the United States, China, and Israel, developing countries argue that their data should stay within their borders to ensure sufficient protection. According to the Cybersecurity Exposure Index, Africa has the highest cyber-risk exposure, with over 75% of African countries exposed to high or very high risk, whereas most developed countries have low or very low exposure (D'mello, 2020). However, ensuring cybersecurity is a function of technology rather than location. In other words, data localisation is not an appropriate response to cybersecurity risks, especially in countries with high cyber-risk exposure.
The other policy measure that has become a deal-breaker in e-commerce negotiations is the moratorium on customs duties on electronic transmissions, in place since 1998.13 That year, WTO members agreed not to impose customs duties on any form of electronic transmission. The moratorium has enabled companies to transfer data freely across borders using global servers, without paying border taxes to any government. To date, it has been extended by members only on a temporary basis. In recent years, however, a group of developing countries led by India and South Africa has questioned the moratorium's impact on developing countries, particularly the loss of valuable tariff revenues that could be collected from electronic transmissions, especially as most developing countries are net consumers of such transmissions.14 While this proposition is yet to be tested in practice, conflicting studies have emerged. A study by Banga, published by UNCTAD, argues that the moratorium is costing developing countries over $10 billion annually (as of 2017) (Banga, 2020). An opposing study published by the European Centre for International Political Economy argues that the moratorium is not only not causing any loss but is generating net benefits of $24.3 billion ($10.6 billion in GDP growth and $13.7 billion in additional investment) (Lee-Makiyama & Narayanan, 2019, pp. 11–12).
A few factors need to be considered in this debate around the moratorium: first, consumers and businesses in developing countries, who have less purchasing power and currently lack access to digital services, would pay more if tariffs were imposed on electronic transmissions, further excluding them from the digital economy; second, governments of developing countries, which have limited capacity to maintain a high-tech customs system, would find it costly and inconvenient to track data movements and impose tariffs on them;15 third, any tariff is bound to cause a deadweight loss to society and is reasonable only when there is an observable social harm it can help avoid (e.g. dumping or environmental harm);16 and fourth, the imposition of tariffs is bound to slow down data movements and significantly increase the administrative and compliance costs of companies, including small businesses and emerging firms in developing countries (Andrenelli & González, 2019). Finally, the non-renewal of the moratorium may lead to further retaliation among countries, for instance, through the introduction of numerous tit-for-tat digital taxes, ultimately harming the growth of the digital economy.

9  My Health Records Amendment (Strengthening Privacy) Bill, 2018.
10  See Firstpost. (2015). Indian Cloud Data Centres will make or break Digital India. Retrieved from https://www.firstpost.com/business/sponsored-indian-cloud-data-centres-will-make-or-break-digital-india-2475598.html
11  See Malaysian Reserve. (2021). Data centre infrastructure to bolster country's digital economy. MIDA. Retrieved from https://www.mida.gov.my/mida-news/data-centre-infrastructure-to-bolster-countrys-digital-economy/
12  See UNCTAD. (2021b). Digital Economy Report 2021. Retrieved from https://unctad.org/webflyer/digital-economy-report-2021
13  See Reuters. (2022). WTO provisionally agrees to extend e-commerce tariff moratorium – sources. Retrieved from https://www.reuters.com/markets/commodities/wto-provisionally-agrees-extend-e-commerce-tariff-moratorium-sources-2022-06-16/
14  See IISD. (2020). WTO Members Highlight Benefits and Drawbacks of E-commerce Moratorium. Retrieved from https://sdg.iisd.org/news/wto-members-highlight-benefits-and-drawbacks-of-e-commerce-moratorium/
15  See ICCWBO. (2019). The WTO Moratorium On Customs Duties On Electronic Transmissions – A Primer For Business. Retrieved from https://iccwbo.org/content/uploads/sites/3/2019/11/2019-icc-wto-moratorium-custom-duties.pdf
16  See WITS World Bank. (2010). Effects on Tariff Revenue, Consumer Surplus and Welfare. Retrieved from https://wits.worldbank.org/wits/wits/witshelp/Content/SMART/Effects%20on%20Tariff%20Revenue.htm
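For ease of comparison, the two conflicting estimates discussed above can be summarised as follows (figures as reported in the respective studies; the tabulation is our own summary, not part of either study):

Study | Estimated effect of the moratorium
Banga (2020), UNCTAD | Costs developing countries over $10 billion annually in forgone tariff revenue (2017 figures)
Lee-Makiyama & Narayanan (2019), ECIPE | Net benefits of $24.3 billion, comprising $10.6 billion in GDP growth and $13.7 billion in additional investment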

4. TRADE POLITICS, NEGOTIATIONS, AND FREE TRADE AGREEMENTS

The political economy of digital trade, as detailed in the previous section, has far-reaching consequences for trade negotiations and emerging digital trade rules. For one, the divergent views held by countries mean that no noteworthy agreement on digital trade has been reached at the multilateral level (Mishra, 2021). This is a glaring gap, given that e-commerce now comprises over 30% of global GDP and is inherently cross-border in nature.17 While some progress has been made in discussions under the Joint Initiative on Electronic Commerce at the WTO, almost half of the WTO membership (including major economies like India, plus 76 other countries) has either boycotted these discussions or is yet to decide on joining them (WTO, n.d.; Chakraborty, 2019). The deadlock at the WTO is a regrettable outcome in many ways: first, uncertainty regarding how existing WTO rules apply to digital trade continues to fester, depriving businesses of the benefits of a rules-based trade regime; and second, the countries that want digital trade rules to be enacted are moving ahead, sometimes without developing countries, leading to the emergence of FTAs among countries that are mostly digital-haves (something we characterise as "digital-have FTAs" in this chapter).

17  See UNCTAD. (2020). Global e-commerce hits $25.6 trillion – latest UNCTAD estimates. Retrieved from https://unctad.org/news/global-e-commerce-hits-256-trillion-latest-unctad-estimates

4.1 Existing WTO Law and Digital Trade

The three key contentious areas in applying current WTO law to digital trade are: (i) the classification of digital products; (ii) the relevance of technological neutrality; and (iii) the translation of the exceptions available in WTO treaties to the digital context.
On the first point, countries have not been able to arrive at a common understanding on whether to treat digital products as goods or services. The United States and Japan, for instance, favour the classification of digital products as goods, so as to bring them within the fold of the General Agreement on Tariffs and Trade ("GATT"). In contrast, the European Union and several developing countries argue that digital products are in fact services and thus fall within the purview of the General Agreement on Trade in Services ("GATS") (Herman, 2010). This conflict is particularly acute for products that existed in physical form when these agreements were negotiated and are now delivered digitally as a service: these include a range of major products such as movie and music streaming, software like office suites, content sharing, and news media. It also affects emerging technologies like 3D printing, artificial intelligence, and the Internet of Things, all of which are geared towards digitising goods and converting them into services (Fleuter, 2016). At the same time, for new technologies, including IoT-driven ones, the sharp distinction between goods and services is fading (Chander, 2019). For such technologies, the divergence of disciplines in GATT and GATS (as explained below) poses further legal uncertainty regarding the applicable obligations.
GATT and GATS impose varying commitments on signatories. While GATT provides for mandatory national treatment, under GATS it is a negotiated commitment (i.e. subject to commitments made by countries on a sector-by-sector basis); similarly, Most-Favoured-Nation treatment can be opted out of under GATS (via the Article II exemption list) but is mandatory under GATT; and GATT contains general disciplines on practices like dumping, quantitative restrictions, and rules of origin and valuation that are absent in GATS (Baker et al., 2001). In many ways, the policy space GATS provides to selectively undertake liberalisation commitments is significantly greater than that provided by GATT.
The second key problem is the application of technological neutrality. Technological neutrality requires WTO members to treat like services alike even if they are supplied through different technological means (Kwak, 2021). While broadly accepted as an informal rule under the GATS framework,18 technological neutrality has not yet been explicitly recognised as a mandatory principle by the WTO members. While WTO dispute settlement bodies have twice upheld the concept, they have never done so explicitly.
In the US–Gambling and China–Audiovisual cases, the adjudicating bodies held that GATS does not limit the technological means through which services may be delivered, though without explicitly mentioning technological neutrality (Peng, 2012). This strategic absence of a mandatory commitment has allowed many members to question whether technological neutrality must be acknowledged as a binding principle (Herman, 2010), causing uncertainty in applying WTO rules to the changing digital environment (Wunsch-Vincent et al., 2006, pp. 13–32).
Finally, a considerable degree of uncertainty exists regarding the manner in which the exceptions available in WTO treaties and other FTAs, such as the general and security exceptions, apply in the digital context. For instance, several developing countries are concerned about whether the existing exceptions provide sufficient policy space to adopt domestic cybersecurity and privacy laws in pursuit of legitimate public policy objectives. A common example is data localisation which, as discussed earlier, is a tool of preference for many developing countries. Yet such measures can violate trade obligations. In such a scenario, can the exceptions be meaningfully used to justify localisation measures, for instance, as being important to protect vital political, social, and economic interests? Similarly, the wording of the national security exception in WTO treaties (and the recent WTO Panel decisions)19 leaves sufficient room for debate regarding the extent to which it covers state responses to the range of cyber threats existing in the world today. Further, the assessment of a particular measure under the exceptions would entail an assessment of a country's regulatory capacity to undertake a less trade-restrictive measure. For example, most developing countries may consider data localisation more feasible than adopting a sophisticated, expensive certification mechanism for regulating and monitoring data transfers.

18  WTO, China – Measures Affecting Trading Rights and Distribution Services for Certain Publications and Audiovisual Entertainment Products – Report of the Appellate Body, WT/DS363/AB/R (2009).
19  See generally WTO, Russia – Measures Concerning Traffic in Transit – Panel Report, WT/DS512/7 (2019); WTO, Saudi Arabia – Measures concerning the Protection of Intellectual Property Rights, WT/DS567/11 (2022); WTO, United States – Certain Measures on Steel and Aluminium Products, WT/DS548/19 (2022).

4.2 Emergence of "Digital-Haves" FTAs

While negotiation deadlocks at the WTO have delayed the development of digital trade rules at the multilateral level, such rules are now being created under various bilateral and plurilateral arrangements. They can be found in a range of FTAs, which now include comprehensive electronic commerce or digital trade chapters (a trend set by the Singapore–Australia FTA of 2003), and in standalone digital trade agreements (e.g. the Digital Economy Partnership Agreement or "DEPA"20) (Basu, 2021). The TAPED dataset, which examines over 379 FTAs, found that 138 of them contained provisions on electronic commerce and 106 had comprehensive electronic commerce chapters (University of Lucerne, 2022).
Our assessment of digital trade chapters across FTAs indicates that developing countries are often not the ones articulating or shaping digital trade rules. Of the 106 FTAs in the TAPED dataset with electronic commerce chapters, 70 (two-thirds) involved just six countries as one of the parties: Singapore, the United States, the United Kingdom, Australia, Chile, and China (University of Lucerne, 2022). This means that only a handful of countries are leading the creation of digital trade rules, and they are also likely to have a deep impact on the future evolution of such rules at the multilateral level. Second, even in FTAs where developing countries are involved, the majority of the rules are driven by the developed-country counterparts. For instance, in our assessment of digital chapters in FTAs, we did not find any instances of binding provisions on technical assistance and capacity-building in digital sectors (Agarwal & Mishra, 2022), which could be instrumental in facilitating digital growth in developing countries. On the other hand, rules liberalising digital trade flows (including data flows) are much more common. Consequently, the rules emerging through these FTAs can be seen as being constructed by the "digital-haves" with little input from the "digital-have-nots". For example, none of those 106 FTAs had an African country as a party (except Morocco's FTA with the United States, negotiated back in 2006), even though such countries are among the worst hit by the global digital divide (Fuchs & Horak, 2008). Unless developing countries, especially the smaller ones, join the negotiations and become parties to bilateral and multilateral agreements, the prospect of ushering in inclusive digital growth and development remains bleak.

20  DEPA is a free trade agreement solely concerned with digital trade. First negotiated by New Zealand, Chile, and Singapore, it is being considered by South Korea and China, which have both requested to join it and have started the accession procedure. It is also open to other countries that may request to join it.

Expectedly, the focus of the existing FTAs is on increasing digital trade volumes (consistent with the interests of the countries leading digital trade rule-making) without necessarily considering concerns regarding digital development. While the DEPA accounts for certain elements of digital development (for instance, it has an entire module on digital inclusion),21 it lacks a comprehensive framework to enable any such inclusive development. Instead, this module is a high-level acknowledgement of the need for digital inclusion and for cooperation between the parties to address the various digital trade barriers faced by disadvantaged groups. Most other FTAs do not even delve into digital development possibilities. For instance, provisions on data protection (albeit increasingly common in FTAs) do not yet contain robust mechanisms to enable regulatory cooperation or promote the interoperability of different data protection frameworks, and thereby generate consumer trust and ensure business certainty. Given that developing countries have weak regulatory structures and lack technical or regulatory support from developed countries, and in the absence of a global consensus on various aspects of digital regulation, companies based in developing countries face numerous disadvantages in accessing global markets. As an illustration, a software solution supplied from Vietnam or Indonesia is unlikely to enjoy a degree of trust or acceptance comparable to one supplied from Germany (irrespective of its technical efficacy).
Furthermore, given that developing countries are often absent from many trade negotiations (or participate subject to the terms of more powerful countries), there is no package in sight that can help developing countries achieve higher levels of digital development (Peng, 2022). For instance, none of the FTAs contains a dedicated Special and Differential Treatment (SDT) mechanism in the context of digital trade (with limited exceptions in certain FTAs, as we discuss later).

21  Module 11.
SDT measures, appropriately designed, can be targeted to help developing countries, for instance, by providing them adequate time to set up domestic regulatory infrastructure and industries, and to benefit from access to foreign markets on favourable terms (Michalopoulos, 2002). SDT measures could be an important negotiation strategy for bringing developing countries on board, especially if they are provided with adequate time and sufficient guidance (as discussed further below) to implement commitments on market access, data protection and localisation, technological neutrality, etc. While some FTAs, such as the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP), provide additional time for countries like Vietnam, Brunei, and Peru to adopt their digital trade commitments, this is a limited measure without any complementary binding commitments on regulatory assistance or capacity-building support (Australian Government DFAT, n.d.). Similarly, measures to address digital inequality, such as enabling participation by MSMEs or implementing trade facilitation measures that make trade cheaper (e.g. digital identities), are only recommendatory in a small number of FTAs. Without such support, most developing countries are unlikely to create an implementable regulatory framework for the digital economy (Graham, 2002).
The existing FTAs also do not recognise the role of technology transfer in bridging the digital divide. While the mechanics and benefits of technology transfer need further analysis, smaller developing countries evidently do not have sufficient knowledge stock or technical resources to develop their own digital services (UNCTAD, 2014). Technology transfer can entail important economic and trade benefits; economists have observed that trade agreements containing commitments on technology transfer bring about higher trade volumes than those that do not (Martínez-Zarzoso & Chelala, 2021).

5. PROPOSALS FOR INTERNATIONAL TRADE LAW REFORM

Any proposal to reform international trade law for the digital era must contain three elements: first, rules that protect an unfragmented digital economy and seamless data flows, including prohibiting data localisation and confirming the moratorium on customs duties; second, enabling meaningful participation by developing countries in multilateral and bilateral negotiations to help shape trade rules relevant to their needs, including feasible and streamlined mechanisms for technology transfer, SDT measures, and technical assistance;22 and third, a commitment from both developed and developing countries to make genuine efforts towards regulatory cooperation, interoperability, and unrestricted data flows.
Provisions prohibiting data localisation and enabling data flows are not common in FTAs. This is unsurprising, given that 62 countries have some form of data localisation measures (Cory & Dascoli, 2021). Most FTAs contain a "legitimate public policy" exception for data localisation and cross-border data flows, which is subject to further interpretation by tribunals.23 Similarly, although many FTAs make the moratorium on customs duties binding and permanent, some FTAs hinge this commitment on any changes at the WTO (as discussed earlier, the moratorium is now under persistent challenge from a faction of developing countries). For instance, the India–UAE FTA provides: "Each Party may adjust its practice referred to in paragraphs 1 and 2 with respect to any further outcomes in the WTO Decisions on customs duties on electronic transmission within the framework of the Work Programme on Electronic Commerce".24 The RCEP reads: "3. Each Party may adjust its practice referred to in paragraph 1 with respect to any further outcomes in the WTO Ministerial Decisions on customs duties on electronic transmissions within the framework of the Work Programme on Electronic Commerce".25
Future negotiations must remedy this situation and seek a permanent solution to these twin problems threatening to fragment the global digital economy. While the moratorium must be made permanent at the WTO level, more FTAs must seek a prohibition on data localisation measures, ultimately leading to a WTO-level agreement. While such a prohibition should contain appropriate exceptions, those should be limited by a clear and rational necessity test and should be adjudicated by independent institutions created under the FTAs (Burri & Polanco, 2020). Not all developing countries might consider this approach desirable in their FTA negotiations. Therefore, it is important for negotiating parties to focus on what lies at the core of such provisions, namely restoring the mutual trust between countries that has been damaged, inter alia, by the global digital divide (Mitchell & Mishra, 2021; Kilic & Avila, n.d.). Digital development assistance, regulatory assistance, and capacity-building support from developed countries and technologically advanced developing countries will be critical in this regard. Therefore, we propose that any obligations undertaken by developing countries to liberalise their digital and data markets must be contingent on receiving appropriately designed digital development packages. As an example, such packages should be geared towards helping developing countries build basic regulatory frameworks (e.g. on data protection, spam, cybersecurity, and online consumer protection), set up minimum digital infrastructure, enable R&D, and create a knowledge economy. These measures are in many ways self-serving for developed countries, as greater digital trust and a rules-based system can create more prospects for investment and digital innovation in the developing world. While the WTO (especially the ongoing Joint Initiative on E-Commerce) would be the ideal forum to reach such an understanding on a global basis, this mechanism could also be relevant for FTA negotiations involving developing countries. Further, regional bodies comprising developing countries, such as ASEAN and the Pacific Alliance, could also offer a consolidated position on how such bargains may be reached with developed countries at the WTO and in other trade fora.
Let us consider the example of data protection and privacy frameworks in Africa. Only half of the 54 African countries have a law on data protection or privacy (Nzume, 2021). Yet Africa is one of the most promising digital markets, precisely because its currently low internet penetration rates leave vast room for growth. Technical assistance and regulatory support to such countries can thus yield multiple benefits for both the developed world and African countries. However, technical assistance and capacity-building support cannot be a tool for forced regulatory harmonisation, particularly because regulatory needs and enforcement priorities diverge across countries. Where harmonisation is forced by exporting uniform laws to developing countries (as opposed to incrementally developing convergence through ongoing support, gradual reforms, and suitable mutual recognition mechanisms), a high probability exists that such regulatory structures will eventually collapse and/or create incapable public institutions ill-suited to domestic needs.

22  Meaningful participation implies that developing countries have the ability and resources to contribute to the dialogues, a point that was brought up by Ivory Coast in one of its JSI submissions (INF/ECOM/49).
23  See, for instance, Art. 13.12, CPTPP; Art. 12.14, RCEP.
24  Art. 9.15.3, Comprehensive Economic Partnership Agreement (CEPA) between the Government of the Republic of India and the Government of the United Arab Emirates (UAE), 2022; see also Art. 10.4.1, footnotes 10–13, Comprehensive Economic Cooperation Agreement between the Republic of India and the Republic of Singapore, 2005.
25  Art. 12.11.

Interoperability of regulatory frameworks in developing and developed countries contributes to an unfragmented digital economy and can enable meaningful international regulatory cooperation. Regulatory divergence leads to higher compliance costs and difficult operating conditions (OECDiLibrary, 2021). However, through technical assistance packages of the kind described above, the developing and developed world can cooperate to understand their respective regulatory needs and preferences, and implement their domestic frameworks after reaching a high-level consensus on fundamental principles in areas such as data protection, competition, and cybersecurity enforcement. Such an alignment would facilitate data flows (for instance, countries could develop benchmarks for interoperability) and significantly improve market access, especially for developing-country firms, which find compliance costlier and harder. Further, regulatory cooperation would help develop more widely accepted and representative regulatory standards.
The DEPA provides a practical mechanism for this. First, it provides for basic principles in key regulatory areas like data protection that all parties are expected to implement,26 and second, it provides for mutual recognition of varying regulatory regimes (as against the stringent EU adequacy norm). It has created three mechanisms for this: "(a) the recognition of regulatory outcomes, whether accorded autonomously or by mutual arrangement; (b) broader international frameworks; (c) where practicable, appropriate recognition of comparable protection afforded by their respective legal frameworks' national trustmark or certification frameworks".27 It also provides for interoperability in the areas of e-invoicing (Art. 2.5) and e-payments (Art. 2.7). Similarly, the United States–Mexico–Canada Agreement (USMCA), which replaced the North American Free Trade Agreement (NAFTA, 1994) as the primary trade agreement between the three major North American countries, attempts to harmonise regulatory frameworks by recognising the APEC Cross-Border Privacy Rules (CBPR) as the base framework by which the parties must abide.28 Developed as early as 2011, the CBPR is a regulatory convergence framework developed by 21 APEC economies to ensure that companies enjoy interoperability if the countries have certain overarching features in their privacy regulatory systems. These include enforceable standards, accountability, risk-based protections, consumer protection, consistent protection, and regulatory cooperation.
The above mechanisms create more options for interoperability. However, the voice of developing countries is equally important in enabling these mechanisms, especially because the existing instruments (e.g. the CBPR) are mostly developed and utilised by digitally advanced countries. For instance, questions regarding the design, implementation, feasibility, and affordability of trustmarks or certification mechanisms for data transfers are critical for MSMEs based in developing countries. Developing countries may also need technical assistance to participate meaningfully in the development of such trustmarks and certifications in the appropriate fora. Future digital trade rules must thus account for these gaps in current FTAs.

26  Art. 4.2(3).
27  Art. 4.2(6).
28  Art. 19(8), United States-Mexico-Canada Agreement (2019). https://ustr.gov/trade-agreements/free-trade-agreements/united-states-mexico-canada-agreement


6. CONCLUSION

The expansion of the digital economy, and the pace at which it is upending our world, has created a digital and data divide between countries. Large numbers of people remain unconnected to the internet, unable to experience the potential opportunities and convenience it can provide. Further, richer countries are usually the sole suppliers of key digital technologies, while many smaller and developing nations are reduced to being mere buyers. This misalignment is also reflected in the global regulatory framework, where developed countries lead the development of digital regulations, while developing countries have little say in shaping their own regulatory paths. This state of affairs has created anxiety amongst policy-makers across the world, reflected in the high-pitched politics of protectionism: the enactment of inward-looking policies such as limiting data flows, territorialising data, imposing tariffs on its movement, and fragmenting the internet across the world, especially in developing countries.
Trade law and policy can and should play a role in allaying these fears and ensuring an open and robust internet that supports opportunities for digital trade. As a starting point, trade negotiators must become more aware of and attuned to the digital development needs of developing countries, such as enabling requisite SDT measures, developing capacity-building packages, and enabling meaningful transfer of technology. At the same time, developing countries also need to recognise the importance of global interconnectivity and refrain from imposing unnecessarily trade-restrictive measures. This chapter demonstrates that several mutual interests exist that can be explored further to develop broad principles.

BIBLIOGRAPHY

Aaronson, S.A. & Leblond, P. (2018). Another Digital Divide: The Rise of Data Realms and its Implications for the WTO. Journal of International Economic Law, 21(2), 245–272. Retrieved from https://doi.org/10.1093/jiel/jgy019
Abernethy, A. (2022). The Promise of Digital Health: Then, Now, and the Future. National Academy of Medicine. Retrieved from https://nam.edu/the-promise-of-digital-health-then-now-and-the-future/
Agarwal, B. & Mishra, N. (2022). Addressing the Global Data Divide through Digital Trade Law. Trade, Law and Development, 14, 311.
Andrenelli, A. & González, J.L. (2019). Electronic transmissions and international trade – shedding new light on the moratorium debate. OECD Trade Policy Papers No. 233. Retrieved from https://www.oecd-ilibrary.org/trade/electronic-transmissions-and-international-trade-shedding-new-light-on-the-moratorium-debate_57b50a4b-en
Ankeny, C. (2016). The Costs of Data Localization. ITI. Retrieved from https://www.itic.org/news-events/techwonk-blog/the-costs-of-data-localization
Australian Government DFAT. (n.d.). Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP). Retrieved from https://www.dfat.gov.au/trade/agreements/in-force/cptpp/comprehensive-and-progressive-agreement-for-trans-pacific-partnership
Baker, S.A., Lichtenbaum, P., Yeo, M.S. & Shenk, M.D. (2001). E-Products and the WTO. The International Lawyer, 35(1), 5–21. Retrieved from https://www.jstor.org/stable/40707591
Banga, R. (2020). Should digitally delivered products be exempted from customs duties? UNCTAD. Retrieved from https://unctad.org/news/should-digitally-delivered-products-be-exempted-customs-duties
Basu, A. (2021). Can the WTO build consensus on digital trade? Hinrich Foundation. Retrieved from https://www.hinrichfoundation.com/research/article/wto/can-the-wto-build-consensus-on-digital-trade/
BBC. (2021). Tech giants' profits soar as pandemic boom continues. Retrieved from https://www.bbc.com/news/business-57979268

Benoit, P. & Tu, K. (2020). Is China Still a Developing Country? And Why It Matters for Energy and Climate. Columbia Center on Global Energy Policy. Retrieved from https://www.energypolicy.columbia.edu/research/report/china-still-developing-country-and-why-it-matters-energy-and-climate
Blake, P. & Wadhwa, D. (2020). 2020 Year in Review: The impact of COVID-19 in 12 charts. World Bank Blogs. Retrieved from https://blogs.worldbank.org/voices/2020-year-review-impact-covid-19-12-charts
Blunt, M., Davidsen, S., Agarwal, S., Pfeil, H. & Schott, B. (2017). One-Stop Shops in Vietnam: Changing the Face of Public Administration for Citizens and Businesses through a Single Door to Multiple Services. World Bank. Retrieved from https://openknowledge.worldbank.org/handle/10986/27487
Boas, T., Dunning, T. & Bussell, J. (2005). Will the Digital Revolution Revolutionize Development? Drawing Together the Debate. Studies in Comparative International Development, 40(2), 95–110. Retrieved from https://doi.org/10.1007/bf02686296
Bradford, A. (2012). The Brussels Effect. Northwestern University Law Review, 107(1), 1–67. Retrieved from https://northwesternlawreview.org/issues/the-brussels-effect/
Burman, A. & Sharma, U. (2021). How Would Data Localization Benefit India? Carnegie India. Retrieved from https://carnegieindia.org/2021/04/14/how-would-data-localization-benefit-india-pub-84291
Burri, M. & Polanco, R. (2020). Digital Trade Provisions in Preferential Trade Agreements: Introducing a New Dataset. Journal of International Economic Law, 23(1), 187–220. Retrieved from https://doi.org/10.1093/jiel/jgz044
Chakraborty, S. (2019). India refuses to join e-commerce talks at WTO, says rules to hurt country. Business Standard. Retrieved from https://www.business-standard.com/article/economy-policy/india-refuses-to-join-e-commerce-talks-at-wto-says-rules-to-hurt-country-119022500014_1.html
Chander, A. (2019). The Internet of Things: Both Goods and Services. World Trade Review, 18(1), 9–22.
Chutel, L. (2018). China is exporting facial recognition software to Africa, expanding its vast database. Quartz. Retrieved from https://qz.com/africa/1287675/china-is-exporting-facial-recognition-to-africa-ensuring-ai-dominance-through-diversity
Cilliers, J. (2022). China vs the West: a contest that will hurt Africa's future. Institute for Security Studies. Retrieved from https://www.polity.org.za/article/china-vs-the-west-a-contest-that-will-hurt-africas-future-2022-11-07
Ciuriak, D. (2018). Rethinking Industrial Policy for the Data-driven Economy. CIGI Papers No. 192. Retrieved from https://www.cigionline.org/static/documents/documents/Paper%20no.192web.pdf
Coleman, D. (2019). Digital Colonialism: The 21st Century Scramble for Africa through the Extraction and Control of User Data and the Limitations of Data Protection Laws. Michigan Journal of Race and Law, 24, 417–439. Retrieved from https://repository.law.umich.edu/mjrl/vol24/iss2/6/
Cory, N. (2017). Cross-Border Data Flows: Where Are the Barriers, and What Do They Cost? ITIF. Retrieved from https://itif.org/publications/2017/05/01/cross-border-data-flows-where-are-barriers-and-what-do-they-cost/
Cory, N. (2019). The False Appeal of Data Nationalism: Why the Value of Data Comes From How It's Used, Not Where It's Stored. ITIF. Retrieved from https://itif.org/publications/2019/04/01/false-appeal-data-nationalism-why-value-data-comes-how-its-used-not-where/
Cory, N. & Dascoli, L. (2021). How Barriers to Cross-Border Data Flows Are Spreading Globally, What They Cost, and How to Address Them. ITIF. Retrieved from https://itif.org/publications/2021/07/19/how-barriers-cross-border-data-flows-are-spreading-globally-what-they-cost/
D'mello, A. (2020). Cybersecurity index shows the most exposed countries: Afghanistan tops the list. IoT Now. Retrieved from https://www.iot-now.com/2020/06/08/103342-cybersecurity-exposure-index-the-most-exposed-countries-afghanistan-tops-the-list/
ET Government. (2021). Digital India takes center-stage as govt issues digital guidelines for public procurement. Retrieved from https://government.economictimes.indiatimes.com/news/digital-india/digital-india-takes-centerstage-as-govt-issues-digital-guidelines-for-public-procurement/87464356
Fleuter, S. (2016). The Role of Digital Products Under the WTO: A New Framework for GATT and GATS Classification. Chicago Journal of International Law, 17(1), 153–177. Retrieved from https://chicagounbound.uchicago.edu/cjil/vol17/iss1/5/

Fuchs, C. & Horak, E. (2008). Africa and the digital divide. Telematics and Informatics, 25, 99–116. Retrieved from https://doi.org/10.1016/j.tele.2006.06.004
Gal, M. & Oshrit, A. (2020). The Competitive Effects of the GDPR. Journal of Competition Law and Economics, 16(3), 349–391. Retrieved from https://doi.org/10.1093/joclec/nhaa012
González, J.L., Casalini, F. & Porras, J. (2022). A Preliminary Mapping of Data Localisation Measures. OECD. Retrieved from https://www.oecd-ilibrary.org/trade/a-preliminary-mapping-of-data-localisation-measures_c5ca3fed-en
Graham, C. (2002). Strengthening Institutional Capacity in Poor Countries: Shoring Up Institutions, Reducing Global Poverty. Brookings. Retrieved from https://www.brookings.edu/research/strengthening-institutional-capacity-in-poor-countries-shoring-up-institutions-reducing-global-poverty/
Henry, L. (2019). Bridging the urban-rural digital divide and mobilizing technology for poverty eradication: challenges and gaps. United Nations. Retrieved from https://www.un.org/development/desa/dspd/wp-content/uploads/sites/22/2019/02/Henry-Bridging-Digital-Divide.pdf
Herman, L. (2010). Multilateralising Regionalism: The Case of E-Commerce. OECD Trade Policy Papers. Retrieved from https://doi.org/10.1787/5kmbjx6gw69x-en
Joshi, D. (2022). What Could the Future of Indian Data Protection Law Look Like? The Wire. Retrieved from https://thewire.in/tech/future-of-data-protection-law-india
Karar, H. (2019). Algorithmic Capitalism and the Digital Divide in Sub-Saharan Africa. Journal of Developing Societies, 35(4), 514–537. Retrieved from https://doi.org/10.1177/0169796x19890758
Kathuria, R., Kedia, M., Varma, G. & Bagchi, K. (2019). Economic Implications of Cross-Border Data Flows. ICRIER. Retrieved from https://www.icrier.org/pdf/Economic_Implications_of_Cross-Border_Data_Flows.pdf
Kilic, B. & Avila, R. (n.d.). Crossborder Data Flows and Privacy. Public Citizen. Retrieved from https://www.citizen.org/article/crossborder-data-flows-privacy/
Kwak, D. (2021). No More Strategical Neutrality on Technological Neutrality: Technological Neutrality as a Bridge Between the Analogue Trading Regime and Digital Trade. World Trade Review, 21(1), 18–32. Retrieved from https://doi.org/10.1017/S1474745620000580
Kwet, M. (2019). Digital colonialism: US empire and the new imperialism in the Global South. Race & Class, 60(4), 3–26. Retrieved from https://doi.org/10.1177/0306396818823172
Lavery, M.P.J., et al. (2018). Tackling Africa's digital divide. Nature Photonics, 12(5), 249–252. Retrieved from https://doi.org/10.1038/s41566-018-0162-z
Lee-Makiyama, H. & Narayanan, B. (2019). The Economic Losses From Ending The WTO Moratorium On Electronic Transmissions. ECIPE Policy Brief No. 3. Retrieved from https://ecipe.org/publications/moratorium/
Macrotrends. (2022a). Meta Platforms Net Income 2010–2022. Retrieved from https://www.macrotrends.net/stocks/charts/META/meta-platforms/net-income
Macrotrends. (2022b). Alphabet Revenue 2010–2022. Retrieved from https://www.macrotrends.net/stocks/charts/GOOG/alphabet/revenue
Mannion, C. (2020). Data Imperialism: The GDPR's Disastrous Impact on Africa's E-Commerce Markets. Vanderbilt Journal of Transnational Law, 53(2), 685–711. Retrieved from https://scholarship.law.vanderbilt.edu/vjtl/vol53/iss2/6/
Martínez-Zarzoso, I. & Chelala, S. (2021). Trade agreements and international technology transfer. Review of World Economics, 157, 631–665. Retrieved from https://doi.org/10.1007/s10290-021-00420-7
Marx, K. & Engels, F. (1848). Manifesto of the Communist Party. Retrieved from https://www.marxists.org/admin/books/manifesto/Manifesto.pdf
McCann, D., Patel, O. & Ruiz, J. (2020). The Cost of Data Inadequacy. New Economics Foundation. Retrieved from https://neweconomics.org/2020/11/the-cost-of-data-inadequacy
Michalopoulos, C. (2002). The Role of Special Differential Treatment for Developing Countries in GATT and the World Trade Organization. World Bank Policy Research Working Paper No. 2388. Retrieved from https://openknowledge.worldbank.org/handle/10986/19819
Miro, M. (2022). How eWallets Can Help Address Financial Inclusion Challenges. Forbes. Retrieved from https://www.forbes.com/sites/forbesbusinesscouncil/2022/07/21/how-ewallets-can-help-address-financial-inclusion-challenges/

Mishra, N. (2021). Data Governance and Digital Trade in India: Losing Sight of the Forest for the Trees? ANU College of Law Research Paper No. 21.16. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3835497
Mishra, N. (2022). Can trade agreements narrow the global data divide? Hinrich Foundation. Retrieved from https://www.hinrichfoundation.com/research/wp/digital/trade-agreements-global-data-divide/
Mitchell, A. & Mishra, N. (2021). WTO Law and Cross-Border Data Flows – An Unfinished Agenda. In M. Burri (Ed.), Global Trade Law and Policy in the Age of Big Data (pp. 83–112). Retrieved from https://doi.org/10.1017/9781108919234.006
Mubarak, F., Suomi, R. & Kantola, S.P. (2020). Confirming the links between socio-economic variables and digitalization worldwide: the unsettled debate on digital divide. Journal of Information, Communication and Ethics in Society, 18(3), 415–430. Retrieved from https://doi.org/10.1108/jices-02-2019-0021
Mukherjee, A. (2022). India's new rules for data privacy may be more like China's than Europe's. Business Standard. Retrieved from https://www.business-standard.com/article/economy-policy/india-s-new-rules-for-data-privacy-may-be-more-like-china-s-than-europe-s-122080900115_1.html
Murthy, G. & Medine, D. (2020). Making Data Work for the Poor – New Approaches to Data Protection and Privacy. CGAP. Retrieved from https://www.cgap.org/sites/default/files/publications/2020_01_Focus_Note_Making_Data_Work_for_Poor_0.pdf
Murthy, K.V.B., Kalsie, A. & Shankar, R. (2021). Digital economy in a global perspective: is there a digital divide? Transnational Corporations Review, 13(1), 1–15. Retrieved from https://doi.org/10.1080/19186444.2020.1871257
Ndung'u, N. & Signé, L. (2022). The Fourth Industrial Revolution and digitization will transform Africa into a global powerhouse. Brookings. Retrieved from https://www.brookings.edu/research/the-fourth-industrial-revolution-and-digitization-will-transform-africa-into-a-global-powerhouse/
Ndung'u, N. (2017). Practitioner's Insight: M-Pesa, a success story of digital financial inclusion. Blavatnik School of Government. Retrieved from https://www.geg.ox.ac.uk/publication/practitioners-insight-m-pesa-success-story-digital-financial-inclusion
Nzume, C. (2021). Slowly but surely, data protection regulations expand throughout Africa. IAPP. Retrieved from https://iapp.org/news/a/slowly-but-surely-data-protection-regulations-expand-throughout-africa/
O'Hara, K. & Hall, W. (2021). Four Internets: Data, Geopolitics, and the Governance of Cyberspace. Oxford: Oxford University Press.
OECDiLibrary. (2021). Rethinking rulemaking through international regulatory co-operation. Retrieved from https://www.oecd-ilibrary.org/sites/38b0fdb1-en/1/3/4/index.html
Pardo, T.A. (2000). Realizing the Promise of Digital Government: It's More than Building a Web Site. Center for Technology in Government. Retrieved from https://ctg.albany.edu/media/pubs/pdfs/realizing_the_promise.pdf
Peng, S.Y. (2012). Renegotiate the WTO "Schedules of Commitments"? Technological Development and Treaty Interpretation. Cornell International Law Journal, 45(2), 403–430. Retrieved from https://scholarship.law.cornell.edu/cilj/vol45/iss2/3/
Peng, S.Y. (2022). The Uneasy Interplay between Digital Inequality and International Economic Law. European Journal of International Law, 33(1), 205–236. Retrieved from https://doi.org/10.1093/ejil/chac019
Ponciano, J. (2022). The World's Largest Tech Companies In 2022: Apple Still Dominates As Brutal Market Selloff Wipes Trillions In Market Value. Forbes. Retrieved from https://www.forbes.com/sites/jonathanponciano/2022/05/12/the-worlds-largest-technology-companies-in-2022-apple-still-dominates-as-brutal-market-selloff-wipes-trillions-in-market-value/
Reuters. (2017). WTO meeting ends in stalemate. The Hindu. Retrieved from https://www.thehindu.com/business/Economy/wto-meeting-ends-in-stalemate/article21618425.ece
Selinger, E. & Hartzog, W. (2019). The Inconsentability of Facial Surveillance. Loyola Law Review, 66, 101–122. Retrieved from https://scholarship.law.bu.edu/faculty_scholarship/3066/

314  Research handbook on law and technology Singh, A.P. & Ahuja, M. (2006). Evaluation of Computerisation of Land Records in Karnataka. EPW, 41(1). Retrieved from https://www​.epw​.in​/journal​/2006​/01​/special​-articles​/evaluation​-computerisa tion​-land​-records​-karnataka​.html Steck, C. (2020). The fragmentation of the technological world. Telefonica. Retrieved from https://www​ .telefonica​.com ​/en ​/communication​-room ​/ blog​/the​-fragmentation​-of​-the​-technological​-world/ Tsunashima, T. (2020). China rises as world’s data superpower as internet fractures. Nikkei Asia. Retrieved  from  https://asia​.nikkei​.com ​/Spotlight ​/Century​- of​-Data ​/China​-rises​-as​-world​-s​- data​ -superpower​-as​-internet​-fractures Tyler Jackson, A. & Roest, J. (2017). China’s Alipay and WeChat Pay: Reaching Rural Users. World Bank. Retrieved from https://documents​.worldbank​.org​/en ​/publication ​/documents​-reports​/documentdetail​ /451921533193590101​/china​-s​-alipay​-and​-wechat​-pay​-reaching​-rural​-users UNCTAD. (2014). Transfer of technology and knowledge-sharing for development: Science, technology and innovation issues for developing countries. UNCTAD Current Studies on Science, Technology and Innovation, No. 8. Retrieved from https://unctad​.org​/webflyer​/transfer​-technology​-and​ -knowledge​-sharing​-development​-science​-technology​-and​-innovation University of Lucerne. (2022 June). TAPED A Dataset on Digital Trade Provisions. Retrieved from https://www​.unilu​.ch ​/en ​/faculties​/faculty​-of​-law​/professorships​/ burri​-mira ​/research ​/taped/ University of Minnesota. (2016). Principles of Economics. Retrieved from https://open​.lib​.umn​.edu​/ principleseconomics​/chapter​/17​-3​-restrictions​-on​-international​-trade/ USAid & mSTAR. (2019). India Digital Financial Inclusion. USAid. https://www​.usaid​.gov​/sites​/default​ /files​/documents​/15396​/mSTAR​_IndiaDFI​_Report​_DRAFT​_ FINAL​.pdf van der Made, J. (2021). Chinese tech, ignored by the West, is taking over Africa’s cyberspace. RFI. Retrieved from https://www​.rfi​.fr​/en ​/science​-and​-technology​/20210722​-chinese​-tech​-ignored​-by​-the​ -west​-is​-taking​-over​-africa​-s​-cyberspace Weber, S. (2017). Data, Development and Growth. Business & Politics, 19(3), 397-423. Retrieved from https://doi​.org​/10​.1017​/ bap​.2017.3 Wellings, J. & Levine, M.H. (2009). The Digital Promise: Transforming Learning with Innovative Uses of Technology. Apple. Retrieved from https://www​.intel​.com​/content​/dam​/doc​/white​-paper​/ education​-the​-digital​-promise​-transforming​-learning​-with​-innovative​-uses​-of​-technology​-paper​.pdf Woetzel, J., et al. (2017). China’s digital economy: A leading global force. McKinsey. Retrieved from https://www​.mckinsey​.com ​/featured​-insights​/china ​/chinas​-digital​-economy​-a​-leading​-global​-force World Bank. (2002). The E-Government Handbook For Developing Countries. Retrieved from https:// documents1​.worldbank​.org ​/curated ​/en ​/317081468164642250​/pdf​/320 ​450e​govh​a ndb​ook0​1pub​l ic1​ 2002​111114​.pdf World Bank. (2022). Charges for the use of intellectual property, receipts (BoP, current US$). Retrieved from https://data​.worldbank​.org​/indicator​/ BX​.GSR​.ROYL​.CD WTO. (2022). E-commerce negotiations resume with call for intensified efforts in 2022. Retrieved from https://www​.wto​.org​/english​/news​_e​/news22​_e​/jsec​_04feb22​_e​.htm WTO. (n.d). Joint Initiative on E-commerce. 
Retrieved from https://www​.wto​.org​/english​/tratop​_e​/ ecom​_e​/joint​_statement​_e​.htm Wunsch-Vincent, S., Ortino, F., Marceau, G., Shaffer, G. & Schefer, K.N. (2006). The WTO, the Internet and Trade in Digital Products: EC-US Perspectives. London: Bloomsbury Publishing Yeoh, K. & Chen, C. (2022). China-US Competition: Who Will Win the Digital Infrastructure Contest. The Diplomat. Retrieved from https://thediplomat​.com ​/2022​/12​/china​-us​-competition​-who​-will​-win​ -the​-digital​-infrastructure​-contest​%EF​%BC​%9F/ Young, J.C. (2019). The new knowledge politics of digital colonialism. EPA: Economy and Space, 51(7), 1424-1441. https://doi​.org​/10​.1177​/0308518x19858998

19. Perspectives on digital constitutionalism
Francisco de Abreu Duarte, Giovanni De Gregorio and Angelo Jr Golia

1. INTRODUCTION

Since the end of the twentieth century, digital technologies have increasingly pushed society towards an “onlife” dimension (Floridi, 2015). This process influences the spaces where rights and powers are exercised (Cohen, 2021; Pasquale, 2015), thus moving the perspective from a unitary source of power to a networked system of public and private actors that contribute to shaping constitutional values. This shift is not without constitutional challenges. Elon Musk’s takeover of Twitter, for example, illustrated the dangers of entrusting a single person with online rights such as freedom of speech. When Musk decided to “free” Twitter from its content moderation prison (Elon Musk [@elonmusk], 2022), he adopted a free-speech libertarian position that partially disregarded the content safeguards previously in place. This libertarian approach meant erasing some policies on disinformation, readmitting previously banned accounts, and disbanding content moderation structures (e.g. the Trust and Safety Council). Most importantly, all of this was implemented unilaterally by Twitter, without any say or action by Twitter users. While classic liberal constitutionalism in Europe reacted with mistrust, as in the case of the European Union (Thierry Breton [@ThierryBreton], 2022), users were largely subject to Musk’s unilateral decisions. Ultimately, the takeover raised questions of legitimacy, rule of law and representation in the decision-making of the digital age.

Discussing these and other similar matters under a common framework has been the objective of digital constitutionalism (Celeste, 2018; De Gregorio, 2021), often described as a project “that seeks to articulate and realize appropriate standards of legitimacy for governance in the digital age” (Suzor, 2018). The digitisation of daily life has led scholars to discuss the role of rights and powers, thus, de facto, triggering a debate on the future of constitutionalism in the digital age. This space of discussion usually combines the technological dimension, based on digital technologies such as automated systems to process data or moderate content, with the political and legal theories born in the eighteenth century, particularly the Lockean idea that the power of governments should be legally limited and that their legitimacy depends upon complying with these limitations (Grimm, 2016). This means that, more than a buzzword, digital constitutionalism promises to provide a framework to discuss the challenges to rights and freedoms, and the exercise of powers, emerging from the digitalisation of society. Although digital constitutionalism is often used to analyse matters of online speech and platform governance, its scope is broader. It encompasses rights beyond freedom of expression, such as data protection, non-discrimination, property, security, or access to the internet, to name but a few. In all of them, the core idea is to limit arbitrary power and protect fundamental rights in the digital age.

In essence, digital constitutionalism does not aim to revolutionise those two pillars of modern constitutionalism. Instead, it aims to understand how to interpret the (still hidden) role of constitutionalism in the digital age, where powers tend to relocate from public to private, from local to global, from unity to pluralism. As Suzor observes, “digital constitutionalism requires us to develop new ways of limiting abuses of power in a complex system that includes many different governments, businesses, and civil society organisations” (Suzor, 2019). Indeed, digital constitutionalism is shaped not only by liberal constitutionalism but also by global standards and social practices, norms and procedures. Digital constitutionalism is not a monolithic concept: it looks both at the normative order of the internet (Kettemann, 2021) and at the traditional characteristics of modern constitutionalism. It is, therefore, the expression of different constitutional normativities applied to digital technologies, making liberal constitutionalism only one point of reference in the digital age.

First, liberal constitutionalism still plays a critical role. Whether accurate or not in historical and sociological terms, modern constitutionalism – as a theory – has been traditionally linked to the emergence of states as specific forms of political organisation, each with their local traditions, cultures and identities. This is primarily because, even in a phase of internationalisation of constitutional law (Bartole, 2020; Klabbers et al., 2009), or global constitutionalism (Wiener et al., 2012), constitutions still represent the identity and values of a certain community which, by definition, is connected to certain traditions and territory. Although the protection of constitutional rights and the rule of law are missions shared by constitutional democracies, how these values are effectively protected depends on the political, institutional, and social dynamics of different systems. Constitutional democracies rely on policies to address common challenges but are themselves based on different values. For instance, the way in which freedom of expression promotes or limits platform power across the Atlantic reveals different constitutional sensitivities (De Gregorio, 2022).

Second, digital constitutionalism is driven by norms and procedures coming directly from societal phenomena. Digital constitutionalism cannot be disassociated from theories of societal constitutionalism (Teubner, 2012) or, more broadly, pluralism (MacCormick, 1993). Even if society participates as part of the liberal constitutional architecture in the local and global arenas, its contribution is more complex, involving the development of autonomous orders. These orders have their own processes, elements, structures and identities that do not necessarily overlap or align with the constitutional values of nation-state constitutions or global standards. From civil society to the economy or religion, up to the digital world itself, these orders have their own normative interactions that shape digital constitutionalism.

Third, global dynamics equally contribute to shaping digital constitutionalism. Beyond the traditional boundaries of political and legal constitutionalism (Krisch, 2010), transnational organisations have consolidated their role as standard-setters (Büthe & Mattli, 2010) and, more broadly, decision-makers. The power of international organisations or transnational corporations to make decisions over regulating content or data flows subjects the local dimension, and the rule of law, to an increasing process of transnational hybridisation.
In this case, technological standards and norms are increasingly shaped by international organisations, private transnational associations and transnational corporations, as underlined by the framework of business and human rights in the case of online platforms (Jorgensen, 2019), as well as by the development of technological standards by computer scientists in private labs. Likewise, the increasing power of online platforms to set standards of protection on a global scale is nothing other than a path of constitutionalisation beyond the traditional boundaries of liberal constitutionalism.

These different perspectives on digital constitutionalism put liberal constitutionalism under a broader umbrella, where the link between law and territory is being at least partially decentred by the link between norms and powers that come from different autonomous rationalities and shape each other through a process of mutual influence. Even outside the framework of digital technologies, constitutional law has struggled to maintain its role in relation to the consolidation of normative principles resulting from international organisations, transnational corporations and standard-setting entities. Even if constitutional democracies are based on the same ideas, they do not always have the same understanding of rights, powers and legitimation mechanisms, which can lead to different responses. Likewise, multiple entities influence the governance of digital technologies by imposing their internal values, while defining standards of protection that compete with the principles and safeguards of constitutional democracies.

In those three perspectives, some key questions find different answers. Seen from these different points of view, different subjective and objective sources make up digital constitutionalism. They provide overlapping or conflicting standards and safeguards to protect users. This chapter then aims to provide an overview of these perspectives – the liberal, the societal, and the global – to underline the potential normative path of digital constitutionalism. We argue that the key to understanding digital constitutionalism lies in the connection between those perspectives, and we propose a normative understanding of the narrative that ensures user representation, submission to the rule(s) of law and due process considerations, as well as remedies. The liberal, the societal and the global, as the three pillars of digital constitutionalism, return to the core of historical constitutionalism and propose ways to ensure the protection of fundamental rights and the limitation of powers in the digital age.

Figure 19.1  Three perspectives on digital constitutionalism

The first half of this chapter analyses the different perspectives on digital constitutionalism in turn – liberal (Section 2), societal (Section 3) and global (Section 4). The second half concludes with some normative remarks on the future of digital constitutionalism and on how those perspectives constitute the core of the narrative.

2. THE LIBERAL PERSPECTIVE

Digital constitutionalism finds its primary roots within the framework of legal and liberal constitutionalism. Constitutions emerged as a way to limit and organise governmental powers and, thus, to shield individuals from interference by public authorities. The idea of “power” has traditionally been conceived as a prerogative of public authorities (Sajó & Uitz, 2017). Indeed, constitutions provide different systems, or rather checks and balances, to deal with the exercise of public powers. Nonetheless, the spread of digital technologies and automated systems implemented by public and private actors across society has called this vertical approach into question. The primary threats to rights and freedoms no longer seem to come exclusively from public powers but from private actors that exercise quasi-public functions, particularly by governing spaces which are formally private while exerting in practice, and without any safeguard, functions traditionally vested in public authorities (De Gregorio, 2021). The case of biometric technologies developed by the private sector and then deployed for public surveillance, or the use of algorithmic systems to moderate online speech, are only some examples of how power is less and less public at its core.

This hybridisation of power is a critical challenge for liberal constitutionalism, which was not conceived as a system of barriers against the consolidation of paralegal systems or the exercise of private powers. In a certain sense, liberal constitutionalism aims to protect pluralism, thus creating the conditions for enlarging the scope of freedoms, particularly by limiting interferences by public actors, which can only intervene to protect fundamental rights in accordance with principles such as legality and proportionality. Constitutional doctrines of “horizontal effect” or “third-party effects” of fundamental rights (Tushnet, 2003), be they direct or indirect – which emerged as mechanisms to “compensate” the intrinsic verticality of constitutional law – remain limited in their scope of application and in their use by courts. So, while this constitutional turn from the vertical to the horizontal dimension has generally been the exception, the digital age has underlined how constitutionalism faces the crucial challenge of horizontality (Frantziou, 2019). In some senses, constitutionalism should no longer fear only the “empire of government”, but rather the “empire of private law” (Koskenniemi, 2013).

The digital environment is far from being outside any form of control. Apart from the interferences of public actors (Clark et al., 2017), the digital environment is indeed subject to the governance (or authority) of private actors designing the digital world that individuals experience in their daily lives (Zuboff, 2018). Google, Facebook, Amazon or Apple are paradigmatic examples of new digital forces competing with public authorities in the exercise of powers online. The increasing role of digital technologies has raised questions about the protection of fundamental rights and democracy, ultimately leading to a new digital constitutional phase at the door of the algorithmic society. From a constitutional standpoint, this revolution has led to a positive alteration of constitutional stability. At first glance, digital technologies appear to have fostered the exercise of rights and freedoms, thus promoting liberal constitutionalism as an engine of such a process.

Nonetheless, public actors are no longer – if they ever were – the only powerful regulators or sources of threats to rights and freedoms; they are just one piece of the fragmented framework of online governance. Even if states have not lost their power over the digital environment (Haggart et al., 2021), there are new actors expressing their politics, such as online platforms (Greene, 2018). As a result, constitutional democracies are increasingly cornered in the digital or algorithmic society. Particularly when looking beyond the transatlantic debate, the model of liberal constitutionalism has come under pressure not only from the consolidation of illiberal democracies but also from the decentralised position of public actors compared to private actors in setting standards, and in enforcing and balancing rights and freedoms in the digital age. When looking at constitutional democracies in Africa and Asia, this situation has contributed to narrowing the already tiny space for liberal constitutionalism, as underlined by attempts to criminalise online intermediaries or to shut down the internet (De Gregorio & Stremlau, 2020) as answers to the inequality between public and private actors in the digital age.

Still, this situation has not produced the same reactions among constitutional democracies. In the last 20 years, the strategies to address the challenges raised by digital technologies have become increasingly polarised, particularly in the European Union and the United States. After a first period of regulatory convergence based on neo-liberal positions at the end of the last century, the United States and the European Union have taken different paths (De Gregorio, 2022). Unlike the Western side of the Atlantic, the European Union has slowly complemented its liberal approach to digital technologies, which had been primarily influenced by the US legal framework (Christou & Simpson, 2006). At the end of the last century, the European Union focused primarily on promoting the growth of the internal market; this approach has since been enriched by a constitutional democratic strategy, also through judicial activism (Pollicino, 2021). The adoption of the General Data Protection Regulation (GDPR) – as a key piece of a broader regulatory strategy – has been crucial in the ongoing process of constitutionalisation of European data protection after the Lisbon Treaty. Likewise, the Digital Services Act and the Digital Markets Act are other paradigmatic examples showing the shift of paradigm in the Union towards more accountability of online platforms to protect European democratic values.

Looking solely at this liberal perspective as a tool for analysing digital constitutionalism does not capture the complexity of the narrative. In a more and more interconnected world, some of the traditional premises of liberal constitutionalism, such as the strict separation between public and private or autonomy through territory, make less and less sense. A traditional approach to constitutionalism fails to encompass the growing myriad of actors, institutions and normativities that regulate our online lives. Instead of focusing solely on the regulatory power of the state or the European Union, digital societal constitutionalists, inspired by systems theory and sociology, propose a more elaborate view of digital constitutionalism.
From this perspective, digital constitutionalism also becomes a way to look critically at different social systems, including those emerging directly from society, beyond law-making processes centred on public powers.

3. THE SOCIETAL PERSPECTIVE

As states’ regulatory approaches increasingly departed from classic notions of command-and-control, so should digital constitutionalism evolve away from single-minded liberal constitutionalism and embrace a wider view of jurisgenerative phenomena, understood as social processes of creation of legal meaning (Cover, 1983). This means rejecting the “rigid divide between state and society and between the public and private” (Golia, 2023) and replacing it with a constellation of autonomous yet structurally coupled societal orders (Teubner, 2011). Constitutional normativity is understood in a plural way, as the potential result of the interaction, collision and mutual reinforcement of semi-autonomous normative orders emerging from different societal fields with distinct rationalities and legitimacy principles ultimately irreducible to each other, each reflecting unique processes, elements, structures and identities. In their mutual interaction, such orders craft their autonomy through processes in which their original features may be reinforced (autopoiesis).

This view of digital constitutionalism is rooted in a broader theory of societal constitutionalism. According to Sciulli (1991), early societal constitutionalism constituted the perfect synthesis between two great theories of the twentieth century, namely Fuller’s procedural legality (Fuller, 1964) and Habermas’ communicative theory (Habermas, 1984, 1996). According to Sciulli, the only possible way to combine both views was to create a theory that analytically admitted and normatively promoted the existence of certain “collegial formations” through which private actors would organise and produce deliberative decision-making structures outside of the state. For Sciulli, societal constitutionalism constituted a framework of thought which researchers could use to compare different realities under a common umbrella (Sciulli, 1988).

Teubner brought the concept decisively to legal studies, and most digital societal constitutionalists today derive much from his works (Golia & Teubner, 2021a; Teubner & Beckers, 2013; Teubner, 2012, 2017). In Teubner’s works, which build on but also differ in several respects from Luhmann’s theory of functional differentiation (Luhmann, 2012–2013), societal constitutionalism is developed as a critical theory of the law challenging traditional analytical tenets of constitutionalism: state-centricity, an excessive focus on power as the main source of constitutional questions, and a conception of rights focused on the protection of individuals while overlooking their institutional and systemic dimensions. Inspired by systems theory, Teubner rejected the immediacy of politics typical of liberalism and replaced it with an idea of constitutionalism in which different “autonomous societal orders” exist simultaneously. These autonomous societal orders contribute to creating multiple “civil constitutions” – both in international organisational systems, such as the WTO, the European Union or NAFTA, and privately, as in the examples of lex mercatoria, lex informatica, lex sportiva, the private orderings of collective actors such as corporations, or the privatisation of core state functions such as judicial power – digital constitutionalism being an expression of such an interaction. Only the interaction of the norms emerging from such ever-changing fragments can actually found and constrain power and other social processes. To be sure, societal constitutionalism shares with liberal constitutionalism the goal of finding the best way to protect social systems from the dangers of totalitarianism and abuses of power.
However, it expands its target, highlighting that constitutional problems derive not only from the power imperative of politics or the commodification/monetisation imperative of the economy, but also from the knowledge imperative of science, the innovation imperative of technology, the news/information cycle imperative of the press, and the juridification imperative of law. Threats to human and ecological integrity derive not only from the purposeful actions of individuated actors, but also from depersonalised processes linked to the reproduction and accumulation of power, money, information, knowledge and juridical authority, among others.

Effective constitutionalisation, then, takes place only to the extent that norms emerging within and between different social systems both found and constrain such phenomena. In this regard, societal constitutionalism features a certain scepticism towards traditional regulatory approaches to controlling such abuses, namely those of a normative nature (such as elections, state coercion and classic liberal constitutional structures). Societal collegial formations or autonomous orders are called upon to participate in the preservation of a society’s multiple rationalities while counteracting authoritarian unitary drifts stemming from each of them.

More generally, the societal perspective of digital constitutionalism focuses on the ways in which digital technologies affect and shape the concrete social existence of individuals, collective actors and social systems alike. In that sense, it highlights that digital technologies and globalisation have not created, but rather made more visible, questions largely left unaddressed by state-centred liberal constitutionalism. It is also a critical perspective: it aims to deconstruct “acquired” constitutional concepts and tenets, to show how they sometimes contribute to hiding and obscuring subtler forms of societal power and colonisation. According to this view, digital constitutionalism should encompass all of this complexity and reframe the public–private dichotomy (Golia & Teubner, 2021b), identifying the “private governments” within society and building new arenas of conflict, contestation and – possibly – democratisation within them (Teubner, 2018). In this sense, digital constitutionalism should look just as much to liberal constitutionalism and public regulation, such as the AI Act and the Digital Services Act, as to other normative orders, including companies, NGOs and civil society, and strategise their combinations to reach the ultimate goal of founding and constraining the related social processes.

In the case of online speech, this approach could lead, for example, to investigating how Wikipedia manages internal dissent, what core rules exist to guide authors’ conduct (for example, why one should not write about oneself), and what sanction mechanisms are put in place in case of breach. Applied to specific platforms, it could mean understanding how Airbnb sets its “constitutional rules” on non-discrimination (Sheffi, 2020), how Facebook sets foundational rules that resemble a social contract between users and the platform (Shadmy, 2019), or how rules concerning net neutrality are effectively constitutionalised (Graber, 2023). Likewise, the societal perspective calls for including within the analytical and normative targets of digital constitutionalism the effects of digital platform communication – in itself, a depersonalised social process – on the institutional integrity of science and the press, beyond the protection of individuals’ rights (Kunz, 2023).
To sum up, by developing criteria to identify the emergence of authentically constitutional phenomena – namely the presence of (i) constitutional functions, (ii) constitutional arenas, (iii) constitutional processes and (iv) constitutional structures (Teubner, 2012) – societal constitutionalism provides an analytical framework to identify and possibly manage the complexity of autonomous orders existing within society and, in normative terms, to adopt regulatory strategies that take such complexity into consideration and strategise it.

4. THE GLOBAL PERSPECTIVE

The complexity brought by societal constitutionalism to the narrative of digital constitutionalism is fully continued in the global perspective. Similarly to other global trends, the rise of the internet and the development of digital technologies have provided an example of different patterns of convergence (Walker, 2015), usually named “globalisation”, in which the state-centric model has started to lose its power (Ip, 2010). The idea of state sovereignty is still relevant (Walker, 2020) but under pressure (Johns, 2021). Territorial borders are challenged by “a world in which jurisdictional borders collapse, and in which goods, services, people and information ‘flow across seamless national borders’” (Hirschl & Shachar, 2019). Therefore, norms are not only the result of states’ production but also come from multiple sources on a global scale (Berman, 2012), thus leading to discussion of the role of global law from different perspectives (Walker, 2015; Ziccardi-Capaldo, 2008).

By refusing to focus solely on the role of the state or the role of society, and advocating instead for an essential combination of both, digital global constitutionalists support the existence of a transnational constitutional moment. In the search for a constitutional equilibrium, this perspective also agrees with the ultimate objective of liberal constitutionalism: to strike a balance between organised and spontaneous spheres. Authors such as Gill, Redeker or Gasser (Gill & Cutler, 2014; Redeker et al., 2018), as well as Celeste (2022), have advocated for a plural view of constitutionalism in which different sources interact and counteract to adapt to the new nature of the transnational digital society. Documents such as internet charters or internet bills of rights are good examples of how societal movements can create norms beyond the territory of nation-states and incorporate contrasting views from different communities. While this was predicted by, and is consistent with, societal constitutionalism, digital global constitutionalists emphasise the importance of the state and liberal institutions in enforcing these norms. Here too, limiting arbitrary power and protecting fundamental rights seem to dominate the discourse, with constant references to public international law documents such as the Universal Declaration of Human Rights, the International Covenants, and multiple human rights conventions. Regional expressions of this global digital constitutionalism, such as the European Union’s Declaration on Digital Rights and Principles or the Nigerian Digital Rights and Freedoms Bill, refer to fundamental rights texts. In this sense, global digital constitutionalism does not reject the core of constitutionalism but rather promotes its adaptation to a new transnational reality.

Besides, global standards play an increasing role in decision-making in digital spaces. The creation of content moderation standards by social media companies can be considered an exercise of transnational powers by private companies. Likewise, the development and spread of technological standards for artificial intelligence applications on a transnational scale shapes how technology affects constitutionalism from its global perspective. Therefore, the influence of global digital constitutionalism relates not only to bills of rights but also to the governance of transnational actors.

In many ways, global digital constitutionalism is connected with the other two perspectives, the liberal and the societal. For example, while internet charters are often non-binding documents from the perspective of state law, these instruments tend to include a wider range of stakeholders in their drafting process than other processes in internet governance (Cath, 2021).
Likewise, the application of global standards to moderate content can be considered an expression of digital societal constitutionalism. This multiplicity of sources in developing normative structures is typical of a societal view of constitutionalism, in which politics is only one among many different media through which autonomous orders are crafted. In this sense, global digital constitutionalism applauds societal complexity. Likewise, due to the transnational nature of this perspective and its connection to international law structures, states play a relevant role in the definition and enforcement of rights and the exercise of power. It is often through mechanisms of reception of the non-binding provisions of internet charters or bills of rights that the rights therein enshrined are upgraded to binding statutes. In this sense, global digital constitutionalism does not reject liberal digital constitutionalism, but rather accepts the key role that state institutions play in protecting rights and freedoms.

This connection between the former two perspectives naturally comes with challenges that are typical of such exercises. For example, by accepting the continuing role of states in the enforcement of fundamental rights, this perspective remains anchored in the public–private divide, inadvertently maintaining the hierarchy between public power and civil society. Through this, digital global constitutionalists somehow escape the difficult question of conflicting autonomous orders and the polycentric nature of transnational governance that lies at the core of societal perspectives (Golia & Teubner, 2021b). Similarly, much of the societal perspective emerged as a reaction to alleged blind spots of state-centred, liberal constitutional theory. And many constitutional theorists attached to traditional forms of political legitimation would still deny that, at the global level, any democratic or contestatory process worth the name is possible. Put otherwise, this connection does not aim to forcefully build coherence and/or agreement at all costs, as some key differences undeniably remain.

5. A NORMATIVE PATH: THE FUTURE OF DIGITAL CONSTITUTIONALISM

This systematisation is far from exhaustive, but it aims to reduce the complexity of the perspectives when building digital constitutionalism. These approaches share common traits that shape digital constitutionalism. An important takeaway from this exercise is that all three perspectives seem to agree on the fundamentals of constitutionalism: to limit arbitrary uses of power through the (rule of) law, and to promote the protection of fundamental rights, at both individual and institutional/systemic levels. After putting together these pieces of digital constitutionalism, these different points of view contribute to defining some common principles which, as a normative stepping stone, lead towards a theory of digital constitutionalism.

5.1 Rule(s) of Law

One common reference among all perspectives of digital constitutionalism is the idea of binding organised power (e.g. the state) to rules and principles that limit its arbitrary exercise. This could be described as the rule of law in a broad sense, with law reflecting a broader idea of rules and principles of conduct. This lies at the core of classic constitutionalism and its ambition to limit abuses of power (Grimm, 2005). However, all three perspectives struggle to agree on exactly what rules could perform that function and who is responsible for enacting them. While for liberal digital constitutionalists the answer is rather straightforward – the state or regional/international bodies – they quickly run into difficulties when dealing with the jurisdictional reach of the internet (Bradford, 2020). For societal constitutionalism, there are multiple actors producing constitutional norms from the bottom up, so the state is only one of the societal fragments. Further, societal constitutionalism claims that not only power (in the narrow sense) but also social processes based on money, truth and juridical authority as such may violate fundamental rights and thus need to be constrained. Likewise, global norms are not always developed through institutionalised processes in which states participate. They are primarily shaped by transnational corporations and international organisations (Zalnieriute, 2019) that tend to mirror processes of constitutionalisation.

Although far from new in contemporary legal systems (Radu et al., 2021), fragmentation comes at a cost. Conflicting orders without obvious conflict norms can result in decreased protection of fundamental rights while also threatening legal certainty and, therefore, the rule of law. The multiplicity of autonomous systems then calls for some sort of inter-systemic conflict rules, which could resemble other parts of the legal system in which hierarchy is absent (e.g. private international law). Whether such conflict rules exist and what they could look like is far from clear at this stage (Golia & Teubner, 2021a). Likewise, global digital constitutionalists resort to transnational documents such as internet charters or bills of rights to find their sources of digital constitutionalism and extract applicable rules. However, even they acknowledge the limitations of relying on non-binding rules to enforce fundamental rights. These are often limited by states’ willingness or unwillingness to welcome those norms into their own legal systems.

Nonetheless, answering this question is crucial, not only for the practicality of applying rules to a given case and protecting rights, but also for conceptual clarity. What distinguishes rules as “digitally constitutional” in a transnational setting populated by multiple normative orders (public, private, global)? Who creates them? Does any rule with an impact on the digital sphere immediately become “digital constitutionalism”? Without the safety of hierarchy rules and the separation of public and private typical of classic liberal constitutionalism, it is still unclear what these rules are and whether they are more than just private rules. In this sense, the key lies in the idea of the submission of power to the rule of law by both public and private institutions. This lies at the core of digital constitutionalism, on any of the perspectives. Principles such as transparency, accountability, representation or due process should be observed whenever organised power threatens fundamental rights, irrespective of that power’s nature. This framework also extends to the entanglements between public and private powers, which lead to forms of collaboration and conflict. Progressively reframing the dichotomy between private and public is societal constitutionalism’s greatest contribution. Adding to that a transnational dimension, which questions jurisdictional limits, is global constitutionalists’ great idea. To incorporate both into well-established liberal constitutionalism must be the role of digital constitutionalism, whose major challenge is to find exactly when a certain power (public or private) needs to be contained by the rule of law. Society will craft its own power-control structures but will often require the help of state institutions to enforce its rights.

5.2 Representation and Due Process

This limitation of power via the rule of law requires legitimacy to be effective. This is something on which all perspectives agree. Depending on the regulatory moment or actor under consideration, this representation changes. It can range from representation in rulemaking to representation in adjudication or the enforcement of rules. It can mean representation of individuals, private companies, societal structures such as NGOs, or states themselves.
Unsurprisingly, the three perspectives take different approaches to representation. In liberal constitutionalism’s early days, representation was closely tied to the rule of law and the limits on abuses of public power. Representation meant citizens’ representation, which then conferred legitimacy on rulemaking and granted the administration the power to execute decisions in their name. However, in the absence of a defined territory in the digital age, a key premise of classic liberal constitutionalism is removed. In contemporary times, due to its own representation challenges, the European model has favoured representation as due process. Pieces of legislation such as the Terrorist Content Online Regulation, the Copyright in the Digital Single Market Directive, or the recently passed Digital Services Act all profess this objective of empowering citizens with information (De Gregorio, 2022). In these situations, representation means letting people know that they can check organised power at any time, especially at the enforcement stage. Those pieces of legislation create rights of transparency, justification or appeal and enforce them against private power, empowering individuals with information and an ex post right of control. Simultaneously, private organised powers cooperate with public organised powers by means of co-regulation, as in the cases of the codes of conduct on hate speech or the code of practice on disinformation in the European Union.

Societal constitutionalism takes representation very seriously and is critical of liberal constitutional structures in this respect. In a sense, societal constitutionalism is a true “constitutionalism from below”, contrasting with the thinner, indirect representation of liberal constitutionalism. This perspective advocates for representation beyond the boundaries of public power and calls for each autonomous order to create its own constitutional arenas between organised and spontaneous spheres (Teubner, 2012, 2018). Yet, even there, those organised powers can sometimes welcome enormous spontaneous spheres (for example, social media platforms) whose representation is often reduced to Hirschman’s dynamics of Exit, Voice and Loyalty (Hirschman, 1970). The societal approach looks at each system as developing rules through its internal and independent rules of procedure, even if the outcome inevitably shapes other systems (through the idea of “legal irritants”) (Teubner, 1998). In this case, representation comes from within each system and raises fewer questions than under the liberal approach, which primarily struggles not to decentralise powers beyond national territory while protecting economic freedoms and keeping an open approach to the internal community.

From a global perspective, representation is subject to a process of bureaucratisation. Community standards and rules are developed by online platforms, while norms at the international level are shaped by influences that are far removed from local dimensions. In these cases, the user, or the citizen, plays a marginal role. Even if the Arab Spring or #Wallstreetbets have underlined the power of online communities, the rules of the digital environment are still not primarily driven by transparent and democratic processes. Representation constitutes another key point of convergence between the different perspectives. Digital constitutionalism must include representative structures for the different actors in play. This ensures that rights and powers are not shaped and limited in spaces that are far from the individual.
Nonetheless, considering the limits of ensuring full representation in a networked system of normativities and powers, it is critical to focus on remedies that aim to rebalance the position of the individual in the digital age.

5.3 Remedies

The questions about the protection of fundamental rights and the limitation of power in the digital age do not exclusively involve the rule of law and representation. Another important part is figuring out what can be done to ensure that checks and balances are not just formal limits on the use of power. From a liberal perspective, constitutional remedies involve the possibility of protecting rights through democratic institutions. Here, the possibility of extending the protection of fundamental rights horizontally, and the positive obligation of states to protect rights and freedoms, push courts and lawmakers, respectively, to play an active role in protecting rights and freedoms in the digital age. The CJEU has already underlined the critical role of extending remedies for users, as in the Google Spain decision (Post, 2018), when the European court recognised the right to be forgotten in relation to search engines through an extensive constitutional interpretation of the European Charter (Pollicino, 2021). However, traditional liberal institutions struggle to meet the demand for remedies in the digital age.

Other perspectives, such as those of digital societal constitutionalists and digital global constitutionalists, also consider remedies stemming from autonomous orders and processes of constitutionalisation beyond the state. Here, attention is given to normative experiences such as Meta’s Oversight Board. Importantly, however, the societal perspective does not necessarily advocate for “less government” and/or private (self-)regulation. On the contrary, it calls for regulatory strategies that take into consideration the existence of non-state normativities and combine them in ways which ultimately exploit the specific reflexivity of every field (Kunz, 2023; Motsi-Omoijiade, 2022). This is closely linked to representation in the digital environment. If private standards are developed outside a system of transparency and accountability and are legitimised only by users’ adherence, the role of oversight and review loses its power and its primary function, i.e. ensuring that constitutional values agreed by a community limit arbitrary decisions.

Even in this case, the European approach seems to provide an example of remedies for the exercise of powers in the digital age. The possibility for users to rely on redress mechanisms to request the review of content moderation decisions can be seen as an example of how liberal constitutionalism is tending to expand its boundaries, also taking into account the societal and global perspectives. Likewise, increasing the accountability of the actors involved in processing personal data is another path towards giving individuals new possibilities to complain about the exercise of power and the abuse of freedoms. Nonetheless, the questions about remedies in the digital age are not exclusively relevant to liberal constitutionalism. The more systems of justice are outsourced and governed by private organisations or standards bodies at the transnational level, the more critical the role of remedies outside the state will be for users and, therefore, citizens. Therefore, digital constitutionalism also supports the development of a system of remedies that aims to ensure that individuals are not left facing a lack or fragmentation of remedies in digital spaces.

6. CONCLUSIONS

These three perspectives on digital constitutionalism underline a path towards a new compromise, or social contract, in the digital age, where the limitation of power and the protection of fundamental rights move beyond the public–private dichotomy. The narrative welcomes a networked system of normativities, producing standards and norms coming from the liberal, societal and global perspectives. The normative issues for digital constitutionalism to address are then defining what the rule of law, representation and remedies should entail. Defining and compromising over those principles holds the key to a normative path of constitutionalism in the digital age.

This internal richness of points of view is far from negative. Nor is it an instrument to co-opt the symbolic capital of constitutionalism. Instead, it allows distinct discourses sharing broader normative goals to co-exist, interact and potentially compensate for each other’s limits. Further, such diversity also facilitates critical engagement and contributes to debunking co-option attempts. What matters is the shared ambition to build legal instruments that protect and constrain the dynamics of the digital code in its relation to society. It is precisely this focus on the way digital technologies and, more generally, digitisation impact the social existence of both individual and collective actors that makes “digital constitutionalism” “digital”. In a more traditional formulation, the goal is the translation and implementation of constitutional principles into different – both old and newly emerged – societal fields. Ultimately, a shared normative horizon of any constitutionalism, be it liberal, global or societal, defines this project.

BIBLIOGRAPHY

Bartole, S. (2020). The Internationalisation of Constitutional Law. Oxford: Hart.
Berman, P.S. (2012). Global Legal Pluralism: A Jurisprudence of Law Beyond Borders. Oxford: Oxford University Press.
Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford: Oxford University Press.
Büthe, T. & Mattli, W. (2010). Standards for Global Markets: Domestic and International Institutions for Setting International Product Standards. In Handbook on Multi-level Governance. Edward Elgar, 455.
Cath, C. (2021). The Technology We Choose to Create: Human Rights Advocacy in the Internet Engineering Task Force. Telecommunications Policy, 45(6), 102144.
Celeste, E. (2018). Digital Constitutionalism: Mapping the Constitutional Response to Digital Technology’s Challenges. SSRN Electronic Journal. Retrieved from https://doi.org/10.2139/ssrn.3219905
Celeste, E. (2022). Digital Constitutionalism: The Role of Internet Bills of Rights (1st ed.). London: Routledge. Retrieved from https://doi.org/10.4324/9781003256908
Christou, G. & Simpson, S. (2006). The Internet and Public–Private Governance in the European Union. Journal of Public Policy, 26(1), 43.
Clark, J., Faris, R., Morrison-Westphal, R., Noman, H., Tilton, C. & Zittrain, J. (2017). The Shifting Landscape of Global Internet Censorship. Berkman Klein Center for Internet & Society Research Publication.
Cohen, J. (2021). Between Truth and Power: The Legal Construction of Information Capitalism. Oxford: Oxford University Press.
De Gregorio, G. (2021). The Rise of Digital Constitutionalism in the European Union. International Journal of Constitutional Law, 19(1), 41–70. Retrieved from https://doi.org/10.1093/icon/moab001
De Gregorio, G. (2022). Digital Constitutionalism Across the Atlantic. Global Constitutionalism, 11(2), 297.
De Gregorio, G. (2022). Digital Constitutionalism in Europe: Reframing Rights and Powers in the Algorithmic Society. Cambridge: Cambridge University Press.
De Gregorio, G. & Stremlau, N. (2020). Internet Shutdowns and the Limits of Law. International Journal of Communication, 14, 4224.
Elon Musk [@elonmusk]. (2022, October 28). The Bird Is Freed [Tweet]. Twitter. Retrieved from https://twitter.com/elonmusk/status/1585841080431321088
Floridi, L. (Ed.). (2015). The Onlife Manifesto: Being Human in a Hyperconnected Era. Cham: Springer.
Frantziou, E. (2019). The Horizontal Effect of Fundamental Rights in the European Union. Oxford: Oxford University Press.

328  Research handbook on law and technology Fuller, L. (1964). The Morality of Law. New Haven: Yale University Press. Gill, S. & Cutler, A.C. (Eds.). (2014). New Constitutionalism and World Order. Cambridge: Cambridge University Press. Retrieved from https://doi​.org​/10​.1017​/CBO9781107284142 Golia, A.J. (2023). Critique of digital constitutionalism: Deconstruction and reconstruction from a societal perspective. Global Constitutionalism, FirstView, 1–31. Retrieved from https://doi​.org​/10​ .1017​/S2045381723000126. Golia, A.J. & Teubner, G. (2021a). Societal Constitutionalism (Theory of). In: Cremades, J., Hermida, C. (Eds.). Encyclopedia of Contemporary Constitutionalism. Springer, Cham. https://doi​.org​/10​ .1007​/978​-3​-319​-31739​-7​_111–1. Golia, A.J. & Teubner, G. (2021b). Societal Constitutionalism: Background, Theory, Debates. ICL Journal, 15(4), 357–411. Retrieved from https://doi​.org​/10​.1515​/icl​-2021​- 0023. Greene, L. (2018). Silicon States: The Power and Politics of Big Tech and What It Means for Our Future. Berkeley: Counterpoint. Grimm, D. (2005). The Constitution in the Process of Denationalization. Constellations, 12(4), 447– 463. Retrieved from https://doi​.org​/10​.1111​/j​.1351​- 0487​.2005​.00427.x Habermas, J. (1984). Reason and the Rationalization of Society. London: Heinemann. Habermas, J. (1996). Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy. Cambridge: MIT Press. Haggart, B., Tusikov, N. & Scholte, J.A. (Eds.). (2021). Power and Authority in Internet Governance Return of the State? London: Routledge. Hirschman, A.O. (1970). Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States. Cambridge: Harvard University Press. Hirschl, R. & Shachar, A. (2019). Spatial Statism. International Journal of Constitutional Law, 17(2), 387. Ip, E.C. (2010), Globalization and the Future of the Law of the Sovereign State. International Journal of Constitutional Law, 8(3), 636. Johns, F. (2021). The Sovereignty Deficit. International Journal of Constitutional Law, 19(1), 6. Jorgensen, R.F. (Eds.). (2019). Human Rights in the Age of Platforms. Cambridge: MIT Press. Kettemann, M. (2021), The Normative Order of the Internet. Oxford: Oxford University Press. Koskenniemi, M. (2013). Globalization and Sovereignty: Rethinking Legality, Legitimacy and Constitutionalism. International Journal of Constitutional Law, 11(3), 818–822. Retrieved from https://doi​.org​/10​.1093​/icon​/mot035 Luhmann, N. (1993). Das Recht der Gesellschaft (1. Aufl). Frankfurt am Main: Suhrkamp. Klabber J., Peters, A. & Ulfstein, G. (2009) The Constitutionalization of International Law. Oxford: Oxford University Press. Krisch, N. (2010). Beyond Constitutionalism: The Pluralist Structure of Postnational Law. Oxford: Oxford University Press. Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge: Harvard University Press. Pollicino, O. (2021). Judicial Protection of Fundamental Rights on the Internet: A Road towards Digital Constitutionalism. Oxford: Hart. Post, R. (2018). Data Privacy and Dignitary Privacy: Google Spain, the Right to Be Forgotten, and the Construction of the Public Sphere. Duke Law Journal, 67(5). Radu, R., Kettemann, M.C., Meyer, T. & Shahin, J. (2021). Normfare: Norm entrepreneurship in internet governance. Telecommunications Policy, 45(6), 102148. Retrieved from https://doi​.org​/10​ .1016​/j​.telpol​.2021​.102148 Redeker, D., Gill, L. & Gasser, U. (2018). 
Towards Digital Constitutionalism? Mapping Attempts to Craft an Internet Bill of Rights. International Communication Gazette, 80(4), 302–319. Retrieved from https://doi.org/10.1177/1748048518757121
Sajó, A. & Uitz, R. (2017). The Constitution of Freedom: An Introduction to Legal Constitutionalism. Oxford: Oxford University Press.
Sciulli, D. (1988). Foundations of Societal Constitutionalism: Principles from the Concepts of Communicative Action and Procedural Legality. The British Journal of Sociology, 39(3), 377. Retrieved from https://doi.org/10.2307/590484

Sciulli, D. (1991). Theory of Societal Constitutionalism: Foundations of a Non-Marxist Critical Theory (1st ed.). Cambridge: Cambridge University Press. Retrieved from https://doi.org/10.1017/CBO9780511570933
Shadmy, T. (2019). The New Social Contract: Facebook's Community and Our Rights. Boston University International Law Journal, 37, 51.
Sheffi, N. (2020). We Accept: The Constitution of Airbnb. Transnational Legal Theory, 11(4), 484–520. Retrieved from https://doi.org/10.1080/20414005.2020.1859254
Suzor, N. (2018). Digital Constitutionalism: Using the Rule of Law to Evaluate the Legitimacy of Governance by Platforms. Social Media + Society, 4(3). Retrieved from https://doi.org/10.1177/2056305118787812
Suzor, N. (2018). Lawless: The Secret Rules That Govern Our Digital Lives. Cambridge: Cambridge University Press.
Teubner, G. (1998). Legal Irritants: Good Faith in British Law or How Unifying Law Ends Up in New Divergences. The Modern Law Review, 61(1), 11–32.
Teubner, G. (2011). Self-Constitutionalizing TNCs? On the Linkage of "Private" and "Public" Corporate Codes of Conduct. Indiana Journal of Global Legal Studies, 18(2), 617. Retrieved from https://doi.org/10.2979/indjglolegstu.18.2.617
Teubner, G. (2012). Constitutional Fragments: Societal Constitutionalism and Globalization. Oxford: Oxford University Press.
Teubner, G. (2017). Societal Constitutionalism: Nine Variations on a Theme by David Sciulli. In P. Blokker & C. Thornhill (Eds.). Sociological Constitutionalism (1st ed., pp. 313–340). Cambridge: Cambridge University Press. Retrieved from https://doi.org/10.1017/9781316403808.010
Teubner, G. & Beckers, A. (2013). Expanding Constitutionalism. Indiana Journal of Global Legal Studies, 20(2), 523. Retrieved from https://doi.org/10.2979/indjglolegstu.20.2.523
Thierry Breton [@ThierryBreton]. (2022, October 28). @elonmusk In Europe, the Bird Will Fly by Our EU Rules. #DSA [Tweet]. Twitter. Retrieved from https://twitter.com/ThierryBreton/status/1585902196864045056
Tushnet, M. (2003). The Issue of State Action/Horizontal Effect in Comparative Constitutional Law. International Journal of Constitutional Law, 1(1), 79.
Walker, N. (2002). The Idea of Constitutional Pluralism. The Modern Law Review, 65(3), 317–359.
Walker, N. (2015). Intimations of Global Law. Oxford: Oxford University Press.
Walker, N. (2020). The Sovereignty Surplus. International Journal of Constitutional Law, 18(2), 370.
Wiener, A., Lang, A., Tully, J., Maduro, M. & Kumm, M. (2012). Global Constitutionalism: Human Rights, Democracy and the Rule of Law. Global Constitutionalism, 1(1), 1–15. Retrieved from https://doi.org/10.1017/S2045381711000098
Zalnieriute, M. (2019). From Human Rights Aspirations to Enforceable Obligations by Non-State Actors in the Digital Age: The Case of Internet Governance and ICANN. Yale Journal of Law and Technology, 21, 278. Retrieved from https://yjolt.org/human-rights-aspirations-enforceable-obligations-non-state-actors-digital-age-case-internet
Ziccardi-Capaldo, G. (2008). The Pillars of Global Law. Burlington: Ashgate.
Zuboff, S. (2018). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books.

20. The saga of copyrighted standards: a perspective on access to regulation
Olia Kanevskaia1

1. INTRODUCTION

Law should be free and accessible to everybody. This principle is entrenched in modern legal systems and forms a necessary element of a democratic system adherent to the rule of law. Yet, due to the emergence of new regulatory fields where governmental regulation is ineffective, or even impossible (Berman, 1983), rule-making is increasingly being delegated to private actors. The fact that private entities can produce binding rules is not unfamiliar to legal pluralists and functionalists, or to scholars studying global law or global and private governance (e.g., Berman, 2017, 2012; Wielsh, 2012; Stringham, 2015, among many others). Different strands of legal literature have studied the phenomena of private regulation, collaborative governance, expert-driven regulation, and regulation by "code" (e.g., Cafaggi, 2011; Alemanno, 2011; Sabel & Zeitlin, 2010; Lessig, 2009; Ansell, 2008). In one way or another, most studies in these fields at some point refer to standards – documents established by expert-driven committees and standards bodies. Some of these standards regulate new, emerging legal fields; some take over roles typically performed by States (De Londras, 2011). Examples range from the strong reliance on standards in the proposed EU Regulation on Artificial Intelligence (de Vries, Kanevskaia & de Jager, 2023) to the governance of robots by private standards in the absence of effective public policy (Villaronga, 2019) to mandating the choice of technology standards by law, as is the case with the EU Directive on the Common Charging Port.2 As these and many other documents largely shape the current normative space, new questions arise regarding their normative value and place in the global and national legal orders.

More often than not, access to or implementation of standards is hindered by intellectual property rights. One of the most pertinent issues frequently addressed by scholars and policymakers is that of patents that are essential for standards' implementation.3 In turn, the other salient issue in the realm of intellectual property – the copyright protection of standards – has not been discussed to the same extent. Being the property of the committees or bodies that created them, standards are under copyright protection, and can be accessed against payment of a fee or through obtaining a

1  I would like to thank Przemysław Pałka, Ewa Laskowska-Litak, Joe Tomlinson and five EURAS reviewers for their comments on an earlier draft; and Hans Micklitz and the Dutch Standardisation Forum for the very helpful discussions. All errors are mine.
2  Directive (EU) 2022/2380 of the European Parliament and of the Council of November 23, 2022, amending Directive 2014/53/EU on the harmonization of the laws of the Member States relating to the making available on the market of radio equipment, OJ L 315.
3  See Chapter 29 by Colangelo and Pierucci in this Research Handbook.


valid license. Only the bodies holding copyrights over the standards are allowed to distribute them.

The debate over copyrighted standards goes to the core of the legitimacy of standards as a form of rule-making beyond the state and raises both normative and practical issues. Standards are inherently private, while the law is inherently public. The problem arises when private standards are the only possible mode of regulation or, even more so, when their normative power is equated with the force of law: instinctive logic requires that those subject to laws have knowledge of these laws (Scalia, 1989), which is impossible without having access to them. This puzzle is difficult to solve due to the legal design of standardization systems and the commercial nature of standard-setting bodies.

This chapter provides a perspective on the legitimacy and accountability of private and expert-driven regulation by looking at these questions through the lens of public access to copyrighted standards that have become a part of law. It attempts to place this discussion in the framework of a highly digitalized society, where emerging regulatory fields require means of regulation other than the traditional, top-down approach. To that end, it observes how copyrighted standards are addressed in two major Western jurisdictions where the issue of access to standards has recently been raised before the courts: the European Union (EU) and the United States (US). After examining the legal frameworks and the courts' decisions, it attempts to offer a non-exhaustive list of options that could facilitate balancing public and private interests in standardization.

2. A BRIEF INTRODUCTION TO VOLUNTARY STANDARDS

Legal studies generally attribute different meanings to the word "standard," depending on the context and legal discipline in which it is applied.4 While this contribution is devoted to "technical standards," its purpose is not to delve into the semantics; rather, it aims to provide a definition of "standards" that allows understanding their place in the studied legal frameworks and appreciating the concerns that arise when it comes to balancing the public and commercial interests involved in standardization.

2.1 Conceptualizing Standards

The formal definition of a standard is introduced by the International Organization for Standardization (ISO) – the global standardization authority producing standards that are widely applicable and recognized by most States. Following the ISO's Guide 2:2004, standards are

[…] documents, established by consensus and approved by a recognized body, that provide for common and repeated use, rules, guidelines or characteristics for activities or their results, aimed at the achievement of the optimum degree of order in a given context.5

4  Examples that are more familiar to lawyers are perhaps the "standard of review" or "standard of proof" in litigation or the "standard of care" in tort law.
5  Section 3.2, ISO/IEC Guide 2, Standardization and Related Activities: General Vocabulary (2004), retrieved from https://isotc.iso.org/livelink/livelink?func=ll&objId=8389141&objAction=browse&sort=name

In other words, standards are "formalized sets of rules with a high degree of specificity" (Djelic & den Hond, 2014) that exist in almost every sector, ranging from environmental protection (e.g., ISO 14001) to clothing measurement (e.g., EN 13402) to safety requirements for heavy machinery (ISO 6085) to technological interoperability between devices and networks (e.g., Wireless Local Area Network specifications, or simply, Wi-Fi).

Lacking its own academic discipline (de Vries, 2015), standardization has been increasingly discussed in the fields of organizational management (e.g., see the seminal work of Knut Blind, Henk de Vries and Rudi Bekkers), political economy and political science (e.g., Graz, 2019; Büthe & Mattli, 2011; Egan, 2001), and to some extent also in law (e.g., Delimatsis, 2015; Stuurman, 1995). Especially when it comes to standards for (emerging) technologies, the legal scholarship is well developed in the fields of antitrust and patent law (e.g., see the seminal work of Lemley, Contreras and Petit). Recently, discussions around legitimacy and the democratic deficit in standardization have been gaining momentum, with scholars increasingly questioning the legal value and effects of (European) standards (e.g., Eliantonio & Cauffman, 2020; Senden, 2017) as well as the institutional aspects and processes of standardization (Kanevskaia, 2023; Wiegmann et al., 2017).

Indeed, the aforementioned ISO definition implies that standards come into being following a specific procedure: "established by consensus" and "approved by a recognized body." This suggests the importance of standards development processes for a document's classification as a "standard." Whereas consensus generally implies the "absence of opposition" (Werle, 2001)6 and arguably takes on a procedural dimension, the element of "recognized bodies" refers to the institutional dimension, suggesting that only a certain type of body develops material to be considered as a standard. From the normative perspective, then, it is important to distinguish between documents developed in "recognized" and "other" bodies.

Recognition can take place at different levels: some standards bodies, like ISO or the International Telecommunication Union (ITU), are recognized globally; others, like the European Committee for Standardization (CEN) and the European Telecommunications Standards Institute (ETSI), are recognized regionally, or nationally, like the Association Française de Normalisation (AFNOR), the Deutsches Institut für Normung (DIN) and, to a certain extent, the American National Standards Institute (ANSI).7 This recognition may occur either through legislation (e.g., Annex I of EU Regulation 1025/2012)8 or by means of another global organization (e.g., Article 2.4 of the Technical Barriers to Trade Agreement of the World Trade Organization).9 As explained further in this chapter, whether standards are developed by "recognized" bodies carries more legal weight in the EU than in the US framework.

However, standards bodies that may not qualify as "recognized" produce standards that are equally important. Especially in the field of Information and Communication Technologies (ICT) and the Internet, many standards are produced by consortia or special interest groups: examples range from Zigbee and Bluetooth connections to standards that pertain

6  Although each standards body can maintain its own definition of consensus.
7  Note that ANSI does not develop standards but mainly coordinates US standardization activities in the global standards institutions and establishes rules for American standards and standards developers; see infra n 18 and accompanying text.
8  Regulation 1025/2012 of the European Parliament and of the Council of October 25, 2012, on European standardization, OJ 2012 No. L316/12.
9  Agreement on Technical Barriers to Trade, 1868 UNTS 120.

to the Internet of Things produced by the Organization for the Advancement of Structured Information Standards (OASIS) and web protocols developed by the World Wide Web Consortium (W3C). Such informal bodies are typically sector-specific, as opposed to the broader scope of the "recognized" standards bodies. In the absence of formal recognition by States,10 standards produced by these bodies are typically legitimized through their established reputation among industry actors (Kanevskaia, 2023).

Irrespective of the type of standards body, standards development processes typically have two common features. Firstly, standards are developed by subject matter experts who are employed by research centers, governments or, more often, private companies. Even in an intergovernmental institution such as the ITU, the private sector remains largely present.11 Secondly, standards are voluntary; this voluntariness is what sets standards development apart from (democratic) law-making by parliaments as the people's representatives.

Yet, and perhaps precisely because of their difference from law, standards are crucial for many regulatory spaces. Standards reach where state regulation cannot due to the lack of specific knowledge or resources, presenting more effective alternatives to governmental regulation. Especially in emerging domains, such as Artificial Intelligence (AI), standards take up an important regulatory role.12 It is then not surprising that scholarship discussing legal arrangements outside the state's rule-making often refers to standards in the context of co- and self-regulation, transnational private regulation and private governance, emphasizing the role of private actors in different regulatory systems (Kingsbury, 2019; Cafaggi, 2011; Peters et al., 2009).

However, standards do not exist in isolation from legal structures. Their relationship with law, and their linkages with legal requirements, vary across jurisdictions and regulatory regimes. Different international, regional and national rules may apply to different types of standards and standards bodies. This chapter focuses on the two Western jurisdictions where standardization (case) law is the most advanced: the European Union and the United States.

2.2 Standards Embedment in the EU and US Legal Systems

In the European Union, technical standardization has played a crucial role in European harmonization and the achievement of the Internal Market (Schepel, 2013). Since the 1980s, the European Union has used the "New Approach" harmonization technique13 (later updated to the "New Legislative Framework"),14 encapsulated in the following formula: the European Commission (EC) produces Directives15 with essential requirements concerning, for example, health and safety,

10  Although some of these bodies that are located or incorporated in the United States are accredited by the ANSI.
11  See the list of ITU-T Sector Members, available at https://www.itu.int/hub/membership/our-members/directory/?myitu-industry=true&request=sector-members
12  See, for instance, the recent EU Proposal for an Artificial Intelligence Act (AIA) (COM/2021/206), actively using the "New Approach" technique, explained in infra n 13 and accompanying text.
13  Council Resolution of May 7, 1985, on a New Approach to Technical Harmonisation and Standards, OJ C 136/1, May 7, 1985.
14  For more information, see the European Commission, "New Legislative Framework," retrieved from https://single-market-economy.ec.europa.eu/single-market/goods/new-legislative-framework_en
15  Unlike a Regulation, an EU Directive is binding only as regards the results it aims to achieve but leaves Member States some regulatory space regarding the form and methods of achieving these results; see Article 288 Treaty on the Functioning of the EU (2012) ("TFEU").

and the three European standards bodies – CEN, the European Committee for Electrotechnical Standardization (CENELEC) and ETSI – produce "harmonized" European standards upon request from the EC.16 Once developed, harmonized standards are approved by the EC and references to them are published in the Official Journal of the EU (OJEU). Compliance with harmonized standards grants a presumption (but not a guarantee!) of compliance with the EU Directives.

The United States incorporates standards in its legal system using the "incorporation by reference" technique (Bremer, 2016). More specifically, US law requires Federal Agencies to refer to private voluntary standards when developing their legal acts.17 Unlike in the European Union, US legislation does not designate standards bodies to produce standards for incorporation by reference: as a result, there are hundreds of private organizations with predominantly corporate membership that may produce regulatory material (Bremer, 2016; Schepel, 2005). References to the incorporated standards are published in the Federal Register and the US Code of Federal Regulations (CFR). Many standards bodies are accredited by the ANSI as American Standards Developers, meaning that their processes comply with ANSI's essential requirements of due process;18 likewise, standards may also be recognized by the ANSI as American National Standards. While these endorsements arguably grant standards and standards bodies increased legitimacy, they are not mandatory for standards to be used by Federal Agencies: in other words, standards that are not endorsed as American National Standards or developed by American Standards Developers can still be incorporated by reference.

Against this backdrop, there are two common points in the EU and US legal approaches to technical standardization that are particularly relevant for the purpose of this contribution.19 First, in both jurisdictions, standards appear to become a part of the national (or, in the case of the European Union, also regional) legal rules, either through their incorporation by regulatory agencies or reference in the States' registers. Curiously, while incorporated standards gain binding force in the United States (Bremer, 2013), they in principle remain voluntary in the European Union, even though compliance with harmonized standards is by far the most preferred method of compliance with EU legislation (Schepel, 2013). Second, while effectively evolving into a part of law, such referenced or incorporated standards are not available to the public free of charge. Both EU and US standards bodies are private bodies, and the normative material they produce, including standards to be incorporated by the US Federal Agencies or referenced in the OJEU, belongs to these standards bodies' intellectual property, either in the form of trademarks (Contreras, 2018) or copyrights. Only the standards body that developed a particular standard may reproduce and distribute it. The legislative registers thus identify the regulatory material but do not reprint it (Bremer, 2013): it is the reference to a standard – and not its text – that is published in these registers. Those willing to access the contents of standards must purchase them for a fee. And while a technical document priced around €350 may seem reasonable for large

16  Article 10 Regulation 1025/2012.
17  5 U.S.C. 552(a)(1).
18  Sections 1.0 and 2.0 of ANSI Essential Requirements, available at https://www.ansi.org/american-national-standards/ans-introduction/essential-requirements
19  For an elaborate comparison between the two legal systems, see Bremer, 2016.

stakeholders,20 even when purchasing a large bulk of documents,21 for smaller stakeholders or civil society, accessing standards by this means is often costly (Mendelson, 2014).22 What is more, in the European Union, it is not CEN/CENELEC but their members, i.e., the national standards bodies or committees, that distribute or sell CEN/CENELEC standards.23

Stakeholders willing to consult incorporated or harmonized standards without purchasing them from standards bodies may access the text of a standard in a library (either at the Office of the Federal Register (OFR) in Washington, DC, or in the library of the Federal Agency incorporating those standards,24 or in one of the libraries of standards bodies in the EU countries),25 or sometimes through an online "read-only" tool, if available; however, these avenues are rather burdensome, not least because they often require payment for a library membership or (the organization of) the library visit.

20  See for instance the standard for toy safety (EN 71-1:2014), available for purchase from the British Standards Institute.
21  In 2016, the CFR contained nearly 9,500 "incorporations by reference" of standards (Mendelson, 2014); due to technical and technological developments and legislative updates in the past years, there is a good probability that this number is currently even larger.
22  It should also be noted that each standards body has its own pricing options, depending on the type of standard, the sector, or even the format (PDF or paper). For instance, NEN-EN 50600-1:2019 (harmonized standard for information technology – data centre facilities and infrastructure) is priced at €72, while NEN-EN-IEC 62443-2-4:2019 (harmonized standard for security for industrial automation and control systems) is priced at €394. The API STD 650 standard for welded tanks for oil storage, developed by the American Petroleum Institute (API), is priced at $612.
23  See https://www.cencenelec.eu/european-standardization/european-standards/obtaining-european-standards/
24  The availability of these standards in the OFR is required by the Federal law.
25  In fact, the EU copyright law allows an exception for the benefit of public libraries; see Article 5(2)–(3) of Directive 2001/29/EC of the European Parliament and of the Council of May 22, 2001, on the harmonization of certain aspects of copyright and related rights in the information society, OJ L 167 ("Infosoc Directive").

3. STANDARDS AS "COPYRIGHTED LAWS"

In the European Union and the United States, laws, judicial opinions and works of governmental officials are not eligible for copyright protection.26 The content of a rule that results in a binding legal obligation should be accessible to everyone:27 otherwise, how would you know which rule to abide by? At the same time, standards are not akin to traditional legislation since they are produced by private bodies whose members and largest financial contributors are predominantly commercial companies; from this perspective, the standards' ownership

26  In the United States, this goes back to Wheaton v. Peters, 33 US (8 Pet.) 591 (1834); in the European Union, governmental officials' works are exempted from copyright in the national legislation of most Member States (e.g., § 5 Urheberrechtsgesetz, BGBl. I S. 1858, June 23, 2021 (German Copyright Law), and Article 11 Auteurswet, July 1, 2015, BWBR0001886 (Dutch Copyright Law)). Note that such exceptions in national copyright law are authorized by Article 5 Infosoc Directive. In the United Kingdom, the copyright of official works is held by the UK government ("crown copyright"), but the works are made available under the Open Government Licence; see Section 163 Copyright, Designs and Patents Act 1988 c. 48.
27  See Georgia v. Public.Resource.Org, Inc., 140 S. Ct. 1498, 1507 (2020) (citing Banks v. Manchester, 128 US 244, 253–254 (1888)).

by these bodies and their members, and their right to exclude others from distributing their standards, is understandable.28 This creates a rather perplexing situation in which commercial players produce normative material that must be complied with at the risk of a penalty (Sweeney, 2017), but over which they hold exclusive rights. Accordingly, there is a need to find a balance between access to semi-legal material on the one hand and ensuring property rights on the other.

3.1 Reasonable Availability of Standards

Standards bodies do not exist in isolation from international and national legal frameworks and must observe a broadly formulated set of requirements related to their governance or their standards (Kanevskaia, 2023). In the European Union, the obligations for standards bodies are codified in Regulation 1025/2012 and are generally related to the inclusiveness and transparency of standardization processes. Another set of requirements applies to ICT standards and technical specifications that are not harmonized standards and can be selected by the European Commission for its public procurement purposes: to be used by the Commission, such standards, among other things, should be publicly available on "reasonable terms," meaning either for a reasonable fee (without any explanation or threshold for what is "reasonable") or free of charge.29 Curiously, a parallel requirement of "availability" does not exist for harmonized standards, even though they have a greater normative proximity to legal obligations than ICT standards used for procurement purposes.

In the United States, Federal Agencies are required to ensure that the private standards they incorporate are "reasonably available" and "usable."30 The latter can be determined by considering "the completeness and ease of handling the public" and "whether the publication is bound, numbered and organized."31 Again, the requirement of "reasonable availability" is not explicitly defined; nor does the US Freedom of Information Act (FOIA) provide when material incorporated by reference is reasonably available.32 The availability of incorporated material was, however, questioned multiple times during the most recent amendments to the US Federal standardization policy,33 leading the Office of Management and Budget (OMB) to clarify the factors to be weighed by the Federal Agencies when determining whether a standard is reasonably available: the accessibility of a read-only version during the commenting processes when a standard is drafted, the costs associated with access to an incorporated standard, the degree to which such access is required to achieve the Agency's policy goals, and

28  Note that in EU copyright law, creations of both public and private entities are protected, but a public entity is more likely to be exempted from this legal protection.
29  Annex II, 4(b), Regulation 1025/2012.
30  1 CFR § 51.7(3). See also Section 5(f) of OMB Circular No. A-119, Federal Participation in the Development and Use of Voluntary Consensus Standards and in Conformity Assessment Activities, 81 Fed. Reg. 4673 (2016).
31  Ibid.
32  See 5 USC §552(a)(1) of the US Freedom of Information Act (FOIA), as amended by Public Law No. 104–231, 110 Stat. 3048. The statutory obligation of the OFR on reasonable availability is balanced with US copyright law, US international trade obligations, and agencies' ability to substantively regulate under their authorizing statutes; see 79 Fed. Reg. 66267.
33  See comments collected during the revision process, retrieved from https://www.regulations.gov/document?D=OMB-2012-0003-0001

the availability of a summary explaining the content of a standard to those lacking relevant technical expertise.

Interestingly, while both jurisdictions acknowledge the availability of standards referenced in the law as an important issue, their understandings of (reasonable) availability do not stretch to the matter of copyright protection. The logic here may be that it is the authorities that identify and incorporate or reference standards, be it the US Federal Agencies or the European Commission, that are supposed to ensure that standards are reasonably available; yet, the procedural and substantive obligations of the legal frameworks are addressed to standards bodies, and it is the standards bodies that hold copyrights over incorporated or referenced standards. The current statutory interpretation of "reasonable availability" – or rather, the lack of a definition of "reasonable" – is thus toothless for ensuring access to normative material that has legal effects.

3.2 Case Law on Legal Value and Copyright Protection of Private Standards

With the legal framework not apt for free access to standards, it was only a matter of time before the courts were invited to rule on the legality of standards' copyright. In the European Union, landmark decisions have been taken both by the European Court of Justice (ECJ) and the national courts of some Member States. For instance, the German Bundesgerichtshof found that standards referenced in national legislative acts indeed lose copyright protection once these standards have entered the public domain,34 whereas the Gerechtshof and later the Hoge Raad in the Netherlands reasoned that since a standard was not published in the Dutch official journal following the proper procedure, it was not to be considered legally binding and hence the Dutch standards body could claim copyright over the standard's document (Van Gestel & Micklitz, 2013).35 While these two decisions were based on different factual elements, they also illustrate the different legal and procedural requirements across EU Member States that give standards legal force.

For a long time, the line of ECJ decisions related to the legal value of harmonized standards, but not per se to their copyright. The trend of examining the legal effects of private standards and certification bodies started already in Commission v Belgium,36 and the "juridification" of harmonized standards and conformity assessment decisions continued throughout the New Legislative Framework with such cases as EMC Development37 and Fra.bo,38 the latter giving rise to the so-called "horizontal direct effect" of private bodies on the European rules on the free movement of goods.39 In its landmark James Elliott judgment, the ECJ held that harmonized standards are measures implementing EU law and therefore should also be considered a part of EU law,40 restating however that such standards remain voluntary,41 and sparking off critical

34  Bundesgerichtshof [BGH] (June 30, 1983), GRUR 1984. Later upheld by Bundesverfassungsgericht [BVerfGE] (July 29, 1998), ZUM 1998.
35  Hoge Raad (June 22, 2012), LJN: BW0393.
36  Case C-227/06, Commission v. Belgium [2008] ECR I-00046.
37  Case T-432/05, EMC Development AB v. European Commission [2010] ECR II-01629.
38  Case C-171/11, Fra.bo SpA v. Deutsche Vereinigung des Gas- und Wasserfaches.
39  Before Fra.bo, direct effect was understood to be vertical, meaning that only governmental bodies could restrict the free movement of goods and hence breach Article 34 TFEU.
40  See Case C-613/14, James Elliott Construction Ltd v. Irish Asphalt Ltd [2016], para 40.
41  Ibid., para 53.

338  Research handbook on law and technology reactions among the European Commission, standards bodies and academics, that related among others to the constitutional and commercial character of the reference to harmonized standards (Lundqvist, 2017), the ECJ’s shortcomings to clarify the relationship between the European standards bodies and the European Commission (Volpato, 2017); and the practical implications of the judgment for European standardization.42 It is somewhat surprising that until recently, none of the ECJ’s reasonings on the legal value of standards included explicit considerations of the validity of copyright claims over harmonized standards, despite the evident link between the two issues. The court finally had to render the decision on standards’ access in PRO v Commission, where it held that harmonized standards are in fact not a part of EU legislation even if they are a part of the EU law, and hence standards bodies are under no obligation to make them available free of charge.43 Notably, the court noted that CEN is not a public authority and “is not performing public functions which are not subject to any commercial interests.”44 It is difficult to follow how the ECJ’s decision in PRO v Commission squares with its earlier reasoning in James Elliott: the Court seems to pull harmonized standards into the public domain, yet still allowing them copyright protection. In its earlier copyright decisions, the ECJ addressed the potential exclusion from copyright on the public interest grounds,45 yet in PRO v Commission, the commercial nature of the standards body seemed to have prevailed over public interest46 (unlike in the reasoning of the Bundesverfassungsgericht). In the author’s view, a possible way to explain this paradox may be that the James Elliott decision intended to say that it is the reference to a harmonized standard published in the OJEU, and not the standard itself, that becomes a part of EU law. Such reasoning finds confirmation in Stichting Rookpreventie Jeugd, in which the Court held that ISO standards are only binding in the European Union if they have been published in the OJEU,47 thereby confirming the legal importance to the (acts of) references. This would also make sense from the institutional viewpoint since it is the European Commission that decides to publish references to harmonized standards in the OJEU. In the United States, litigation on the question of copyrighted standards started to develop as early as the 1980s. In BOCA,48 CCCC49 and Practice Management,50 the courts have been quite ambiguous in their reasoning on balancing access to standards incorporated in legal acts with standards bodies’ commercial interests (Cunningham, 2005). The landmark decision came in 2001 in Veeck, where the US Court of Appeals found that the model code incorporated by reference fell outside the scope of copyright protection.51 The most recent decision 42  See CEN/CENELEC position on the consequences of the judgment of the European Court of Justice on James Elliott Construction Limited v Asphalt Limited. 2017. 43  Case T-185/19 Public.Resource.Org and Right to Know v Commission [2021], paras 102 and 118. 44  Case T-185/19 Public.Resource.Org and Right to Know v Commission [2021], para. 73. 45  Case C-310/17, Levola Hengelo BV. v. Smidle Food BV [2018]. 46  Case T-185/19 Public.Resource.Org and Right to Know v. Commission [2021], para 102. 47  Case C-160/20 Stichting Rookpreventie Jeugd and Others v. 
Staatssecretaris van Volksgezondheid, Welzijn en Sport [2022], para 48.
48  Bldg. Officials and Code Admin. v. Code Tech., Inc., 628 F 2d 730 (1st Cir. 1980).
49  CCC Info. Servs. Inc. v. MacLean Hunter Mkt. Rep., Inc., 44 F 3d 61 (2d Cir. 1994).
50  Practice Mgmt. Info. Corp. v. Am. Med. Ass'n, 121 F 3d 516 (9th Cir. 1997), amended by 133 F 3d 1140 (9th Cir. 1998).
51  Veeck v. S. Bldg. Code Cong. Int'l, Inc., 293 F 3d 791 (5th Cir. 2002) (en banc), cert. denied 539 US 969 (2003) ("Veeck v. SBCCI").

ASTM v PRO, revolved around four US-based standards bodies that had obtained permanent injunctions against any unauthorized use of their standards by PRO, which posted their standards online.52 The DC District Court made recourse to the "fair use" doctrine53 and decided in the respondent's favor.54 On appeal, PRO questioned the validity of the standards bodies' copyrights, recalling that some US governmental employees had participated in the drafting of the standards at issue and that the work of the US government cannot be copyrighted. In the author's opinion, however, this argument is flawed, since governmental officials typically participate in standards development processes on an equal footing with the representatives of private companies: the mere participation of public sector employees in standards development processes thus does not automatically make a standard public.

It appears that there is a lack of convergence in the EU and US case law on access to standards incorporated or referenced in law; even more, courts within the Member States, and the ECJ itself, seem to adopt different approaches to cases that arguably bear many similarities. What the courts do seem to agree on is that referenced and incorporated standards are developed by private bodies that do not exercise any governmental functions: access to law thus becomes conditional upon access to documents held by commercial stakeholders.

3.3 "Monopolization" of the Law by Private Standards Bodies?

Following the legislative frameworks and the reasoning of the courts, it is mainly the private nature of standards bodies that allows copyright protection of their standards to exist, even when these standards are used to demonstrate compliance with legal obligations. In one of the very few cases where standards bodies were found to exercise public functions, the ECJ found the Institut Belge de Normalisation in Commission v Belgium to be a public body operating under the supervision of a Ministry. While national standards bodies of the Member States may indeed be parts of governmental bodies and agencies (Schepel, 2005), the European standards bodies producing harmonized standards are inherently private. This was also confirmed in PRO v Commission, where the court emphasized the importance of CEN's commercial interests, which would be affected if CEN standards were placed into the public domain. Both the ECJ and the Commission are also consistent in subjecting European standards bodies to competition law by suggesting that these bodies, or their members, are involved in an economic activity.55

Curiously, the US standardization system is claimed not to have been designed for the current copyright disputes, since it was assumed that standards bodies would not claim copyrights over their standards (Sweeney, 2017). Akin to the cases in the European Union, arguments about standards bodies' commercial interests have been brought forward in US litigation, for instance in Veeck and ASTM v PRO, with the court in Veeck ruling against the standpoint that lifting the copyright of standards would result in a loss of revenues for the standards body. At the same time, the US legislation is straightforward in not assigning public

52  American Society for Testing & Materials v. Public.Resource.org, Inc. (ASTM), No. 1:13-cv-01215 (TSC), 2017 WL 473822 (DDC February 2, 2017), amended by No. 17-7035 (D.C. Cir. 2018) ("ASTM v. PRO").
53  See Section 4.2.1 below.
54  At the moment of writing, the case is on appeal.
55  See Case T-155/04, SELEX Sistemi Integrati SpA v. Commission of the European Communities [2006]; and Case T-432/05, EMC Development AB v. European Commission.

functions to standards bodies: these functions are held by the regulatory agencies incorporating those standards, but the agencies are not obliged to make these standards available.56 Moreover, the publication in the Federal Register of a document containing an incorporation by reference does equate to an approval of the incorporation by the Agency's Director, which fuels the question of accountability.

4. THE POSSIBLE SOLUTIONS TO THE COPYRIGHT PROBLEMS – AND WHY NONE WORKS

The purposes of copyright law differ between the United States and the European Union: while under the former, copyrights incentivize future authors and creators by granting them temporary exclusive rights over their works,57 the EU copyright doctrine protects the author's property that results from their intellectual creation:58 hence, the EU approach to copyright may be viewed as author-oriented, as opposed to the more society-oriented approach in the United States (Ginsburg, 1990). In the case of copyrighted standards, however, both purposes clash with the "new" function that standards acquire due to incorporation or reference. Neither EU nor US copyright law provides for an exemption for this specific case, making the issue of access to standards even more peculiar. One may even question the extent to which a standard is more than merely an idea or a method and hence whether it should enjoy copyright protection in the first place;59 or whether standards bodies should be allowed to claim copyright over the work performed by individuals in standardization committees.60

Over the last few years, many scholars have attempted to find solutions to the continuing legal puzzle of access to copyrighted standards. This strand of literature is well developed in the United States, in part due to the established case law, but is increasingly gaining momentum also in the European Union. Admittedly, the scope of this chapter is too limited to pay due attention to all suggested avenues and to cover in detail the legal technicalities of each solution; instead, this section intends to provide a non-exhaustive "menu of options" – a general overview of the possible options previously discussed in the literature, while also sketching their drawbacks. Finally, it turns to what the author believes is the core of the problem – the private, commercial nature of standards bodies and the (lack of) formal responsibility of governmental institutions to ensure access to the standards they reference – and ponders possible ways to address it, considering the roles of different stakeholders.

56  See generally 5 USC §552.
57  Fox Film Corp. v. Doyal, 286 US 123, 127–128 (1932).
58  Ginsburg (1990) demonstrates this using the example of French law.
59  This rationale stems from the fact that copyright protects the expression of an idea and not the idea, method or process itself; see also Jones, 1990. The scope of this chapter does not provide space to engage further with this question, although some US-based authors have touched upon it in relation to the merger doctrine (see in particular Section 4.2.2).
60  While intriguing, these questions fall outside the scope of this chapter but present interesting studies for further research under EU and US copyright law.

4.1 Abolishing Copyright over Incorporated and Harmonized Standards

The most obvious solution to ensure access to standards that have become a part of legal acts is to abolish the copyright over standards altogether and put them into the public domain, allowing their unlimited distribution. This way, those interested in a standard's content would be able to consult the whole document without incurring any costs, as is the case with legal acts and court decisions.

While this may seem like an easy fix, the reality is more complex. By far not all standards produced by standards bodies make it into "law": some technical documents thus do not acquire the "new public functions" to begin with. In theory, it should be possible to abolish the copyright only for the referenced standards; however, it is not always clear early in the standards development process whether a standard is being prepared with a view to becoming a part of law. Arguably, this is less of an issue in the European Union, where the Commission can mandate harmonized standards; however, international standards, standards of other private bodies or even proprietary standards that sit behind a paywall and enjoy copyright protection can also be transposed into harmonized standards; moreover, standards incorporated or referenced in law may often refer to other international, European or national standards, which in turn also enjoy copyright protection. Navigating these competing copyright claims is a challenging task. For instance, it would mean that a standards body should revoke its copyright the moment it realizes that the standard will be incorporated: still, it makes a huge difference at which stage of development a standard is, and whether the drafters would still be willing to contribute to standardization processes knowing that their product will end up in the public domain. At the same time, lifting the copyright ex post, after the standard has been developed and identified for incorporation, may be unfair to those who have already purchased the standards document.61

Eliminating copyright protection and mandating free access for all incorporated and referenced standards thus results in more questions than answers. It may therefore be reasonable to seek solutions by providing exemptions for "public access" to standards, rather than stripping standards bodies of their copyrights.

4.2 Copyright Exemptions for Incorporated Standards: The US Approach

US scholars have examined different types of exemptions that could apply to standards that are incorporated by reference. Under these exemptions, placing standards in the public domain would not infringe the intellectual property of standards bodies. While such exemptions may indeed be useful to secure access to standards that become parts of law, it is worth noting from the outset that they do not offer a "one-size-fits-all" solution, since their application depends on the factual and legal situation and should be assessed on a case-by-case basis.

61  See, by analogy, Berger (2011), noting that the problems with copyright are twofold: the management of IP during standards development and the creation of an exception once the standard is incorporated.

4.2.1 Fair use doctrine

The "fair use" doctrine allows re-using parts of copyright-protected works without the owner's permission under certain circumstances, e.g., criticism, news reporting or research,62 and has been suggested as a possible avenue for making incorporated standards accessible to the wider public (Bremer, 2019; Sweeney, 2017). Section 107 of the US Copyright Act provides four non-exhaustive, cumulative factors to be considered when determining whether a use is a "fair use," namely: the purpose and character of the use, including commercial considerations; the nature of the copyrighted work; the amount and substantiality of the part of the work used in relation to the whole copyrighted work; and the effect of the use upon the potential market for or value of the copyrighted work.

While certainly not every governmental use of standards would qualify as a "fair use,"63 and while it is only the use of some portions of standards documents that can benefit from Section 107 of the Copyright Act, one may see how a standard that has become a part of law and is made available to the public without any commercial motives may be covered by the fair use doctrine. At the same time, whereas the doctrine protects the use of incorporated standards, it is ill-suited to protecting those who seek to spread information (Sweeney, 2017). In the recent ASTM v PRO appeal, PRO claimed a fair use defense since it believed itself to have provided free access to law, served statutory and non-commercial purposes, and aimed to accomplish a transformative purpose.64 The DC Circuit Court declined to address the issue directly on the grounds of constitutional avoidance, but held that the online publication of standards by PRO, while it may be a fair use, should be further evaluated by the District Court on a case-by-case, or "standard-by-standard," basis.

It should be noted that "fair use" is predominantly a US doctrine; as such, there is no "fair use" equivalent in EU copyright law, which instead provides an explicit list of exceptions and limitations and leaves the implementation of some of those exemptions at the discretion of the Member States.65

4.2.2 Merger doctrine

Another option invoked by US scholars is the merger doctrine in copyright law. This doctrine applies in cases where there is only a very small number of ways to express an idea, and the expression of the idea loses its copyright protection since it becomes indissociable from the idea itself (Samuelson, 2016). The merger doctrine is typically used in software disputes and cases involving computer programs. The rationale is that since copyright law protects the expression of an idea, a specific wording or expression would have to be the only way to express a particular law (Sweeney, 2017). In the case of standards, for the merger doctrine to apply, the wording of the standard should be the only way in which the fact, idea, or law can be expressed. The merger doctrine was successfully invoked in Veeck, where the Court considered the copyrighted code a "fact" or an "idea" that could not be expressed in any other way.

62  17 U.S.C. 107.
63  Whether Government Reproduction of Copyrighted Materials is a Non-infringing "Fair Use," 23 Op. O.L.C. 87, 104 (1999).
64  See the D.C. District Court judgment in ASTM v. PRO, 2017 WL 473822, para 11.
65  See Article 5 Infosoc Directive.


In practice, and especially in the technology sector, standards often compete,66 which already indicates that they may not be the only way an idea or a fact (or a law) can be expressed.67 Moreover, multiple standards may address problems that are virtually identical.68 Indeed, the merger doctrine can still be used as a copyright defense when there are several ways an idea can be expressed, as long as this number is limited (Samuelson, 2015): but with standards being constantly updated to keep pace with scientific development, it is very difficult, if not impossible, to identify the exact number of ways in which the ideas or facts conveyed in those standards are expressed: after all, only one expression will make it into law, but that does not mean the others are less valid from the technical perspective.

As is the case with "fair use," the merger doctrine is predominantly applied in the United States. In the European Union, the exclusion of ideas from protection largely applies to computer programs,69 and the ECJ also tends to invoke considerations similar to the merger doctrine in software cases.70 Furthermore, the application of the merger doctrine is clearer in the United States, where standards incorporated by reference actually become mandatory; in the European Union, harmonized standards remain voluntary, and are but one option, albeit the preferred one, to demonstrate compliance with legal requirements.

4.2.3 Tolerated use

When discussing copyrighted standards, some scholars make recourse to "tolerated use" (Berger, 2011). Tolerated use implies that the copyright owner is made aware of a frequent or mass infringement of their work but chooses not to enforce their rights,71 and this mostly applies to mass-produced, low-transaction-cost works. However, the multiple lawsuits in which standards bodies have objected to their standards being placed in the public domain or used by other parties without obtaining a proper license indicate that tolerated use may not be realistic in the realm of standardization. The exception may be the open license, or a Creative Commons license, a solution typically applied in the European Union for the "open standards" to be used by governmental bodies,72 whereby standards bodies would allow the distribution of their work under certain conditions of the license, provided that they retain ownership of the standards. Most standards referenced in law are, however, not "open standards," but products of multiple stakeholders with vested interests and sunk investment costs in standardization processes: any form of tolerated use or open licensing may thus be difficult to justify due to the funding model of standards bodies.

66  For instance, there were two alternatives to the global 3G standard and one (LTE) to the 4G.
67  Sweeney confirms this in relation to the US construction field, suggesting that a standard is one of the infinite ways to express a model code and that there is no public interest in merging the expression with the idea (Sweeney, 2017). Note, however, that this may not be the case in every jurisdiction: in the Netherlands, some construction standards of the Dutch Standards Body (NEN) are mandated by law.
68  For instance, both ETSI and CEN/CENELEC develop parallel cybersecurity standards (thanks to Irene Kamara for pointing this out to me).
69  Directive 2009/24/EC of the European Parliament and of the Council of April 23, 2009, on the legal protection of computer programs, OJ L 111.
70  For further analysis, see Inguanez, 2020.
71  See also Wu, 2007.
72  See Article 5 of Directive 2019/1024 of the European Parliament and of the Council of June 20, 2019, on open data and the re-use of public sector information, OJ L 172.

4.3 Addressing the Copyright Issues through Standards Bodies: A (Potential) EU Approach

Due to the inherent differences between US and EU copyright laws, the aforementioned solutions would mainly fit the US legal system and can be "borrowed" by the European Union only to a certain extent. As discussed in Section 2, one of the similarities between the US and the EU standardization systems is the private, commercial nature of standards bodies, confirmed by the courts in many instances. This may suggest that, to solve the copyright conundrum, one should look not at the functions of standards, but rather at the functions of the bodies producing them.

4.3.1 "Unbundling" the public and private functions of standards bodies

Recall that the number of bodies capable of producing standards that are referred to by law is more limited in the European Union than in the United States; in fact, only standards created by CEN, CENELEC, and ETSI can be referenced in the OJEU, and only CEN and CENELEC keep their standards behind a paywall. The ECJ has multiple times restated the private, commercial nature of the European standards bodies. Indeed, these bodies are largely funded by their members,73 mostly private companies acting through the national standards bodies, and the development of harmonized standards is just a fraction of their activities. In this regard, a practical solution may be found in "unbundling" the public and commercial functions of the European standards bodies, and accepting that, when producing harmonized standards, these bodies exercise a public function to which different types of responsibilities and obligations apply than to the production of other types of documents. Such an unbundling exercise would most probably be performed by the court, which would balance the economic interests of standards bodies with the concern of public access. Having the development of harmonized standards fall within the public function of standards bodies would remove the commercial claims related to this type of standard.74 Naturally, this solution should also come with modifications of the funding mechanisms for standards bodies, possibly requiring a differentiation between "public" financing, provided by the Federal government or the European Commission and used for harmonized standards, and "private" financing, paid by commercial stakeholders.

The ECJ has previously found that some standard-setters could indeed perform public tasks, albeit these cases did not concern the European standards bodies. In SELEX, for instance, the General Court held that when adopting standards that become binding on all Member States, Eurocontrol, a European organization for civil and military aviation, exercised a legislative function; on appeal, the Court confirmed that the production of standards could not be separated from this public task.75 However, on more recent occasions involving platforms, the ECJ was not keen on unbundling different functions.76 It is thus a question which path the Court will take should such a test case arise before it, and whether it will be willing to

73  However, it should be noted that standards development, administration, and other related activities of the ESOs are also co-financed by the European Union; Articles 15 and 16 Regulation 1025/2012.
74  That said, non-commercial entities can still claim copyright, see n 28.
75  It should be pointed out, however, that Eurocontrol's members were Member States, while in the ESOs it is ultimately companies that are members, either through direct membership (ETSI) or through the national standards bodies.
76  See Case C-434/15, Asociación Profesional Élite Taxi v. Uber Systems Spain SL [2017]; and Case C-390/18, Airbnb Ireland [2019].

4.3.2 Pricing options set by standards bodies

Another way to address the copyright problem through standards bodies may be to introduce different pricing options for standards. For instance, standards bodies may differentiate between commercial stakeholders and civil society, adjusting their pricing accordingly: in fact, some standards bodies have been charging different prices for their membership according to stakeholders' categories and revenues.77 In this regard, a "mandatory pricing differentiation" for the sale of standards can be mandated by law as part of a "reasonable availability" test, which currently lacks a clear definition in Regulation 1025/2012.78 This option addresses, at least in part, the practical problem of access to standards by non-commercial stakeholders while still allowing standards bodies to maintain their commercial nature and generate income from standards sales. However, the implementation of different pricing mechanisms faces some challenges. Firstly, for this option to work, there is a need to provide a clear stakeholder categorization as well as to set clear thresholds for the prices to be charged to each type of stakeholder: yet such categorizations and thresholds may differ significantly per standards body, let alone across Member States' laws. Secondly, and from the normative viewpoint, the "law" remains behind the paywall.

4.4 Responsibility of Referencing Agencies: A (Possible) US and EU Approach?

A second common feature of the EU and US standardization systems is that for standards to acquire legal force, they require an act of incorporation, or an act of publication of reference in the OJEU. Hence, a possible solution that may be relevant for both systems may lie within the very act of reference and the regulatory body responsible for it. While focusing on the act of reference is likely to be presented as a justification not to provide public access to standards (since references are publicly available), it may also serve as an invitation to the agencies and the European Commission to provide standards free of charge once requested. From this vantage point, the responsibility over access to standards lies not with standards bodies, but with the public bodies identifying and incorporating these standards. The Federal Agencies and the European Commission then act as a sort of "intermediary" between the standards bodies and the public. That said, standards bodies remain legitimate copyright holders over their standards, and still require funding for their activities related to the development of incorporated or harmonized standards. The question is then how the regulatory agencies can put standards into the public domain, or provide them upon request, while also ensuring that standards bodies are fairly compensated. Arguably, this question can also be addressed by adjusting the budget and financing mechanisms of standards bodies. Furthermore, such a practice would lead to another problem already highlighted above, which relates to cross-referencing and citing (parts of) standards developed by other standards bodies and to ensuring that the rights in these remain enforceable.

77  See ETSI, "Calculation of Contributions," retrieved from https://www.etsi.org/membership/dues
78  I am grateful to an anonymous reviewer for this suggestion.


5. CONCLUDING REMARKS

Standards as a form of expert-driven rule-making are becoming increasingly important for the research agenda of legal scholars. Especially for those working in the field of law and technology, where increased digitalization and the rapid pace of technological development risk producing unclear and ambiguous rules and where regulation tends to move from public bodies to expert-driven communities, standardization presents an interesting and much-needed field of study. The phenomenon of blurring the borders between private standards and laws raises important concerns about the legitimacy and accountability of private regulatory regimes; when left unaddressed, these concerns pave the way to inserting commercial strategies into the law. The issue of copyrighted standards referenced in law is but one such concern, one that clearly demonstrates the continued need to balance private and public interests in the modern regulatory landscape: the private, voluntary nature of standards tilts this balance toward the former, while the role of standards as an emerging form of regulation can no longer be ignored.

While the courts in the European Union and United States have addressed this issue on a number of occasions, they have not provided a solution: in fact, they have sometimes made the complex legal puzzle behind copyrighted standards even more confusing, not least by isolating the issue of access to standards from their legal value and linking the former to the commercial nature of standards bodies. Indeed, standards bodies are not law-making institutions: the job of experts in standards committees is to make standards, while the job of judges and parliamentarians is to make laws. This does not take away the fact that when compliance with the law becomes conditional upon compliance with standards, the organizations developing these standards acquire public functions.

This contribution discussed different possible solutions to the problem of copyrighted standards that may be adopted in the United States or the European Union, noting that the application of each of these solutions would ultimately depend on national legislation as well as specific exemptions that should be assessed on a case-by-case basis. As the law currently stands, the only way to consult the content of incorporated or harmonized standards is to visit the libraries of standards bodies and Federal Agencies. While this may suggest that public libraries should gain a larger role in providing access to law, in the modern digitalized society, reliance on traditional book repositories seems at least controversial. In this regard, the author suggests that regulatory agencies referencing or incorporating standards should be responsible for ensuring that these standards are accessible. While this can be done by adjusting the funding mechanisms of standards bodies, such modifications will inevitably run into further challenges, including ensuring fair compensation and cross-referencing standards developed by other bodies. In this regard, further research is needed to examine in depth the application of national copyright laws to standards, including the problems of originality and the applicability of copyright exceptions. It is also worth exploring some other options that may serve as an inspiration for making standards available to the public, including the possibilities of providing open-access licenses or concluding workshop agreements.79

This contribution only discussed the copyright of standards referenced in law.
However, a comparable issue of access arises for those standards that become binding in their effect in the absence of other applicable legal rules or due to market functioning mechanisms (Schepel, 2005; Wielsch, 2012). This raises even deeper concerns about accountability and democratic values and potentially stretches the current understanding of the concept of "law." These and many other issues that pertain to standards and standardization prove once again that our legal systems are not yet fit to accommodate the increasing demand for private regulation on the one hand, and the demand for democratic legitimacy on the other, and that a careful balance should be sought which would cater to the needs of private regulatory actors as well as public interests.

79  See CEN, "CEN Workshop Agreement," retrieved from https://boss.cen.eu/developingdeliverables/cwa/pages/

BIBLIOGRAPHY

Alemanno, A. (Ed.). (2011). Governing disasters: The challenges of emergency risk regulation. Cheltenham: Edward Elgar Publishing.
Ansell, C. & Gash, A. (2008). Collaborative governance in theory and practice. Journal of Public Administration Research and Theory, 18(4), 543–571.
Berger, T. (2011). Copyright in standards: Open or shut case. Copyright Reporter, 29(3), 106.
Berman, H.J. (1983). Law and revolution: The formation of the western legal tradition. Cambridge: Harvard University Press.
Berman, P.S. (Ed.). (2017). The globalization of international law. London: Routledge.
Berman, P.S. (2012). Global legal pluralism: A jurisprudence of law beyond borders. Cambridge: Cambridge University Press.
Bremer, E.S. (2013). Incorporation by reference in an open-government age. Harvard Journal of Law & Public Policy, 36, 131.
Bremer, E.S. (2016). American and European perspectives on private standards in public law. Tulane Law Review, 91, 325.
Bremer, E.S. (2019). Technical standards meet administrative law: A teaching guide on incorporation by reference. Administrative Law Review, 71, 315.
Büthe, T. & Mattli, W. (2011). The new global rulers: The privatization of regulation in the world economy. Princeton: Princeton University Press.
Cafaggi, F. (2011). New foundations of transnational private regulation. Journal of Law and Society, 38, 20–49.
Contreras, J. (2018). Trademarks, certification marks and technical standards. In J.L. Contreras (Ed.), Cambridge handbook of technical standardization law (Vol. 2, pp. 205–230). Cambridge: Cambridge University Press.
Cunningham, L.A. (2005). Private standards in public law: Copyright, lawmaking and the case of accounting. Michigan Law Review, 104, 291–344.
Delimatsis, P. (Ed.). (2015). The law, economics and politics of international standardization. Cambridge: Cambridge University Press.
De Londras, F. (2011). Privatized sovereign performance: Regulating in the 'gap' between security and rights? Journal of Law and Society, 38(1), 96–118.
De Vries, H. (2015). Standardisation: A developing field of research. In P. Delimatsis (Ed.), The law, economics and politics of international standardization (pp. 19–41). Cambridge: Cambridge University Press.
Djelic, M.L. & Den Hond, F. (2014). Introduction: Multiplicity and plurality in the world of standards: Symposium on multiplicity and plurality in the world of standards. Business and Politics, 16(1), 67–77.
Egan, M. (2001). Constructing a European market: Standards, regulation, and governance. Oxford: Oxford University Press.
Eliantonio, M. & Cauffman, C. (Eds.). (2020). The legitimacy of standardization as a regulatory technique in the EU – A cross-disciplinary and multi-level analysis. Cheltenham: Edward Elgar Publishing.
Ginsburg, J.C. (1989–1990). Tale of two copyrights: Literary property in revolutionary France and America. Tulane Law Review, 64(5), 991–1032.
Graz, J.-C. (2019). The power of standards. Cambridge: Cambridge University Press.
Inguanez, D. (2020). A refined approach to originality in EU copyright law in light of the ECJ's recent copyright/design cumulation case law. IIC-International Review of Intellectual Property and Competition Law, 51(7), 797–822.
Jones, R.H. (1990). The myth of the idea/expression dichotomy in copyright law. Pace Law Review, 10, 551.
Kanevskaia, O. (2023). The law and practice of global ICT standardization. Cambridge: Cambridge University Press.
Kingsbury, B. (2019). Infrastructure and InfraReg: On rousing the international law 'Wizards of Is'. Cambridge International Law Journal, 8, 171–186.
Lessig, L. (2009). Code: And other laws of cyberspace. Sydney: ReadHowYouWant.com.
Lundqvist, B. (2017). European harmonized standards as "part of EU law": The implications of the James Elliott case for copyright protection and, possibly, for EU competition law. Legal Issues of Economic Integration, 44, 421–436.
Mendelson, N.A. (2014). Private control over access to the law: The perplexing federal regulatory use of private standards. Michigan Law Review, 112, 737.
Peters, A., Koechlin, L., Förster, T., & Zinkernagel, G.F. (Eds.). (2009). Non-state actors as standard setters. Cambridge: Cambridge University Press.
Sabel, C. & Zeitlin, J. (Eds.). (2010). Experimentalist governance in the European Union. Oxford: Oxford University Press.
Samuelson, P. (2016). Reconceptualizing copyright's merger doctrine. Journal of the Copyright Society of the USA, 63, 417.
Scalia, A. (1989). Essay, The rule of law as a law of rules. University of Chicago Law Review, 56(4), 1175–1179.
Schepel, H. (2005). The constitution of private governance: Product standards in the regulation of integrating markets (Vol. 4). Hart Publishing.
Schepel, H. (2013). The new approach to the New Approach: The juridification of harmonized standards in EU law. Maastricht Journal of European and Comparative Law, 20, 521–533.
Senden, L. (2017). The constitutional fit of European standardization put to the test. Legal Issues of Economic Integration, 44, 337–352.
Stringham, E. (2015). Private governance: Creating order in economic and social life. Oxford: Oxford University Press.
Stuurman, C. (1995). Technische normen en het recht: Beschouwingen over de interactie tussen het recht en technische normalisatie op het terrein van informatietechnologie en telecommunicatie. Wolters Kluwer Nederland BV.
Sweeney, J.M. (2017). Copyrighted laws: Enabling and preserving access to incorporated private standards. Minnesota Law Review, 101, 1331.
van Gestel, R. & Micklitz, H. (2013). European integration through standardization: How judicial review is breaking down the club house of private standardization bodies. Common Market Law Review, 50, 145–181.
Villaronga, E.F. (2019). Robots, standards and the law: Rivalries between private standards and public policymaking for robot governance. Computer Law & Security Review, 35(2), 129–144.
Volpato, A. (2017). Harmonised standards before the ECJ: James Elliott Construction. Common Market Law Review, 54, 591–603.
de Vries, S., Kanevskaia, O., & de Jagger, R. (2023). Internal Market 3.0: The old "New Approach" for EU legislative harmonisation in the field of AI. European Papers, 8 (forthcoming).
Werle, R. (2001). Institutional aspects of standardization – jurisdictional conflicts and the choice of standardization organizations. Journal of European Public Policy, 8(3), 392–410.
Wiegmann, P.M., de Vries, H.J., & Blind, K. (2017). Multi-mode standardisation: A critical review and a research agenda. Research Policy, 46, 1370–1386.
Wielsch, D. (2012). Global law's toolbox: Private regulation by standards. The American Journal of Comparative Law, 60(4), 1075–1104.
Wu, T. (2007). Tolerated use. The Columbia Journal of Law & the Arts, 31, 617.

21. The normative novelty of obligations in automated contracts

Helen Eenmaa1

1. INTRODUCTION: PURPOSE AND STRUCTURE

Legal systems are built on influential legal and moral theories that rely on concepts like "obligation" or "promise" and the connected legal instruments that give a reason for action and normative power to individuals operating within these systems. Technology is increasingly integrated into social practices, including those related to law and legal organisations. Among these interactions, we see technology affecting the creation of legal relations and the fulfilment of our obligations to each other. It is worth exploring whether any of these changes affect the normative content of these relations or, more broadly, whether the rights and duties common to private law might also change.

Explaining the challenge and the core questions it raises in relation to contracting practices will be at the heart of this chapter. As we rely on concepts that tend to have a specialised meaning within the law, computer science or philosophy, we introduce the terminology by first clarifying the central concepts of advanced contract automation and then the philosophical notions of normativity, conceptualisation and coherence. Following this, we describe how, in the context of studying advanced contract automation, new complexities arise when traditional legal concepts are used in conjunction with new types of constraints in transactional environments, e.g. the constraints set by a particular software architecture or inflexible algorithms. These constraints have implications for the contract parties' legal relationship, significantly limiting the parties' options to adjust the contractual terms, the fulfilment, or the end of relations once the contract is running. This raises questions about what we may justifiably expect of contracting parties and, more broadly, about the normative content of a contract in light of possible differences between the obligations in traditional and some technology-mediated private relationships.

In this chapter, we will focus on the possibility of changes in legal obligations arising from inflexible code-based relations in advanced automated contracts, e.g. computable and algorithmic contracts, as detailed below. This and possible further studies on contracts' normative content fall within a broader discussion of the impact of technological constraints on private party behaviour, their legal rights and obligations, and how regulation can advance specific values. Unsurprisingly, research on contract automation keeps providing insight into the "as-is" state of private law, legal technologies, algorithmic governance and finance, together with evaluations of the successes and failures of the current regulation.

1  I am grateful to Przemysław Pałka, Agnieszka Jabłonowska and Olia Kanevskaia for their comments on this chapter and an excellent discussion of its central themes at the symposium of the handbook. This work was supported by the Estonian Research Council grant PSG685 "Machines that Make and Keep Promises: Algorithmic Contracts Create New Markets and Change the Fundamental Values of Contracting."


At the same time, it is not certain that this research guides the "to-be" governance of advanced automation-led contracting in the private sector, especially outside financial markets. As contract automation practices can potentially change the conceptual framework and values we have grown accustomed to in private law, this chapter's auxiliary purpose is to examine the future by raising interesting research questions and suggesting paths to address them.

2. CONCEPTS

The following discussion builds on a set of concepts, distinctions and practices. While algorithmic trading that relies on automated contracting has been common for decades, the emergence of new technologically enabled contracting practices that use smart contracts and transactions between smart devices has encouraged research in advanced contract automation. These technological advances have spurred academic interest in developing legal technologies and contract representation languages with the help of different algorithms in many law & technology research groups.2 They have also attracted business and research investments to the field of contract automation, where many initiatives build on earlier research findings or further develop the understanding of legal technologies and the computability of contracts.3

There are many kinds of automated contracts and computerised or digital agreements.4 We can distinguish between them based on the extent of their automation, but also, perhaps even more productively, based on the automated stages of the contractual lifecycle.

2  Current research and academic initiatives in general legal automation, and contract automation more specifically, include the CodeX group at Stanford and its Insurance Initiative Working Group (formerly the Contract Description Language Working Group), which is known for its work on integrating contractual logic with natural language, and for doing this in conjunction with a specific use case, working in partnership with the AXA insurance group. Other global initiatives focusing on merging code and contracts include the teams at SMUCC in Singapore and COMPULAW in Bologna. Other centres in the United States, such as the MIT Media Lab, NYU Law & Tech Hub, Yale Information Society Project, or the Berkman Klein Center at Harvard, focus less on contracts. Still, we might see a shift in that direction in conjunction with increasing attention to computable contracts (along with signs of that shift in the MIT Computational Law Report, for example). This aligns with similar research initiatives in Europe, e.g. at the Helsinki Legal Tech Lab, the Forschungsstelle Legal Tech in Germany, HIIG in Berlin, the Digital Legal Lab in the Netherlands, and the Cohubicol research project in Belgium, to name a few, which all study legal automation from an interdisciplinary perspective.
3  This includes, for example, Logical Contracts by the esteemed logician Robert Kowalski – a project focusing on using a controlled natural language for computable contracts that would be comprehensible for humans and executable for machines simultaneously. Some other business initiatives focus on guided contract drafting, contract automation and monitoring, or contract analysis ex post. Examples include SmartLaw (contract drafting for end users), Juro (contract automation), Spotdraft (contract management from creation to monitoring), PreciselyContracts, CreateiQ (contract management from creation to execution), Ontra (contract management), Lawgeex (contract drafting and analysis for legal practice) or Legito (document management). Docassemble, on the other hand, is an open-source framework for creating interactive interviews that can then be used to construct custom documents. Other initiatives focus on merging traditional contracting with blockchain-based smart contracts (Lexon) or offer platforms for automated crypto-trading, such as Trality. In conjunction with these, we see technologies developed for decentralised exchanges that are relevant when comparing trading platforms.
4  Building on our distinctions in an earlier paper (Schmidt-Kessen et al., 2022), the broad category of automated contracts includes "e-contracts" (Governatori, Idelberger, Milosevic, Riveret, Sartor & Xu, 2018), "XML contracts" (Cunningham, 2006), "computable contracts" (Surden, 2012), "algorithmic contracts" (Scholz, 2017), "smart contracts" (Szabo, 1996; Staples et al., 2017) and "smart legal contracts" (UK Law Commission, 2021).

There will be automated contracts heavily negotiated by humans before being coded, just as there will be "automated contracts of adhesion" where the code is not negotiated. Many technology-mediated contractual relations rely on hybrid contracts where the contractual clauses, standard terms, and code reinforce each other. For example, such contracts might rely on machine learning-based algorithms to search for and select ideal contracting partners or contractual terms, thereby taking over some of the functions of the contracting parties. This differs from a fully automated contractual relationship, where humans might be involved only in the programming stage of the relevant algorithms; once launched, live decision-making is handled by algorithmic agents (Sartor, 2018). Among other things, such contracts might include smart contract-based execution of the terms.

Among the contract variants, we use the concept of "algorithmic contracts" to denote contracts in which an algorithm either determines a party's obligations or automatically enforces the contract's terms, referencing their hybrid nature. The term "algorithm" is used here to denote any form of automated (i.e. machine or computer-based) instruction, ranging from simple if-then statements to a sequence of nuanced mathematical equations. Algorithmic contracts are generally "contracts in which one or more parties use an algorithm to determine whether to be bound and how to be bound" (Scholz, 2017) to an agreement. They provide the advantage of coding some of the terms of the agreement in a computer programme which can execute them automatically when certain conditions are met, without needing to involve intermediaries. As such, we can see them broadly used in algorithmic trading in financial markets (Schmidt-Kessen, Eenmaa & Mitre, 2022).

An algorithmic contract needs to be distinguished from a smart contract, which might be best understood simply as executable code (not yet an agreement in a legal sense) – when specific conditions are met, it triggers a follow-on action (Schmidt-Kessen et al., 2022, p. 3). For example, a smart contract might be used to determine that a certain amount of Bitcoin would be sent from one account holder to another at a certain time and then execute this transfer. Simply executing a transfer does not yet amount to a contract in a legal sense. Here, we have computerised protocols that can streamline transfers. However, as such, they can also be used to enforce contractual terms, i.e. they could be set to trigger an action arising from a contractual relationship. For example, once a contract is defined with the help of code and concluded, its pre-programmed enforcement (self-enforcement) could be set to take place according to the algorithm, being, to an important extent, independent of external influence or control. In this context, however, it is important to remember that "smart contract" is a term in computer science denoting computer programs. The recent literature in this field typically limits the discussion of smart contracts to those operating on blockchain, a hash chain coupled with timestamping technology primarily known as a tool for preserving data integrity (Buldas, Kroonmaa & Laanoja, 2013). This is not a limitation on discussing the legal implications of such technologies, but the descriptive and normative distinctions are important.

Algorithmic contracts may employ smart contracts but are normatively rather aligned with smart legal contracts, where the terms of an agreement are also drafted in computer language and are executed automatically by transaction protocols, minimising the need for human intervention (Szabo, 1996; De Caria, 2019; Schmidt-Kessen et al., 2022).

It is good to remember that while smart contract technology allows the automatic (i.e. algorithm-based) implementation of the terms of an agreement, we should not immediately equate this with the enforcement of a contract in a legal sense (Mik, 2017; De Caria, 2019; Schmidt-Kessen et al., 2022). Even where a smart script self-executes, there may be grounds to overrule the obligation to perform the contract (for example, grounds like force majeure); and this would then be grounds to call for reversing the execution (Hildebrandt, 2020, p. 265).

The degrees of contracts' automation vary, as do the uses of algorithms in contracts. Following Scholz (2017), we might distinguish between negotiator and gap-filler algorithmic contracts, which define the parties' responsibilities. In negotiator algorithmic contracts, parties employ algorithms in the negotiating stage before contract formation to decide which terms to propose or accept or which company to conduct business with (Scholz, 2017). In gap-filling algorithmic contracts, algorithms are employed to fill in details before or sometime after contract formation (Scholz, 2017). The parties may agree that the algorithm will determine a specific term in the agreement, e.g. the accurate price of a good for each user based on current conditions, at a certain time during the contractual relationship (Scholz, 2017). The idea behind the multiple and varied uses of algorithms in the contracts' life cycle has been to create a more secure, transparent and efficient way of conducting transactions, to increase transaction speed, to diminish the possibility of human error, and to curb manipulation.
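To make these concepts concrete, consider the following minimal sketch in Python (all names, such as Ledger, spot_price and smart_transfer, are hypothetical illustrations rather than any real blockchain or trading API). It combines a gap-filler algorithm that determines an open term (the price) at execution time with a smart-contract-like script that, once launched, carries out the transfer on its own when its encoded condition is met.

import time


def spot_price(base: float, demand_factor: float) -> float:
    """Gap-filler algorithm: determines the open price term at execution time."""
    return round(base * demand_factor, 2)


class Ledger:
    """A toy account ledger standing in for a blockchain's shared state."""

    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, sender, receiver, amount):
        if self.balances[sender] < amount:
            raise ValueError("insufficient funds")
        self.balances[sender] -= amount
        self.balances[receiver] += amount


def smart_transfer(ledger, sender, receiver, execute_at, base, demand_factor):
    """Once 'launched', this script runs on its own terms: when the encoded
    condition (the agreed time) is met, the transfer fires at an
    algorithmically determined price, with no further step at which a party
    could renegotiate or refuse performance."""
    while time.time() < execute_at:           # the encoded condition
        time.sleep(0.01)
    amount = spot_price(base, demand_factor)  # the gap-filler fills the open term
    ledger.transfer(sender, receiver, amount)
    return amount


ledger = Ledger({"alice": 100.0, "bob": 0.0})
paid = smart_transfer(ledger, "alice", "bob",
                      execute_at=time.time() + 0.1,  # "at a certain time"
                      base=40.0, demand_factor=1.2)
print(paid, ledger.balances)  # 48.0 {'alice': 52.0, 'bob': 48.0}

Nothing in this script asks either party to act after launch; it is precisely this feature whose normative significance the following sections examine.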

3. NORMATIVITY AND THE NATURE OF NORMATIVE DIFFERENCES

Automation of this kind is technically interesting, economically important and normatively significant. There are differences between traditional contracts and emerging automated contracts. Some of these differences might be normative, influencing, for example, the contractual obligations that stem from the parties' relationship in a fully automated contract. For studying these normative phenomena here (and in the context of emerging technologies more broadly), we need a clear understanding of normativity and of the nature of normative differences between regulatory approaches.

Making normative claims or taking a normative standpoint means making claims about what makes something "good, or right, or elegant, or fair, or just, or graceful, or virtuous, or well-founded," along with all their opposites (Dannenberg, 2023). In part, such claims make the practice of law what it is – a practice of principles that guide behaviour. Naturally, such claims are not exclusive to law but characterise much of politics, interpersonal relationships, art, and even the pursuit of knowledge (Dannenberg, 2023). In line with this, normativity characterises a practice and its conceptualisations, signifying that the latter are value-laden, i.e. internally or externally reliant on some moral and political choices. Accordingly, the study of normative concepts, e.g. "value," "good," "ought," "justification," "rationality" or "obligation," and of the normativity of a particular practice, e.g. contracting, is a study of our moral and political choices and a search for their best justification in a particular context (Finlay, 2010).

When it comes to private law, we see normatively different stances about the core characteristics of that area of law. These have given rise to a distinction between functionalist and conceptualist accounts of private law. For functionalists, private law is primarily a mechanism for attaining broader societal goals, deeply connected with the normative stance that law as a practice ought to promote well-being or some understanding of distributive justice.

Pursuing such goals constitutes its core. In contrast, conceptualist accounts of private law are based on a normatively different standpoint, according to which the core moral and political essence of private law is formed by the concepts and principles entrenched in the law. Accordingly, private law's doctrine emphasises the values of the doctrine's internal coherence and meaningful autonomy in pursuing the state's goals.5

The normative differences between these standpoints have been vivid across private law debates, particularly in tort theory, which provides an excellent example of how competing normative accounts about the compensation of losses provide alternatives for the normative content of the resulting regulation. Different forms of duties rely on different grounds, principles, and values, which help determine who has to do, pay, to whom, for what, and why in different circumstances. In law, we can talk about such normative differences between forms of liability like punishments (e.g. in criminal law), administrative sanctions, duties of repair (e.g. in tort law), the liability for tax payments (as a form of liability towards society for social inequality), the liability for insurance payments, and others (Coleman, 2003). These liabilities are each employed in society with particular reasoning and enforced with the help of particular (and very different) legal procedures. For example, when tort law or contract law imposes duties of repair, these duties can be honoured in several ways, most of which make them distinct from the forms of liability under criminal law, administrative law, and tax law, because the grounds of a duty of repair are not the same as the grounds for taxation, punishment, or administrative sanctions (Coleman, 2003). The values and principles that determine the content of a particular kind of liability rely on political and moral considerations, which in different areas of law are founded on different normative theories (Coleman, 2003).

Let us consider these differences with the help of a debate in tort law. There are obvious differences between the justice-based and efficiency-based approaches to liability in tort law (consider Coleman & Ripstein, 1995; Coleman, 1992; Kaplow & Shavell, 2002). A justice-based account is built on the centrality of bilateral relationships and the duty of repair that arises in these relationships as a result of wrongful losses: based on the principle of corrective justice, an individual has a duty to repair the wrongful loss that her conduct caused to the person who suffered the loss. The efficiency-based approach offers a different explanation of our legal practices: the central features of tort law are not particular duties or losses that individuals incur in bilateral relationships but the social problem of accidents in general and the minimisation of the costs that accidents impose on society in particular (Coleman, 2005, pp. 341–346). From this standpoint, any liability should be understood as a tool. Tort liability should be understood as a tool for allocating accident costs and a mechanism for optimal deterrence (Calabresi, 1970).

While the theory of corrective justice distinguishes the form of liability – the duty of repair – as an essential feature of tort law and shows that tort law gives rise to that duty because it embodies the principle of corrective justice, the economic analysis views tort liability as just one possible mechanism for pursuing efficiency (Coleman, Hershovitz & Mendlow, 2015). If the selection of the legal tool is made primarily based on how helpful it is for achieving a certain economically justified end (e.g. reduction of the costs of accidents) – certainly also reflecting a particular stance on what is morally or politically valuable – then the choice between different forms of liability is made on grounds external to them.

5  See, e.g. Pojanowski (2014), describing the relationship between private law and public regulation and asking whether private law is simply public regulation by adjudicative means or can be understood better on the basis of non-instrumental background doctrine.

The problem with this stance is its misalignment with (and possible disregard of) the distinctions between the normative content (the embodied moral and political values) of the different forms of liability. Whenever a legislator needs to regulate a particular type of traffic accident, there is a likely choice to be made between imposing liability for a wrongful loss within a bilateral relationship (in conjunction with the procedural tenets of tort law) and imposing liability towards the whole society (following the procedure of criminal law). In essence, this choice primarily involves the consideration of whether the particular types of traffic cases are a matter of individual responsibility where one person owes a duty of care to the other person (the extent depending on what we value), and, in case of failure to comply, how she can discharge her responsibility for the wrong through the duty of repair, keeping in mind the extent of the wrong and the loss that occurred. Alternatively, these particular types of traffic accidents may be deemed to violate our moral or political values to the extent that they demand the public discharge of responsibility for the wrongdoing (going beyond the bilateral relationship) and justify a more severe interference in the wrongdoer's privacy. Choosing between these forms of liability based on reducing accident costs would disregard these normative considerations and significantly diminish the meaning of this choice. Liabilities are not simply costs.

Understanding different explanatory theories of tort law is insightful for comparing the normative accounts underlying the regulation of traditional contracts and the regulatory approach we potentially need for advanced automated contracts. Theories explaining law (and emerging regulation) are expected to reflect the main normative characteristics of a particular field of social practice, just as they need to account for distinctions between forms of liability. There are normative differences between forms of liability imposed in different areas of law and between duties imposed within a particular legal field, resulting from the grounds, reasons and values embodied in imposing the duties (Coleman & Ripstein, 1995). They align with people's differing attitudes towards different conduct and are essential to understanding distinctions between different areas of law.

Regarding technology-mediated relations, where actions are constrained by information and software architecture uncharacteristic of traditional relations, the discussion on normative differences has renewed relevance. What principles and values help determine who has to pay, to whom, for what, and why in the case of fully automated contracts, considering various events that may occur over their life cycle? A new conceptualisation, attentive to the normative differences between traditional and newly emerging contractual relations, is needed to understand how private law "hangs together" when it responds to technological changes. This is particularly the case if we seek to understand private law as an area of law that is, at least ideally, internally "coherent on its own terms and meaningfully autonomous from the state's legislative aims" (Pojanowski, 2014).

Such an understanding or theory would consider how the standards, norms, and expectations characterising a particular practice differ from those of another. In traditional and algorithmic contracting, there may be differences in how obligations are created, enforced, and fulfilled. Traditional contracts often rely on linguistic interpretation and human discretion, whereas algorithmic contracts rely on pre-determined rules encoded in software. This shift from human discretion to algorithmic determinacy may lead to differences in understanding and fulfilling obligations. Additionally, the way algorithmic contracts can be enforced is bound to differ from the choices available in traditional contracts due to concerns related to explainability, black-box algorithms, and jurisdictions.

These differences in the normative nature of contracting practices highlight the need for further investigation into the implications of full contract automation for private law relationships and for what we can demand from each other.

4. THE NORMATIVE SIGNIFICANCE OF AUTOMATION

Contract automation practices have been described as having normative power for determining expected behaviour and enforcing norms. Part of this may be connected to the fact that creating contracts that self-execute or are fully computable also means introducing rigidity, inflexibility, and new kinds of constraints into the contractual relationship. Based on recent literature, advanced contract automation, which increasingly facilitates platform-mediated transactions and relationships, exhibits two types of inflexibility: (1) the inflexibility of algorithms at the level of individual transactions (paralleling the relative inflexibility of financial trading algorithms), which gives rise to private costs, and (2) inflexibility as a set of changes and systemic risks that arise in markets as a result of contracting algorithms, which gives rise to social costs (Schmidt-Kessen et al., 2022; Sklaroff, 2017). As the terms of an automated contract are encoded and automatically enforced, they are made unalterable. This first type of inflexibility in algorithmic contracts has been viewed as a source of barriers which, in an earlier paper, we described in the following manner:

In the case of smart contracts, the problem of inflexibility derives from the fact that once the code has been placed on a blockchain and launched, it carries out the indicated tasks according to encoded instructions and its performance cannot be modified or cancelled. Even in instances where the code includes special intervention functions (Tapscott & Tapscott, 2016), these are executed only at the envisioned moments based on the explicit and precise instructions foreseen at the moment when the code was launched, and as such are also inflexible. (Schmidt-Kessen et al., 2022, p. 4)
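The same point can be put in code. The following hedged sketch, assuming a simple escrow scenario (EscrowContract, refund_deadline and the other names are hypothetical, not any real platform's API), shows that even a "special intervention function" is itself part of the launched code, reachable only at the moment and under the exact condition foreseen when the contract was coded:

from dataclasses import dataclass


@dataclass(frozen=True)   # frozen: the encoded terms cannot be altered after launch
class EscrowContract:
    buyer: str
    seller: str
    amount: float
    refund_deadline: int  # a block height fixed at the moment of launch

    def step(self, block_height: int, delivery_confirmed: bool) -> str:
        # The only "flexibility" is what was foreseen and encoded in advance:
        if delivery_confirmed:
            return f"release {self.amount} to {self.seller}"
        if block_height > self.refund_deadline:
            # the pre-coded intervention function, triggered only at the
            # envisioned moment on the instructions given at launch
            return f"refund {self.amount} to {self.buyer}"
        return "hold"     # no other outcome is reachable


contract = EscrowContract("alice", "bob", 10.0, refund_deadline=100)
print(contract.step(block_height=50, delivery_confirmed=False))   # hold
print(contract.step(block_height=101, delivery_confirmed=False))  # refund 10.0 to alice
# contract.amount = 20.0  # would raise FrozenInstanceError: the terms are immutable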

In algorithmic contracts, it is possible to automate some or all contract stages, potentially across search, negotiation, formation, performance, interpretation, and enforcement (Schmidt-Kessen et al., 2022). This limits the options to adjust or modify the contract later. Changes to the terms of such contracts need to be made by creating a new contract. This algorithmic inflexibility is a constraint in the sense that it limits action, just like physical barriers and software architecture-related boundaries do. Moreover, this is not merely a descriptive matter, as some of these constraints may be illegitimate. As the technologies are developed and adopted, be it in closed boardrooms or in the blockchain community that tries to be quite transparent, they impose their normative framework on those relying on them with the "force of technology," which evades living up to the rule of law standard developed for state normative powers over time (Hildebrandt, 2020). This is something anyone using the force of law in a liberal society is held accountable to, and a similar expectation arises in relation to the force of technology (Hildebrandt, 2020).

On the other hand, the inflexibility of an automated contract generates a level of certainty and trust between the parties, as the contract terms are automatically executed without human intervention. Contracting algorithms, while limiting action, also enable it in a specific way (Schmidt-Kessen et al., 2022).

Based on the theory of constrained maximisation, we know that constraints (like property rights) are needed for the market cooperation-by-competition model to be possible (Gauthier, 1986; Coleman, 1998).6 Using constraints in the form of contracts may be central to creating markets (Markovits, 2017).

While potentially an overall positive development, we see that the shift from traditional contracting to algorithmic contracting has the potential to bring about various changes in the realm of determining expected behaviour and enforcing norms. The use of technologies in coding contracts' full life cycle could bring about any or all of the following changes (and losses) in contracting: the elimination of linguistic ambiguity and flexible interpretation, the removal of the ability to rely on incomplete contracts, a change in the skills required for contract drafting and interpretation, the loss of the ability to breach a contract, and the loss of the ability to adapt or exercise enforcement discretion at a later stage (Schmidt-Kessen et al., 2022). This shift may also result in a further loss of relation-building environments, a loss already brought about by the existing mass markets with strong power imbalances.

The fact that the contract's inflexibility does not allow it to adapt to changing circumstances easily has consequences for the parties involved. The inflexible nature of automated contracts shapes their expectations and behaviour. First, this raises the question of whether the above-listed changes (and losses) in contracting are normatively significant, i.e. do they result in changes in the grounds, principles, and values embodied in some types of contracts?7 We will take up this lead in the next section and use an example to explore it. We will study whether an obligation to fulfil one's part of a contract has the same meaning in a traditional and in a fully automated algorithmic contract.8

Second, this calls for exploring in which way the differences between traditional and algorithmic contracting could be normative.9 When Hildebrandt's team introduced their typology of legal technologies, they emphasised that there are "differences that make a difference" when distinguishing types of legal technologies. The crucial difference they had in mind was

6  Markets are typically characterised by competing and conflicting interests. Against this backdrop, Gauthier and Coleman suggest that we might, instead, think of the market as a form of cooperation. According to Coleman, rational actors choose markets as appropriate institutional arrangements for themselves, because they contribute to social stability "by allowing individuals to cooperate with one another over a broad range of areas without first having to share deep or controversial commitments about the nature of what is good or valuable in a life" (Coleman, 1998, pp. 319–320). This suggests that market competition is best characterised as a form of cooperation-by-competition, where competition presupposes cooperation (Coleman, 1998, p. 320). On this view, markets are not valuable because of their efficiency, but because they contribute to liberal stability (Coleman, 1998, p. 321).
7  For the purposes of this chapter, we take it as given that these changes matter even in circumstances where the disputes do not reach adjudication. The reasons for this are worth exploring and explaining separately.
8  A question related to this but going beyond the current analysis is whether the parallel between the duty to keep one's promises and the duty to fulfil one's part of a contract holds in algorithmic contracting in the same manner as in traditional contracting.
9  Going beyond the scope of this chapter, we should also ask about policy implications. Do we need new safeguards or risk-mitigation mechanisms appropriate to the technology-induced normative relationships? Or do we need a regulatory intervention (like a reconceptualisation of contract law), for example, if automated contracts and their legal framework, on balance, do not provide sufficient autonomy to consumers, a sufficient level of democracy in trade, or a well-justified system of liability? As some might suggest, it might be preferable not to use algorithmic contracting in consumer transactions or weaker judicial contexts.

the one between (1) the interpretation of the norm taking place in light of the facts and (2) the interpretation of the facts taking place in light of the norm. At the same time, many other "differences make a difference," as discussed above, some of them relating to how one can fulfil obligations or exercise the corresponding rights in automated legal relationships.

It is as yet unclear whether and how obligations arising from law and obligations arising from code-based relations might differ in their normative content. In Hildebrandt's view, "Code does not generate obligations. It generates constraints and may enable new types of behaviour."10 One might go even further by claiming that a rule that cannot be disobeyed cannot be a legal norm.11 If this is so also in the case of private legal relationships that are fully (or nearly fully) encoded, such as relationships in algorithmic trading on crypto exchanges and similar platforms, we have difficulty describing them with the common legal terminology. We might still want to refer to obligations there, but it seems that this cannot be done easily. Obviously, obligations have not disappeared, but, for example, in the case of computable contracts where the parties' relationship has been automated from start to finish and executed with the help of extremely inflexible code, the exercise of obligations has certainly changed. These matters will frame the content of the following discussion.

5. OBLIGATIONS

The current broadly accepted understanding of contracts involves the concepts of obligations and promises, which have the following normative content. "To have an obligation is to have a reason to act or to refrain from acting – a reason with which one is in some sense bound to conform" (Green, 2004). There are multiple sources of legal obligations relevant to contracting, e.g. the general obligation to obey the law,12 the obligation to keep one's promises, etc. Unsurprisingly, their bindingness is grounded in that of moral obligations. As many suggest, following Hume, there are natural obligations and conventional or artificial ones: we either have an obligation to do something because the act we are expected to do is right, based on the best understanding of morality, or because the act is part of the best way to coordinate our affairs in society. This duality of the grounds of obligations could be supplemented with a third category of obligations arising from our special relationships (Owens, 2012).

10  Conversation during the conference "CRCL 2022: Computational 'law' on edge" in December 2022.
11  Ibid.
12  Law has an obligation-imposing character (Green, 2004). While it is traditionally believed that there is a general obligation to obey the law, an increasing number of moral and political philosophers have cast doubt on that (Green, 2004, p. 515). Accordingly, it would be more suitable to say that law claims obedience, which we typically understand as the obligation to obey the law. This obligation of obedience refers to the responsibility of individuals and entities to follow and abide by the laws and regulations of the jurisdiction in which they reside or operate. This obligation is often considered a fundamental aspect of a well-functioning society, as laws and regulations help maintain order, protect citizens, and ensure everyone is treated fairly. One way to ground such a political obligation in theory is to say that the state has a legitimate authority to make laws and regulations and that individuals and entities have an obligation to obey these laws because a legitimate authority has made them (consider Raz, 1986). At the same time, as the disagreements between voluntarists, non-voluntarists, and their critics have shown, the source of this obligation and the conditions under which obedience can be demanded are a matter of debate (consider Shapiro, 2002).

The normative content of contractual obligations is different from the content of general obligations under the law and can be understood as a free-standing relation between contracting parties, not necessarily dependent on pre-existing social and moral relationships (Markovits, 2011; Markovits & Atiq, 2021). Having an obligation to fulfil one's part of a contract due to the hierarchical relations in a state (and the state's power to enforce a contract) is different from an obligation to obey the law. And it is yet another thing to have an obligation to fulfil one's part of a contract due to the horizontal moral relationship that emerges as a result of an agreement between parties. Here, contractual obligation refers to the commitment to a shared perspective between parties created through their expressions of intent (in statement or conduct) to enter into a contractual relationship (Markovits, 2012).

Some have suggested that contracts produce authority relations, which form a central component in the normative structure of market exchange (Dagan, 2020). Following Markovits (2012), they could be considered central to the establishment of valuable market solidarity. At the same time, the nature of these authority relations is a matter of disagreement between contemporary moral theorists due to diverging views on the link between promises and obligations. Promises are generally taken to impose moral obligations, but there is an ongoing debate on the best explanation of how such obligations arise and function (Habib, 2022). Are our obligations founded on our power to obligate ourselves or to obligate others? If founded on our power to obligate ourselves, does the obligation stem from our interest in cooperation, coordination, special relationships, or in having authority?

Most contemporary normative power theorists acknowledge the promisor's power to invoke obligations by promissory utterance, i.e. the self-obligating power. They ground the self-obligating power in our interests, just as our rights and privileges have been grounded in our interests earlier (Feinberg, 1970; Habib, 2022). For example, based on Owens's views, we are interested in having a certain practical authority over others, and this authority can be attained by being the recipient of a promise (Owens, 2006; Habib, 2022). In contrast, from Raz's point of view, the self-obligating power stems from our interest in cooperation and coordination (Raz, 1986), while for Shiffrin, it rather stems from our interests in forming and maintaining relationships with others:

The power to make binding promises, as well as to forge a variety of other related forms of commitment, is an integral part of the ability to engage in special relationships in a morally good way, under conditions of equal respect. (Shiffrin, 2008, p. 485; Habib, 2022)

In parallel with this, Darwall suggests that it is not the self-obligating power we should talk about as central to agreements but the power we have for making claims and demands on one another (Darwall, 2006; Darwall, 2011). For Darwall, such a “second-personal authority” is normatively foundational and is necessarily assumed in all cases of agreed-upon arrangements, even without an explicit agreement. Promises and other mutual arrangements (e.g. contracts) generate obligations through this second-personal authority, i.e. our power to make claims and demands on each other. Moreover, with the conception of hypothetical contractualism, Darwall suggests that the resulting moral obligations are not limited to the terms of the agreement but can also go beyond these (Habib, 2022). In sum, the deep insights these theories offer on the nature of (contractual) obligation show us the centrality of self-obligating power and the ability to make demands on one another. Providing a good guide for evaluating the characteristics of obligations in fully automated contractual relationships, they warrant further study, particularly in light of the question of

how the theoretical framework of contracting can account for the kinds of promises or obligations that do or do not arise in different kinds of fully automated algorithmic contracts.

6. OBLIGATIONS IN AUTOMATED CONTRACTS

Contract automation makes it possible that the parties' contractual obligations are, in part, determined through a combination of code and data analytics. This shift from human discretion to algorithmic determinacy may lead to differences in how obligations are understood and fulfilled and may have implications for private law relationships and the legal system. Considering the normative nature of obligations in traditional and algorithmic contracts helps us understand how private law responds to technological changes.

Based on theories of promise and obligations, our contractual relations can be explained through an authority relationship stemming either from the self-obligating power, as suggested by Owens, or from the other-obligating power, as suggested by Darwall. A contractual obligation is a means for one party to have authority over another. What is the nature of this authority relationship, and how can authority be exercised when contracts are fully automated? The use of rigid algorithms for fulfilling the contract terms certainly offers one way to exercise authority: such algorithms act as a means for both parties to limit the options available to the other, thus providing an avenue for exercising authority over the other. At the same time, such authority is weakened if having it requires continuous control across the contract's life cycle or control over a particular party (as opposed to control over anyone a negotiator algorithmic contract identifies as a suitable contracting party). Due to the pre-programming, the continuity of control and the relation-building properties are lost. Due to the possibilities for the automated search of the contracting party, the authority becomes less targeted as it is extended broadly over anyone who signs the contract.

The conclusions we can draw on authority relations in any such contracts depend on the theory – the grounds (prerequisites) of obligations – and the materialisation of these relations in algorithmic contracts. By hypothesis, the obligations that arise from the interaction of parties and technologies might differ from traditional ones not only in content but also in kind, considering the decreasing role of the parties' involvement in the contract's life cycle, their ability to make demands regarding performance, the limited avenues for contract enforcement, and the different linguistic and relational context. Suppose the creation of obligations stems from the interest in special relationships or cooperation, and automated contracts are unsuitable for meeting these interests. How would this affect the obligations? Would they cease to bind sufficiently or cease to exist?

Based on doctrine, one is obligated to perform one's part of a contract, with the other party acquiring the normative power to demand the fulfilment of contractual obligations. Considering this in the context of contracts that rely nearly fully on negotiator, gap-filler and enforcement algorithms, we see that after the pre-coded contract has been launched, the contractual relationship becomes significantly independent of the parties' external influence or control. Demanding the fulfilment of contractual obligations in the context of an algorithmic contract that has been launched on the basis of pre-coded instructions requires further consideration: could it be deemed reasonable after the launch, and would it have an adequate addressee? Surely, until the contract has been performed, the parties continue to be bound by

But would they also continue to hold “obligations to perform” their parts of the contract? Once the code has been launched, fulfilling these obligations is beyond the reach of the relevant party, and we would need to think further about how to make sense of any demands in this regard. Being obligated to perform one’s part of a contract loses an important part of its meaning when one’s part is, in essence, already performed at the time of contract conclusion, i.e. when the contract’s code starts running, the performance is certain, and intervention with the coded performance is impossible. For example, if the performance concerns a transfer of an asset, temporally, there still seems to be a distinction between (1) the completion of the asset’s transfer and (2) the completion of all necessary steps for the asset’s transfer along with the certainty that it will take place. Substantively, however, in a fully automated contract, these two aspects of performance collapse.

As a result, several aspects of the contractual relationship differ from the traditional one:

(1) The demand that someone fulfil their part of the contract makes much less sense (or at least makes sense in a different way) when that party’s contractual obligation was essentially already performed at the contract formation stage. Perhaps, in these contexts, we need to replace the contracting party’s right to demand that the other fulfil their contractual obligation with something more suitable.

(2) With a party’s contractual obligation essentially fulfilled already at the time of the contract’s conclusion (and considering the significance of time in legal relations), the temporal moment in which a party performed their part of the contract needs clarification. How long would a party to a contract still have an ongoing obligation to fulfil their part of the contract if the fulfilment is fully automated and cannot be modified or cancelled? Should we refer to an upcoming or a past contract performance?

(3) While a party must fulfil their part of the contract, this obligation might conflict with other (perhaps previously unforeseen) obligations, where, all things considered, it might be preferable for the party to default on their obligation to fulfil their part of the contract (and bear the consequences of the breach). With the inflexibility of automated contracts, this path becomes unavailable, certainly interfering with party autonomy.

With the speed of automated interactions, the concept of obligation acquires a renewed meaning. There is a difference between (1) agreeing to perform one’s part of a contract, acquiring the obligation, and subsequently acting on the (self-)imposed terms, and (2) agreeing to and performing one’s part of a contract at the time of launching the software script in the formation stage of the contract, with little need for the conceptualisation of obligation in the course of it. In the first case, typical of traditional contracts, an obligation arising from the contractual relationship gives a party a reason for action. In the second case, where one’s part is in a way performed already at the time of contract conclusion, it is debatable what we should call an obligation in that relationship. We are faced with the challenge of explaining what constitutes an obligation to perform in such relations beyond what has already been done at the stage of contract conclusion, what constitutes a wrong in such contracts (e.g. if non-performance is an example of a wrong in traditional contracts, then what would be an example here), and how to lay down the legal framework most conducive to demanding repair for wrongs and exercising the duty of repair.
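The collapse of conclusion and performance can be made concrete in code. The following is a minimal sketch in Python, not a model of any real smart-contract platform: the Ledger and AutomatedContract classes and their names are hypothetical, and an actual automated contract would run on shared infrastructure rather than as a local script. The point the sketch illustrates is structural: once launch() is called at the moment of contract conclusion, every step needed for performance is completed, and the object deliberately exposes no cancel or modify operation to which a later demand could be addressed.

from dataclasses import dataclass

# A deliberately simplified, hypothetical model of a fully automated contract.
# In a traditional contract, conclusion creates an obligation that a party must
# later choose to act on; here, conclusion and performance fuse into one step.

@dataclass
class Ledger:
    balances: dict

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        # The coded performance: executed mechanically, with no discretion.
        self.balances[sender] -= amount
        self.balances[receiver] += amount

class AutomatedContract:
    def __init__(self, ledger: Ledger, seller: str, buyer: str, price: int):
        self.ledger = ledger
        self.seller, self.buyer, self.price = seller, buyer, price
        self.launched = False

    def launch(self) -> None:
        # Contract conclusion: launching the script completes every step
        # needed for performance. Note what is missing: there is no cancel()
        # or modify() method through which a party could later be asked, or
        # could choose, to (not) fulfil their part.
        self.ledger.transfer(self.buyer, self.seller, self.price)
        self.launched = True

ledger = Ledger(balances={"buyer": 100, "seller": 0})
AutomatedContract(ledger, seller="seller", buyer="buyer", price=40).launch()
print(ledger.balances)  # {'buyer': 60, 'seller': 40}

On this picture, asking the buyer to “fulfil their part” after launch has no addressee in the code: the only act the buyer ever performed was launching the script, which is also the moment the contract was concluded.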

Moreover, what should be our response to circumstances in which defaulting on one’s contractual obligation would be preferred on the grounds of efficiency or autonomy, but the performance of one’s part cannot be stopped due to the inflexibility of the automated contract? It is worth discussing whether we should try to provide new legal grounds for taking the efficient or autonomy-enhancing path in such contracting situations, where contract conclusion and fulfilment co-occur. The changed circumstances suggest that we might need a new norm by which the incomplete performance or non-performance of one contract could give rise to either an intervention or, most probably, a new transaction or contract, which could undo some of the outcomes of the previous contract and lead to the desired result without significant complications in the parties’ relations with each other (see the sketch below).

In sum, an obligation potentially responsive to demands differs from the unresponsive-to-demands obligation characteristic of algorithmic contracts. While the former extends across the contractual relationship and, if needed, also substantiates demands in case of a wrong, the latter does not allow us to characterise the contractual relationship beyond the moment the contract was launched. Similarly, we should ask whether the obligations stemming from algorithmic contracts are binding and enforceable in the same manner as obligations in traditional legal relationships. The ability to legally enforce such contracts can be affected by factors such as the contracting parties and the algorithms used. The challenges of legally undoing, reversing, or enforcing these contracts can arise from aspects such as consideration, intent, expertise, explainability, black-box opacity, and the standards of the algorithms used (Schmidt-Kessen et al., 2022). Furthermore, the enforcement of these contracts may be hindered because the party that benefited from the contract may be unidentifiable or in a foreign jurisdiction, making court proceedings difficult (Hildebrandt, 2020, p. 265).

Given the challenges in implementing the law in relation to new technologies,13 these complications in enforcing algorithmic contracts raise the question of whether we might require innovation in legal fiction. The trajectory whereby a traditional legal concept acquires a new normative content is in line with the history of developing legal fictions or legal constructs (e.g. the legal person). With possible automation-induced changes in relations and values in private law, which could stand in the way of legal clarity, low transaction costs, and economically efficient algorithmic markets, we could benefit from re-examining the legal constructs we use today and considering new ones. For the law to retain its ability to guide behaviour, there is a need to make the implicit normative change, if it is indeed taking place, explicit.
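What such a new norm could operate on can likewise be sketched in code. The following hypothetical Python fragment (assuming, as above, a simple ledger of balances; the function and variable names are illustrative) resembles the compensating transactions familiar from payment and database systems: since the original automated transfer cannot be intercepted, the correction takes the form of a second, mirror-image transaction that undoes its outcome, rather than an intervention in the first contract.

# A hedged sketch of the suggested norm: the launched transfer cannot be
# halted, so the correction is a new, mirror-image transaction that undoes
# its outcome afterwards, much like compensating transactions in payment
# systems. Names and the ledger structure are illustrative assumptions.

def compensate(ledger: dict, original: dict) -> dict:
    """Build and apply a reversing transaction for an automated transfer
    that, on grounds of efficiency or autonomy, should not stand."""
    reversal = {
        "sender": original["receiver"],  # roles swapped relative to the original
        "receiver": original["sender"],
        "amount": original["amount"],
    }
    ledger[reversal["sender"]] -= reversal["amount"]
    ledger[reversal["receiver"]] += reversal["amount"]
    return reversal

ledger = {"buyer": 60, "seller": 40}  # state after the launched contract ran
original = {"sender": "buyer", "receiver": "seller", "amount": 40}
compensate(ledger, original)
print(ledger)  # {'buyer': 100, 'seller': 0}: the outcome undone by a new transaction

The design choice mirrors the suggestion in the text: the first contract remains performed, and the legal response takes the form of a new transaction rather than a halt.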

13  Discussions of technology regulation repeatedly come back to the idea of divergence: there seems to be a disconnect between traditional legal practices and modern technological developments. There is notable hesitation regarding technology regulation in the legal community, which stems from the fact that legal scholars diverge in their views on the legal system’s ability to guide behaviour in the midst of technological change. They tend to take one of three views on the relationship between law and technology. According to the optimistic view, the law is ready for all technological challenges. It has an open texture and is thereby ready to respond to any challenge that could possibly arise in society (a view possibly defended by Burkhard Schafer, claiming that “the law never lags behind”). Taking this position, lawyers mostly mean that legal rules apply to everyone, everywhere, and, thanks to their general nature, can be interpreted so as to apply in most situations and to most new technologies. It is worth noting, though, that they generally do not address the question of whether the law is able to embody or pursue justice or protect fundamental values in a changing society in the same or a similar manner as it did (or was meant to do) before the rapid surge in data and technologies. According to the pessimistic or progressive view, as technologies are fundamentally changing how societies operate, and considering the growing competition between the force of law and the force of technology in enforcing norms, we should rebuild our legal systems, i.e. start thinking about law from scratch (a view possibly defended by Hildebrandt). The view in the middle accepts neither complete optimism nor complete pessimism.

362  Research handbook on law and technology in the way of legal clarity, low transaction costs, and economically efficient algorithmic markets, we could benefit from re-examining the legal constructs we use today and considering new ones. For the law’s ability to guide behaviour, there is a need to make the implicit normative change, if true, explicit.

7. CONCLUSION

In a traditional contract, the parties are obligated to perform their respective obligations as outlined in the agreement. This obligation is usually framed in general legal concepts, such as promise, consideration, and performance. The terms of the contract are open to interpretation and may be subject to different understandings and negotiations between the parties. In a fully pre-programmed algorithmic contract, on the other hand, the parties are obligated to perform their obligations precisely as written in the software script that they agreed to in the formation stage of the contract. The contract terms are not open to interpretation or negotiation, as they are automatically enforced by the code, leaving no room for discretion.

The difference between the two is that traditional contracts allow flexibility and interpretation, while algorithmic contracts do not. This inflexibility can be seen as both an enabler and a barrier: it eliminates the need for negotiation and interpretation, but it also makes it difficult to modify the terms of the contract in response to changing circumstances. It is worth exploring the role of emerging technologies in shaping the nature of obligations in automated contracting and, if these obligations are indeed different not merely in content but also in kind, considering how they are being integrated into the broader legal system. Overall, the normative nature of obligations in automated contracting is a complex and nuanced area that requires further exploration and examination.

REFERENCES

Buldas, A., Kroonmaa, A. & Laanoja, R. (2013). Keyless signatures infrastructure: How to build global distributed hash-trees. In H.R. Nielson & D. Gollmann (Eds.). Secure IT systems. NordSec 2013. Lecture notes in computer science (vol. 8208). Berlin, Heidelberg: Springer.
Calabresi, G. (1970). The cost of accidents: A legal and economic analysis. New Haven: Yale University Press.
Coleman, J. (1992). Risks and wrongs. Cambridge: Cambridge University Press.
Coleman, J. (2001). The practice of principle. Oxford: Clarendon Press.
Coleman, J. (2003). Theories of tort law. In E.N. Zalta (Ed.). The Stanford encyclopedia of philosophy. Retrieved from https://plato.stanford.edu/archives/win2003/entries/tort-theories/
Coleman, J. (2005). The costs of accidents. Maryland Law Review, 64(1–2), 337–354.
Coleman, J., Hershovitz, S. & Mendlow, G. (2015). Theories of the common law of torts. In E.N. Zalta (Ed.). The Stanford encyclopedia of philosophy. Retrieved from https://plato.stanford.edu/archives/win2015/entries/tort-theories/
Coleman, J. & Ripstein, A. (1995). Mischief and misfortune. McGill Law Journal, 41, 91–130.
Coleman, J.L. (1998). Second thoughts and other first impressions. In B. Bix (Ed.). Analyzing law: New essays in legal theory (pp. 257–322). Oxford: Clarendon.
Cooter, R. & Ulen, T. (2016). Law and economics (6th ed.). Boston: Pearson.
Cunningham, L.A. (2006). Language, deals and standards: The future of XML contracts. Washington University Law Review, 84, 313–373.

Dagan, H., Dorfman, A., Kreitner, R. & Markovits, D. (2020). The law of the market. Law and Contemporary Problems, 83, i–xviii.
Dannenberg, J. (2023). Doing moral philosophy without normativity. The Journal of the American Philosophical Association. Retrieved from https://drive.google.com/file/d/1Rn79EBH80vZPSETXsrIvsoV6KJBs5hx9/view?usp=share_link
Darwall, S. (2006). The second person standpoint: Morality, respect, and accountability. Cambridge, MA: Harvard University Press.
Darwall, S. (2011). Demystifying promises. In H. Sheinman (Ed.). Promises and agreements: Philosophical essays (pp. 256–274). Oxford, New York: Oxford University Press.
De Caria, R. (2019). Definitions of smart contracts, between law and code. In L.A. DiMatteo, M. Cannarsa & C. Poncibò (Eds.). The Cambridge handbook of smart contracts, blockchain technology and digital platforms. Cambridge: Cambridge University Press.
Eenmaa-Dimitrieva, H. (2019). The problem of dependency of corrective justice: Corrective entitlements and private transactions. Canadian Journal of Law & Jurisprudence, 32(1), 59–82.
Eenmaa-Dimitrieva, H. & Schmidt-Kessen, M.J. (2019). Creating markets in no-trust environments: The law and economics of smart contracts. Computer Law & Security Review, 35(1), 69–88.
Feinberg, J. (1970). The nature and value of rights. The Journal of Value Inquiry, 4(4), 243–260.
Finlay, S. (2010). Recent work on normativity. Analysis, 70(2), 331–346.
Ganuza, J. & Gomez Pomar, F. (2016). The strategic structure of contract law (book draft).
Gauthier, D. (1986). Morals by agreement. Oxford: Clarendon.
Governatori, G., Idelberger, F., Milosevic, Z., Riveret, R., Sartor, G. & Xu, X. (2018). On legal contracts, imperative and declarative smart contracts, and blockchain systems. Artificial Intelligence and Law, 26(4), 377–409.
Green, L. (2004). Law and obligations. In S. Shapiro & J. Coleman (Eds.). The Oxford handbook of jurisprudence and philosophy of law. Oxford: Oxford University Press.
Habib, A. (2022). Promises. In E.N. Zalta & U. Nodelman (Eds.). The Stanford encyclopedia of philosophy. Retrieved from https://plato.stanford.edu/archives/win2022/entries/promises
Hildebrandt, M. (2020). Law for computer scientists and other folk. Oxford: Oxford University Press.
Kaplow, L. & Shavell, S. (2002). Fairness versus welfare. Cambridge, MA and London: Harvard University Press.
Markovits, D. (2011). Promise as an arm’s length relation. In H. Sheinman (Ed.). Promises and agreements: Philosophical essays (pp. 295–326). Oxford: Oxford University Press.
Markovits, D. (2012). Contract law and legal methods. New York: Foundation Press.
Markovits, D. (2017). Market solidarity: Price as commensuration, contract as integration. Retrieved from http://www.derecho.uba.ar/institucional/deinteres/pdf/2017_markovits.pdf
Markovits, D. & Atiq, E. (2021). Philosophy of contract law. In E.N. Zalta (Ed.). The Stanford encyclopedia of philosophy. Retrieved from https://plato.stanford.edu/archives/win2021/entries/contract-law/
Mik, E. (2017). Smart contracts: Terminology, technical limitations and real world complexity. Law, Innovation and Technology, 9(2), 269–300.
Owens, D. (2006). A simple theory of promising. Philosophical Review, 115(1), 51–77.
Owens, D. (2012). Shaping the normative landscape. Oxford: Oxford University Press.
Pojanowski, J.A. (2014). Private law in the gaps. Fordham Law Review, 82, 1689–1750.
Posner, R. (2011). Economic analysis of law (8th ed.). New York: Aspen Publishers.
Raz, J. (1986). The morality of freedom. New York: Oxford University Press.
Sartor, G. (2018). Contracts in the infosphere. In S. Grundmann (Ed.). European contract law in the digital age (pp. 261–277). Cambridge: Intersentia.
Scholz, L.H. (2017). Algorithmic contracts. Stanford Technology Law Review, 20, 128–169.
Schmidt-Kessen, M.J., Eenmaa, H. & Mitre, M. (2022). Machines that make and keep promises – Lessons for contract automation from algorithmic trading on financial markets. Computer Law & Security Review, 46, 105717.
Shapiro, S. (2002). Authority. In J. Coleman & S.J. Shapiro (Eds.). The Oxford handbook of jurisprudence and philosophy of law. Oxford: Oxford University Press.
Shiffrin, S.V. (2008). Promising, intimate relationships, and conventionalism. Philosophical Review, 117(4), 481–524.

Sklaroff, J.M. (2017). Smart contracts and the cost of inflexibility. University of Pennsylvania Law Review, 166, 263–303.
Staples, M., Chen, S., Falamaki, S. & Ponomarev, A. (2017). Risks and opportunities for systems using blockchain and smart contracts. Sydney: Data61 (CSIRO).
Surden, H. (2012). Computable contracts. UC Davis Law Review, 46, 629–700.
Szabo, N. (1996). Smart contracts: Building blocks for digital markets. Extropy, 16.
Tapscott, D. & Tapscott, A. (2016). Blockchain revolution: How the technology behind Bitcoin is changing money, business, and the world. New York: Penguin.
UK Law Commission. (2021). Smart legal contracts: Advice to government. Retrieved from https://www.lawcom.gov.uk/project/smart-contracts/

22. STS jurisprudence: exploring the intersection between science and technology studies and law

Kasper Hedegård Schiølin1

1. INTRODUCTION

In pre-modern times, nature and law belonged to the same sphere of reasoning. Nomos (laws) were deduced from physis (nature). As much as acts could be just by nature, they could also be unjust by nature.2 Killing another person would, according to this logic, be unjust by nature, in so far, at least, as killing itself is not just by nature. The gods, too, were fused into this amalgam, lending law not only natural but also divine authority. Thomas Aquinas (1989, p. 280) famously conceptualized this metaphysical unity of disciplines that later divorced into separate, and carefully demarcated, fields as lex naturalis, or natural law. While many of the (positive) Roman laws derived from natural law, the long period of the Roman Empire was also the beginning of the separation of science, law, and religion into more specialized forms of expertise.

Leaving out the divine, there is, however, a relatively new field, new, at least, in comparison with that of law, which has sought to reestablish the intrinsic connection between law and science (and technology), namely science and technology studies, or STS. Reestablishing, to be sure, in the sense of rethinking how law shapes, and is shaped by, science and technology, or of how science and technology and law are, in the vernacular of STS, co-produced. What the world is and what it ought to be cannot be separated in the reasoning of STS without doing damage to both the ontological and the normative side of the equation, precisely because they are, according to the field, two sides of the same coin. The most crucial institutions of modern society bear witness to this community of disciplines. Think, for example, of the complex legal matters surrounding medical procedures in hospitals, including clinical trials; the ethical and legal guidelines regulating scientific experiments at universities and laboratories; the introduction of algorithmic profiling in law enforcement; the germinating data protection laws harnessing the once almost unregulated realm of digital technologies; or, for that matter, the frequent use of scientific expertise, for example forensic science, in courtrooms – while the law, in return, is expected to keep pace with the staggering developments in science and technology by making new laws and updating obsolete regulation.

In this chapter, I will present two of the most important contributions in STS for rethinking the relation between law and science and technology. I tentatively conceptualize this as STS

1  This work has been supported by the Independent Danish Research Fund (1024-00178B) and the Carlsberg Foundation (CF19-0432).
2  See, for example, Aristotle’s Rhetoric, Book I – Chapter 13, where he quotes Empedocles’ argument for killing no living creatures: “Nay, but, an all-embracing law, through the realms of the sky. Unbroken it stretcheth [sic], and over the earth’s immensity” (1373b).


jurisprudence. With this coinage, I do not mean jurisprudence in a strict, systematic sense, which would not do justice to STS’ style of reasoning. I rather use it as a conceptual umbrella under which I attempt to unearth and bring together prolific fragments of legal thinking in STS.

As STS will be uncharted territory for many readers of this book, I will begin the chapter by offering a (very) brief history of the field and its basic assumptions. Focusing on the field’s intersections with law provides the backdrop for the following two sections, where I take a closer look at Bruno Latour’s sociology of hybrids and Sheila Jasanoff’s crucial work at the intersection of law and science and technology. While STS has originally been more occupied with science than technology, I will, in accordance with the overall aim of the book, place a stronger emphasis on technology. This choice, however, can also be justified more pragmatically: while science and law have been the two most prominent authorities of modern society, holding sway, respectively, over what is true and what is just, technology is today challenging this bifurcation of power, interfering in both what we conceive of as true and what we conceive of as just. Technology is perhaps even surpassing the authorities of science and law, becoming the primary authority of the world. Turning off science and law for a day or two would probably be less unsettling than disconnecting technology for only a few minutes. Moreover, distinguishing between science and technology does not make much sense anymore. In general, science is highly technological and technology is highly scientific, which is why the notion of technoscience is perhaps better at capturing the confluence of the two domains (Kochan, 2010).

2. A BRIEF HISTORY OF STS

It is no coincidence that STS emerged in the 1960s, a decade that saw a considerable change in the public sentiment toward science and technology, which since the aftermath of the world wars had been viewed unequivocally as carriers of progress and peace.3 But in the 1960s, the public learned about the destructive features of science and technology, whether in the form of napalm in Vietnam, insecticides such as DDT silencing nature in people’s own backyards (Carson, 2002), or the escalating arms race of the Cold War (Turner, 2008).

While these developments constitute the sociohistorical backdrop to the cautious beginnings of STS, the field also emerged from new intellectual undercurrents that challenged the authority of science. It is probably not fair to trace back the antecedents of any academic field to only a single work or a single person, but it is unquestionable that Thomas Kuhn’s influential 1962 book, The Structure of Scientific Revolutions, was crucial for the formation of STS, providing the field with a substantial intellectual backbone and, for many, a provocative edge.4

3  Highly representative of this sentiment is Vannevar Bush’s 1945 report to President Franklin D. Roosevelt, Science, The Endless Frontier, in which Bush, the engineer, former vice president of MIT, and the first director of the Office of Scientific Research and Development, argued strongly for the ability of science and technology to solve almost every societal problem in both war and peace, and hence for considerably increased public funding to realize this promising potential (Bush, 1945).
4  Ludwik Fleck’s (1997) descriptions of the social processes in which scientific claims come into being, and Robert Merton’s (1979) demonstration that certain norms guide scientists, were two of STS’ other important intellectual antecedents.

Kuhn’s central claim was that, viewed historically, the development of science is not reducible to a linear process, nicely ordered by a rationality inherent to science itself. Rather, Kuhn demonstrated, it was a much more complex process, in which the scientists’ subjective world-views competed for the right to frame, and eventually solve, the accumulated problems that perplexed other, usually older, scientists. In other words, science was not as objective, autonomous, and ahistorical as it appeared to be. Hence, to fully understand and assess science, the many subjective factors driving it had to be accounted for. This crucial conclusion was an invitation not only to historians, such as Kuhn himself, but also to sociologists, anthropologists and others trained in studying human societies and human behavior.5 After all, scientists were also behaving in certain ways and living in certain societies, and were thus excellent subjects for the scrutiny of social science.

The two STS pioneers Bruno Latour and Steve Woolgar (1979) were among the many who took the baton from Kuhn in their foundational 1979 book, Laboratory Life, which was based on long periods of fieldwork in a laboratory at the Salk Institute for Biological Studies. Equipped with methods developed by anthropologists to study indigenous people, Latour and Woolgar were now turning the ethnographic lenses toward scientists, studying them as an outlandish tribe with a distinct language, customs, rituals, and cultural expressions. The book’s subtitle, The Construction of Scientific Facts, is not only evocative for the book, but captures the most fundamental metaphysical underpinning of STS, namely constructivism. The assumption that facts are not discovered, but constructed by all kinds of more or less rational means, subjects as well as objects, is thus the point of departure for most STS studies. This implies that all actors involved in the construction of facts have to be considered in order to understand science: the scientists, of course, but also the samples they are analyzing and synthesizing, the technicians, their microscopes and their mice, the journals in which their results are published, including their editorial policies, the funding structures that made the various research activities possible, but also, and particularly relevant for this book, the laws regulating what is possible in a laboratory and what is not. The simple method that can be derived from these early contributions to the field has been rendered into a catchphrase often repeated by STS scholars: follow the actors! Answering this methodological call is supposed to lead the analyst in and out of the laboratory, to unexpected places, indeed also often into courtrooms, slowly unraveling an ever-expanding network of human and non-human actors, who are all contributing to the making of the facts and knowledge in question.6

Around the same time, in the late 1970s, sociologists at the University of Edinburgh, particularly David Bloor and Barry Barnes, were contributing to the development of STS by advocating, in a more programmatic fashion, for a so-called strong program aimed at providing the new field with some theoretical and methodological grounding.

5  Interestingly, legal scholars, such as Lon Fuller, were in the same period pointing to Kuhn’s studies as an encouragement to break with legal positivism by not separating out the social context in which law appears, but rather emphasizing it as instrumental for the making and understanding of law (cf. Jasanoff, 2008, p. 763).
6  While “making” or “fabricating” facts today connotes fraud, the etymology of the word “fact” is surprisingly on par with the constructivism of STS. “Factum” is the past participle of the Latin “facere” (“to do” or “to make”), suggesting that a thing has been done or performed. The word thus originally came with a much less static meaning – more a verb than a noun – than we ascribe to it today. Interestingly, this newer meaning of “fact” appeared in the 17th century in relation to the emergence of science as an autonomous field. See also Schiølin (2012) for a discussion of the meaning of verbs and nouns in STS.

According to causality, the first principle of the strong program, the explanation of scientific knowledge is not inherent to science, but has to be found and studied in the psychological and social factors that condition science and decide what knowledge gets elevated to facts. The second principle, impartiality, ensures that both the successes and the failures of science are studied equally; the history of science gets too embellished, and simply not true, when rendered into a textbook version of its great men – never women – and breakthroughs. The third principle, symmetry, builds on this point by stating that the same types of explanation must be used for both failures and successes; science neither fails because it is polluted by too many social interests, nor succeeds because it is an example of pure science. Finally, the last principle, reflexivity, demands that social scientists also apply the principles to their own work, to understand what conditions their own theories and makes them fail or succeed (Bloor, 1991).

At the University of Bath, Harry Collins suggested that the strong program needed to be expanded with what he named the empirical program of relativism (EPOR), directing the social study of science toward the reception of scientific theories and facts. According to EPOR, the researcher should account for the interpretative flexibility of science, showing how certain interpretations get stabilized and gain authority (closure), and connecting these closures to specific social and political factors (Collins, 1981). COVID-19 serves as an illuminating example, as the myriad data generated by COVID-19 have been interpreted very differently by people, governments, science advisors and even health professionals around the world. As such, COVID-19 is a local, social and negotiable phenomenon, rather than a fixed universal one. Its meaning is, in other words, flexible. What was agreed to be a high infection number and a high fatality rate in Denmark, causing authorities to shut down vital parts of society, declaring a state of emergency and enforcing exceptional measures, was interpreted more leniently by the Swedish health authorities and did not give rise to similar restrictions. The reasons for these different interpretations of COVID-19 should, according to EPOR, be traced back to the heterogeneous arrangements and different political inclinations of the otherwise rather similar Swedish and Danish societies (cf. Nielsen & Lindvall, 2021). That COVID-19 is at once a medical, cultural, and political phenomenon does not make it more or less real, or more or less dangerous. Quite the contrary; it is real exactly because it is constructed.

These early theoretical developments in STS would later, in the 1980s, be developed into analytical frameworks that lend explanatory power to the “T” in STS. Wiebe Bijker, Trevor Pinch, and Thomas Hughes (2012), for example, suggested the Social Construction of Technology (SCOT) as an approach that adopts many of the principles from the strong program and EPOR to challenge the idea of technological determinism, the widely held view that technology drives human actions and hence, eventually, history.
According to this view, technology, like science, has an inherent logic, insusceptible to external factors, but still convincing and strong enough to drag the world in an inevitable and irreversible direction.7 However, as SCOT has demonstrated through various empirical studies, it is precisely the external, social factors that decide which technologies get accepted and which get rejected, and these decisions do not follow an evolutionary logic according to which the most advanced technology survives. When we want to understand why a given technology – a smartphone, a bike or a plane – is widely used and why it looks and works as it does, we should, according to SCOT, start the interrogation by trying to understand the various needs of the various social groups that have been using it.

7  For a discussion of contemporary examples of technological determinism and the politics that it carries, see Schiølin, 2020.

Without going further into the many sub-branches of STS, or the profound, and often heated, discussions that have characterized the field, it suffices here to round off this brief history by noting that STS today is a well-established field with several dedicated journals, national and international conferences, academic societies, and study programs and research centers at some of the most prestigious universities around the world.

3. HUBRIS AND HYBRIDS

Bruno Latour’s most explicit work on law, The Making of Law, from 2002, is a very detailed, indeed abstruse, ethnographic study of administrative law at the Conseil d’Etat, one of the French supreme courts. While the study excels in empirical detail and provides insights into mechanisms of law-making that only a few people have had access to, it is in Latour’s more theoretical writings on science and technology, where he seeks to break with the deep-rooted subject-object dichotomy, that we, in my view, find his most original and thought-provoking challenges and contributions to legal thinking. If read carefully, some of Latour’s most fundamental ideas, such as hybrid actors and technical mediation, invite a far-reaching rethinking of core legal concepts, such as legal personality, liability, property, tort, patents, etc. Interestingly, some of Latour’s more than 30-year-old ideas, which have been pivotal for STS research, are now starting to appear in discussions of the legal recognition of non-humans, such as rivers, ecosystems, and animals (see, for example, Pecharroman, 2018). It is this train of Latour’s thought that I attempt to unfold in the following, with the dual aim of extending STS’ explanatory power beyond the field and into innovative legal thinking, and of creating a space for legal thinking in STS. That is, what I in this chapter call STS jurisprudence.

Most people in the Western world consider themselves moderns, as people fortunate enough to live in the epoch of modernity. Often this identity is proudly, if not with a sense of indulgence, contrasted with lives that are primitive, medieval, or even uncivilized and irrational. But what does it mean to be modern, and how can we at all claim to be so? That is the question that drives Latour’s 1991 magnum opus, We Have Never Been Modern, and as the title of this arguably most influential work of STS suggests, our sense of living a modern life at the pinnacle of human civilization is perhaps somewhat misguided. The constitution of modernity is, according to Latour, premised on the separation of powers into nature and society, that is, an imagined bicameralism that cleanly cuts given objectivity from constructed subjectivity. We are moderns because we believe in this division and because we meet all attempts to cross or transcend it with processes of purification, which seek to keep the two chambers hermetically sealed, so that they cannot contaminate each other, or, as Latour (1993, p. 35) states, “here, on the left are things themselves; there, on the right, is the free society of speaking, thinking subjects, values, and of signs”. The constitution of the pre-modern world, on the other hand, rests on monism rather than dualism. As evident in many classic works in anthropology, subjects, objects, gods, and goddesses co-exist and enter into (un)holy configurations.

However, despite persistent attempts from the earliest Enlightenment thinkers onwards to divide and purify the world, hybrids have not ceased to emerge and thrive. Quite the contrary. As we have seen, COVID-19 is an excellent example of a hybrid that draws together science, politics, culture and law, just as HIV, global warming, contraception pills, nuclear energy, and other modern phenomena have done before it. To be

modern is, paradoxically, to live in a world of hybrids, while maintaining that it is pure and divided:

Everything happens in the middle, everything passes between the two, everything happens by way of mediation, translation and networks, but this space does not exist, it has no place. It is the unthinkable, the unconscious of the moderns … Century after century, colonial empire after colonial empire, the poor premodern collectives were accused of making a horrible mishmash of things and humans, of objects and signs, while their accusers finally separated them totally – to remix them at once on a scale unknown until now. (Latour, 1993, pp. 37–39)

Only as puritans, abstaining from crossing the boundaries between nature and society, can the moderns distinguish themselves from the pre-moderns. Without being fully able to account for the connection, some etymologists even suggest a linguistic relation between “hybrids” and “hubris” (Greek: “hybris”). And if the etymology is vague, then the myths are crystal clear: as soon as Prometheus stole fire from Hephaistos and gave this primordial technology to the humans, rendering them into hybrids, he was severely punished by the gods for his original sin of creating hybrids. The humans, too, could not escape punishment. Their hybridity, their hubris, was swiftly punished with nemesis. As this short mythological digression demonstrates, hybrids have been closely related to law from the very beginning of human thinking. When humans equip themselves with technology, low-tech as well as high-tech, they become hybrids, transgressing divine as well as earthly laws, threatening to disturb the order of things. Or, in Latour’s (1993, p. 139) words: they “… will be banished – should we officially say taboo?” Indeed, many positive laws, such as speed laws or food and drug laws, directly address hybrids. Even within constitutional law, such as the notorious Second Amendment to the United States Constitution, we find laws that are directly prompted by the existence of hybrids. Drivers, farmers, pharmacists, and gunmen all share hybrid identities, which have been woven together in acts of crossing the safeguarded boundaries of modernity, between nature, on the one side, and culture (and law), on the other. However, instead of untying the Gordian knot of hybrids by artificially dissolving, dividing, and ordering them into purified domains, we should, according to Latour, rather try to retie it. That is, learn to live with and as hybrids, which obviously also includes rethinking law.

To be sure, Latour is not the only STS scholar who has argued that hybrids need to be brought out of obscurity and onto the front stage, where they rightfully belong. Indeed, hybrids, and with them the undivided, monistic view of culture and nature, constitute one of the most fundamental tropes in STS literature. The prominent STS scholar Donna Haraway, for example, lends the theme an evocative expression in the title of her 1997 book, Modest_Witness@Second_Millennium.FemaleMan©_Meets_OncoMouse™. Haraway’s point is that despite science and technology’s omnipresent and decisive role in our lives and societies at the beginning of the new millennium, most of us never actively engage in them, let alone understand them. Who decided the format of the email address, for example, what does it mean, and what actors are drawn together within its cryptic formatting? In Haraway’s words, we are rather “modest witnesses”, that is, passive spectators to the carnival of science and technology, where the cure for breast cancer is pursued through the objectifying gaze of male scientists injecting cancer cells into mice, whose patented and trademarked sacrifice promises salvation to women around the globe. As with Latour, Haraway’s point is that all the distinctions that we thought of as defining characteristics of modernity dissolve upon closer scrutiny.

To Latour, the question of hybrids is ultimately a constitutional question that demands a radical interpretation of the symmetry principle from the strong program: how to create a constitution that does not differentiate between humans and non-humans, but treats them symmetrically by not giving any of them ontological precedence? For this reason, Latour’s thinking has often been described as promoting a “flat ontology”, where all beings are equal (see, for example, Harman, 2009). Toward the end of We Have Never Been Modern, Latour (1993, p. 144) spells out the constitutional consequences of this ontology, when he suggests that humans and non-humans alike should be represented in what he calls a Parliament of Things:

Natures are present, but with their representatives, scientists who speak in their name. Societies are present, but with the objects that have been serving as their ballast from time immemorial. Let one of the representatives talk, for instance, about the ozone hole, another represents the Monsanto chemical industry, a third the workers of the same chemical industry, another the voters of New Hampshire, a fifth the meteorology of the polar regions; let still another speak in the name of the State; what does it matter, so long as they are all talking about the same thing, about a quasi-object they have all created, the object-discourse-nature-society whose new properties astound us all and whose network extends from my refrigerator to the Antarctic by way of chemistry, law, the State, the economy, and satellites. The imbroglios and networks that had no place now have the whole place to themselves. They are the ones that have to be represented; it is around them that the Parliament of Things gathers henceforth.

With the Parliament of Things in place, let me now take a closer look at the processes of mediation in which hybrids get constructed as responsible actors. As I will show, Latour’s break with the deeply sedimented subject-object dichotomy and the hegemonic position of the human as the sole acting/responsible subject poses a radical question to jurisprudence. The question is, to put it boldly, and in the terminology of law: can the legal subject be replaced with a new concept of a legal hybrid?

3.1 Reconfiguring Responsibility

In a paper from 1994, Latour proposes a theory of mediation that challenges the view that technology is a neutral tool through which humans interact with the world; a view that is, not surprisingly, very popular among leaders in the tech industry. When Mark Zuckerberg, for example, testified before the US Senate in 2018, he referred to Facebook as a “tool” 11 times, hence inserting himself into a 60-year Silicon Valley tradition of exempting those who make tools from any responsibility for their use; the user, and only the user, decides to use the “tool” for either bad or good purposes (Weigel, 2018). Latour, however, disagrees and provides a simple, yet illuminating, example through which he unfolds his theory of mediation. The example originates from the heated and tragically ever-present weapons debate in the United States, where opponents of unrestricted weapon sales unite under the slogan “Guns kill people”, while the National Rifle Association (NRA) answers with an equally concise catchphrase, “People kill people, not guns”, resembling Zuckerberg’s insistence that Facebook is a mere tool that neutrally carries human will and intention. In the first account, the gun means everything and can transform anybody into a killer, while, in the second account, carrying a gun in the bag means nothing, as the killer would have killed by other means had she not carried it. Latour (1994, p. 31) uses the example to ask, rhetorically: “Who or what is responsible for killing?” “Is the gun no more than a piece of mediating technology?” To answer this

fundamental question about responsibility, human will, and the neutrality of objects, he suggests four meanings of mediation.

The first meaning, translation, involves the idea that when the two actors, “human” and “gun”, meet, they construct a third actor, who is neither transformed into a killer per se, as the gun opponents would have it, nor just rendered into a more effective killer, as in the NRA version. With a gun in the hand, a subject who intends to stop a fight might become a killer, while a hunting gun in the hand of an upset subject might end up as a murder weapon. As Latour (1994, p. 34) puts it: “It is neither people nor guns that kill. Responsibility for action must be shared among the various [actors]”. When subjects and objects meet, their purposes, goals and possibilities are thus translated into a hybrid that did not exist before: a gunman, a driver, a pilot, a Facebook user or a CEO.

The second meaning of mediation is composition, with which Latour seeks to account for the manifold actions that are involved when we use technology. Writing a short post on Facebook, for example, involves the electricity network, data centers and the air conditioning systems that cool them, the maintenance of various infrastructure, content moderators, ISO standards, national and international regulations, etc. Or, as Latour (1994, p. 35) catchphrases it: “B-52s do not fly, the US Air Force flies”. According to Latour, this means that action is not only the property of a single actor, but should instead be traced to the association of actors, humans as well as non-humans, among which action is distributed.

Many of these actors, however, are blackboxed, which points to the third meaning of mediation, reversible blackboxing. In order to fully understand mediation, the analyst’s role is to do reverse blackboxing, which involves tracing down the many actors that have been rendered invisible in the complex composition of mediation. Through this analytical procedure, it becomes possible to inspect which actors have played a role in carrying out an action, such as flying, posting on Facebook, or killing. Opening up a blackbox can, moreover, be compared to opening up a Russian doll, as the parts inside a blackbox are themselves blackboxes, containing actions that are concealed yet crucial for the mediating technology to work seamlessly. Indeed, when technologies break down, we often do reverse blackboxing intuitively through a process of troubleshooting, as in the following example: Why is my post not being published on Facebook? Has my internet provider shut down my internet? Am I too far away from the wireless router? Did I write something that might have been too offensive to the human or artificial content moderator? Did I hit the wrong button? The more questions, the more the network of actors responsible for carrying out our actions is rendered visible, and we come to understand how even our simplest actions depend on the work of many other actors. However, according to Latour, we are most often not able to count the many actors involved in our actions, and parts of the network of associations will therefore remain concealed. While we are constantly involved in such mundane processes of troubleshooting, they also mimic the more formal practice of cross-examination used to settle, for example, liability lawsuits. In both cases, reverse blackboxing is a question of locating responsibility, but in the latter case, the responsible actor, or the legal subject, is almost always a human.
The fourth and final meaning of mediation is delegation. Latour’s (1994, p. 39) mundane but clear example is a speed bump, into which all kinds of actors’ interests and intentions have been translated in the form of a carefully designed pile of concrete on the road. Speed bumps are thus not just made of concrete, but of traffic laws, political decisions to cut down expenses on law enforcement, and local wishes to protect children playing on the street. All technologies are imbued with interests, and the more high-tech the technologies are, the more interests tend to have been translated into them.

To stay with the example of Facebook, it is well known that the company wants us to stay as long as possible on the platform and to provide it with enough personal data from which targeted advertisements for third-party companies can be generated. This interest is obviously shared by the third-party companies, but it has to be aligned with national and international regulators’ interest in protecting the users’ data and privacy, and with other users’ interest in communicating on the platform. While it would have been possible to account for many other interests, the point here is that when we use technologies such as Facebook, we are under the influence of many actors’ interests, which have been translated into, and hidden under, their sleek, homogeneous and hence less complex interfaces. Whether mediated by a speed bump on the road or a social media platform, we are not acting alone, but triggering or responding to a myriad of interests and intentions from actors that we might not even know.

Latour’s ultimate goal could be described as developing a new sociology for a new society; a sociology that allows for understanding how non-human actors, made of concrete, metal, glass or silicon, translate and are translated into social order, institutions, politics and the similar subjects that “the old sociology” has been occupied with. This project implies nothing less than a break with the longstanding dualism between subject and object, which are unified in the “new society” as hybrids. The implications of this new ontology for legal thinking are still unclear, but, as mentioned earlier, non-human actors are increasingly gaining legal rights, and while hybrids are not yet challenging the autonomous legal subject, legal bodies commissioned to regulate technology are multiplying, and technology companies are ever more often the subject of legal investigations, litigation and court cases. In the next section, Sheila Jasanoff’s notions of co-production, ontological surgery, and constitutionalism will take us even closer to the junction of law and technology, where more and more of the most crucial issues in society are formed and settled, whether they concern the privacy of citizens and their democratic institutions, our relation to nature, or the workings of the economy.

4. THE UNION OF IS AND OUGHT

The overarching purpose of science and technology is ontological; they produce knowledge about what the world is – this is a tiger lily, this is a mammal (not a reptile), that is Jupiter, this is the freezing point, etc. – and explore how the things of the world can be utilized, shaped, and combined to fit human needs, as well as to create new ones. Law, on the other hand, serves a normative purpose; it decides how we ought to relate to one another and to the other beings that we encounter. Put differently, science and technology concern knowledge-making, truth, and utility, whereas law is about law-making, justice, and power. A trained lawyer, Jasanoff (2005) points out various recurrent conflicts that have kept law and science and technology apart. The notion of “law lag”, for example, orders science and technology and law in such a way that the latter is always lagging behind the novelties and breakthroughs of the former. Likewise, the culture of law, which implies finding consensus between different values, and the allegedly purely rational culture of science and technology are often considered irreconcilable; gravity, for example, is not supposed to be a compromise. The question, however, is whether knowledge production and technological innovation have more in common with law-making than this division of labor suggests.

Following Jasanoff (2005, p. 51), the answer is clear. The institutions of science and law share many similarities and they are, moreover, often intersecting in various ways:

Truth is found in each institutional setting by establishing a correspondence with some exogenous reality: a legally significant event in the law and a phenomenon of nature in science … Both courts and labs can thus be thought of as experimental spaces in which assertions about reality are constructed, presented, tested, held accountable to standards, and eventually determined to be reliable or unreliable.

Truth is thus being settled in both scientific and legal settings, but without establishing a watertight boundary between the two, between the “is” and the “ought”. First, because science and technology are often employed in both law enforcement and court cases. Think, for example, of the use of breathalyzers to detect drivers under the influence, or the use of blood or DNA samples, or of forensic science in general, as crucial means of providing evidence in assault cases. Second, because law also interferes with science and technology, when, for example, it restricts the extent to which stem cells from human embryos can be used in basic research and the subsequent development of stem-cell treatments for various medical conditions. Or when regulations, such as the European Union’s GDPR, put restrictions on the extent to which technology companies can use their users’ data. As Jasanoff (2008) puts it, “science and technology [does] not only assist in resolving legal disputes, but also participate in producing them”. Science and technology are, in this perspective, not autonomous, self-regulated entities, but deeply embedded in, and shaped by, the normative, legal orders that they work in, while these are, in turn, shaped by the constant developments in science and technology. In STS, this fundamental dynamic has been conceptualized as co-production. In the influential volume States of Knowledge, Jasanoff (2004, pp. 2–3) defines co-production as:

… shorthand for the proposition that the ways in which we know and represent the world (both nature and society) are inseparable from the ways in which we choose to live in it … Scientific knowledge, in particular, is not a transcendent mirror of reality. It both embeds and is embedded in social practices, identities, norms, conventions, discourses, instruments and institutions … The same can be said even more forcefully of technology.

In an earlier work, Science at the Bar, Jasanoff presents rich empirical material that attests to the multi-faceted ways in which law and science and technology co-produce social and scientific order. Most remarkable, perhaps, is the book’s final example, where Jasanoff shows how law and science and technology interact in deciding on perhaps the most fundamental question of all, namely the question of life and death. She describes how the process of dying has, over the past century, moved from the private to the public sphere, from the home to hospitals and institutions, where advances in medical technologies, such as feeding tubes, life support, heart starters, dialysis, etc., have prolonged life or slowed down dying considerably. As such, these technologies have blurred the distinction between life and death and transformed the meaning of being and non-being; concepts which have been loaded with religious and philosophical meaning for thousands of years. Is a person, for example, dead when her brain has ceased to function and she has fallen into an irreversible coma? Before life-supporting technologies, that question would have been nonsense, as no human body can work without either a functioning brain or life-supporting technologies.

Her death would, in other words, have been unquestionable. The example thus shows how the meaning of the human being has been fundamentally changed by technology.

4.1 Ontological Surgery and Constitutionalism

But then what counts as a living human being, and what expertise should be called upon to answer the question? Since legal rights normally follow living human beings, the seemingly mere ontological question also becomes a normative one, that is, a question of what ought to be counted as a living human being. Pointing to the landmark decision of the Supreme Court of the United States, Cruzan v. Director, Missouri Department of Health, Jasanoff describes how the Court ruled against the request of Nancy Cruzan’s parents to remove the life support that had artificially kept her alive for seven years after she entered an irreversible coma. The explanation that followed the ruling was that cancelling the life-supporting technologies would have required that Nancy Cruzan, while she was still conscious, had expressed a will to do so. Notice here how the ontological gray zone of irreversible coma, rendered possible through technology, enters the normative arena of law and gets connected with ideas of personal liberty and the freedom of choice. As Jasanoff puts it: “The Court had recognized for the first time that personal liberty includes the right not to perpetuate a cognitively empty life”. What becomes clear in the Supreme Court ruling, and the many “right to die” lower court cases that led toward it, is that technology fundamentally interferes with life itself. Here, technology is not just an app that offers to solve a practical problem, but a facilitator of new ways of (human) being, which were previously unthinkable. In such cases, where technology redefines life, the constitution is rewritten to accommodate this new life. Technology and law are, in other words, co-producing what counts as a human being.

While life-supporting technologies extend life, and hence legal entitlements, beyond conscious life, Jasanoff (2011a) elsewhere describes how other technologies extend and reinscribe similar rights into prenatal life. With the advancement of biotechnologies, such as cloning and stem-cell technologies, developing around experiments on human embryos, the question of what counts as a human being reappeared. In 1978, in the United Kingdom, the world saw the first person, Louise Brown, conceived in a petri dish, or by in vitro fertilization, as it is formally called. At the time, in vitro fertilization was not only a new scientific territory, but also a legal gray zone with almost no regulation. To fill this gap that new technologies had created, a committee, the Warnock Committee, was commissioned by the UK government to inquire into the impacts of embryological research. In its 1984 report, the committee recommended limiting the use of human embryos to the first 14 days after fertilization, as neurological structures only become visible after the embryo’s first two weeks (Warnock, 1985). Later, in 1990, the recommendation was rendered into law in the United Kingdom in the Human Fertilisation and Embryology Act. While the absence of neurological activity in the previous example did not strip comatose patients of their constitutional rights, neurological activity is here, interestingly, used as a demarcation line for what counts as life:

The first “natural” phase of development recognized in UK law consists of a period up to fourteen days, when the conceptus is neither regarded nor treated as human. In the second part, after fourteen days, the developing entity is seen, and treated, as protohuman life, with corresponding moral entitlements. (Jasanoff, 2011a, p. 63)

What happened in the Warnock Committee’s division of prenatal life into two discrete periods, as well as in the Supreme Court’s granting of rights to post-conscious life, can be described as ontological surgery. That is, according to Jasanoff (2011a, p. 61), a procedure for “… deciding how to describe and characterize the problematic entities whose natures must be fixed as a prelude to ethical analysis”. Settling what counts as life is thus not only a question for biology or philosophy, but is rather decided at the intersection between law and technology. Ontological surgery appears in many guises, but can, for example, also be found in current discussions of artificial intelligence, where sharp distinctions are made between human and artificial intelligence in order to secure the sovereignty, autonomy and rights of the former. In general, when technology discloses and enables new lifeforms, either by extending human life or by replicating it in artificial forms, a procedure of ontological surgery is initiated. However, the surgery is only completed and acknowledged when the normative status of the new forms of being that appear from it is settled and they are entitled to legal rights. It is through this interplay between technology and law that new forms of being enter into society. Technology both enables and restricts legal innovation, and law both enables and restricts technological innovation; they are mutually constitutive. According to Jasanoff (2011b), technologies should therefore be regarded as constitutional, as they often, and at least potentially, carry with them ontological transformations that “burrow so deep into the foundations of our social and political structures that they necessitate, in effect, a rethinking of law at a constitutional level”. As constitutions are attached to nation-states, national comparison has been a fruitful method for STS scholars to show how seemingly universal technologies often get nationalized through culturally bound styles of public reasoning and legislation, and hence localized procedures of ontological surgery. In the United Kingdom, for example, life begins when the embryo is 14 days old; in Germany, at the moment when sperm penetrates the egg (Jasanoff, 2011a, p. 67). Or, to give another example, in the European Union, citizens are protected by far-reaching data and privacy laws (the GDPR), while the data of American citizens flow in a much more unregulated and fragmented manner. The fundamental ways in which citizens relate to each other and to the institutions that govern them are thus often settled at the nexus between law and technology.

5. CONCLUSION

My ambition with this chapter has been to mobilize STS as a conceptual resource to better understand the complex relationship between law and technology. If there is one underpinning insight that runs through the various ideas that I refer to in this chapter as STS jurisprudence, it is STS’ stubborn insistence that law and technology should not be regarded as separate domains, but rather as mutually constitutive and overlapping fields that construct and reconstruct social order. The mistake of separating law and technology is present on both sides of this artificial divide. On the law side, the strong emphasis on individual rights and legal subjects prevents the inclusion of hybrids as fully-fledged members of society, that is, as members who are at the same time protected by legal rights and can be held responsible for their actions. Such inclusion would, for example, involve discussions of the responsibility of the gun industry in killings and the responsibility of all of us who use computers that depend on conflict minerals extracted from the global South.

On the technology side, law is often considered as either stifling innovation with too much regulation or hopelessly lagging behind. As I have shown, both sides are wrong, or rather, the separation is wrong. According to the STS jurisprudence that I have proposed in the chapter, technology and humans cannot be separated, just as technology is never just a neutral means to an end. Confronted with the new possibilities and combinations that are constantly opened by technology, humans and societies are compelled to choose which avenues to pursue and which doors to close. While it might be tempting to understand technological development in a deterministic manner as inevitable, such choices are always already normative and hence belong to legal thinking. On the other hand, technology always already operates in an established, complex infrastructure of norms and legislation that determine what is possible and what is not. Despite Jasanoff’s pioneering work, the intersection of STS and law is still an uncharted theoretical and empirical territory, particularly when it comes to how to put STS jurisprudence into practice. However, as I have attempted to show, a better understanding of how technology and law are co-producing social order could lead to both better technologies and better legislation. As science and technology pervade ever more aspects of life and society, it becomes impossible, or at least careless, for STS scholars to ignore the legal aspects of their various subjects. The most important lesson that lawyers can learn from STS is that science and technology are probably closer relatives to their own field than they think. Like law and regulation, they shape the normative foundations of society and people’s lives; as they render invisible things visible and open possibilities for actions that were before unthinkable, they are in their widest sense constitutional. But law does not have to come after, or lag behind, science and technology, as the popular narrative would have it. There is no logic that prevents law from being just as visionary as science and technology, and thus proactively shaping and furnishing the societies we inhabit. Knowing that science and technology are not superior to law, and not natural processes guided by an inherent, deterministic logic, could set the legal imagination free and perhaps provide healthier, more innovative law-making.

REFERENCES

Aquinas, T. (1989). Summa Theologiae: A Concise Translation. Translated and edited by T. McDermott. Notre Dame, IN: Christian Classics.
Aristotle. (2019). Aristotle’s Art of Rhetoric. Translated and edited by R.C. Bartlett. Chicago, IL: University of Chicago Press.
Bijker, W.E., Hughes, T.P. & Pinch, T. (Eds.). (2012). The Social Construction of Technological Systems, Anniversary Edition. Cambridge, MA: MIT Press.
Bloor, D. (1991). Knowledge and Social Imagery. Chicago, IL: The University of Chicago Press.
Bush, V. (1945). Science, the Endless Frontier, 2nd ed. Washington: United States Office of Scientific Research and Development.
Carson, R. (2002). Silent Spring. New York City: First Mariner Books.
Collins, H.M. (1981). Stages in the empirical programme of relativism. Social Studies of Science, 11(1), 3–10.
Fleck, L. (1979). Genesis and Development of a Scientific Fact. Translated by F. Bradley and T.J. Trenn. Chicago, IL: The University of Chicago Press.
Haraway, D. (1997). Modest_Witness@Second_Millennium.FemaleMan©_Meets_OncoMouse™. London: Routledge.
Harman, G. (2009). Prince of Networks: Bruno Latour and Metaphysics. Melbourne, AUS: re.press.

Jasanoff, S. (1997). Science at the Bar: Law, Science, and Technology in America. Cambridge, MA: Harvard University Press.
Jasanoff, S. (Ed.). (2004). States of Knowledge: The Co-Production of Science and Social Order. London: Routledge.
Jasanoff, S. (2005). Law’s Knowledge: Science for Justice in Legal Settings. American Journal of Public Health, 95(1), 49–58.
Jasanoff, S. (2008). Making Order: Law and Science in Action. In E.J. Hackett, O. Amsterdamska, M. Lynch & J. Wajcman (Eds.). The Handbook of Science and Technology Studies (pp. 761–786). Cambridge, MA: MIT Press.
Jasanoff, S. (2011a). Making the Facts of Life. In S. Jasanoff (Ed.). Reframing Rights: Bioconstitutionalism in the Genetic Age (pp. 59–84). Cambridge, MA: MIT Press.
Jasanoff, S. (2011b). Introduction: Rewriting Life, Reframing Rights. In S. Jasanoff (Ed.). Reframing Rights: Bioconstitutionalism in the Genetic Age (pp. 1–28). Cambridge, MA: MIT Press.
Kochan, J. (2010). Latour’s Heidegger. Social Studies of Science, 40(4), 579–598.
Kuhn, T. (1996). The Structure of Scientific Revolutions. Chicago, IL: The University of Chicago Press.
Latour, B. & Woolgar, S. (1979). Laboratory Life: The Construction of Scientific Facts. Princeton, NJ: Princeton University Press.
Latour, B. (1993). We Have Never Been Modern. Translated by C. Porter. Cambridge, MA: Harvard University Press.
Latour, B. (1994). On Technical Mediation: Philosophy, Sociology, Genealogy. Common Knowledge, 3(2), 29–64.
Latour, B. (2010). The Making of Law: An Ethnography of the Conseil d’Etat. Cambridge: Polity Press.
Merton, R. (1979). The Normative Structure of Science. In N.W. Storer (Ed.). The Sociology of Science: Theoretical and Empirical Investigations. Chicago, IL: The University of Chicago Press.
Nielsen, J.H. & Lindvall, J. (2021). Trust in Government in Sweden and Denmark During the COVID-19 Epidemic. West European Politics, 44(5–6), 1180–1204.
Pecharroman, L.C. (2018). Rights of Nature: Rivers That Can Stand in Court. Resources, 7(1), 23–37.
Schiølin, K. (2012). Follow the Verbs! A Contribution to the Study of the Heidegger-Latour Connection. Social Studies of Science, 42(5), 776–787.
Schiølin, K. (2020). Revolutionary Dreams: Future Essentialism and the Sociotechnical Imaginary of the Fourth Industrial Revolution. Social Studies of Science, 50(4), 542–566.
Turner, S. (2008). The Social Study of Science Before Kuhn. In E.J. Hackett, O. Amsterdamska, M. Lynch & J. Wajcman (Eds.). The Handbook of Science and Technology Studies (pp. 33–62). Cambridge, MA: MIT Press.
Warnock, M. (1985). A Question of Life: The Warnock Report on Human Fertilisation and Embryology. Oxford: Blackwell.
Weigel, M. (2018). Silicon Valley’s Sixty-Year Love Affair with the Word ‘Tool’. The New Yorker (April 11). Retrieved 21.12.2022 from https://www.newyorker.com/tech/annals-of-technology/silicon-valleys-sixty-year-love-affair-with-the-word-tool

23. An outsider’s view on law and technology
Hans-W. Micklitz1

1. ON “WHY”

When the editors approached me and asked me to contribute a chapter on an “outsider’s view on law and technology”, I asked myself: why have they thought of me as an outsider? I suspect they rightly started from the premise that I am neither born digital nor academically socialised in the digital world. The latter privilege is limited to those who grew up with the new technology as kids and whose academic life ran in parallel with the steadily increasing importance of digitisation for the economy and society. I belong to the category of the latecomers who discovered the convenience of the internet for teaching, research, and communication but who did not build their career on computation and law. Neither was I one of the few lawyers who were well placed when this strand of legal scholarship met reality through the emergence of big data analytics. My interest in the new technology increased when I realised that big data analytics is not only a risk for the autonomy of the individual but also an opportunity for building countervailing power. But I did not have the skills to translate an idea into reality. At that moment, I crossed paths with Przemek Pałka, Giovanni Sartor, and Francesca Lagioia. Together with Marco Lippi, we developed “Claudette”,2 a computational tool to automatically identify potentially unfair clauses in online platforms’ terms of service and, later, in privacy policies (Micklitz, Pałka & Panagis, 2017, pp. 1–22; Lippi, Contissa, Jablonowska, Lagioia, Micklitz, Pałka, Sartor & Torroni, 2020, pp. 169–190). The hope was that such a tool could serve consumer organisations and public agencies to improve consumer law enforcement in the digital environment. Therefore, my interest in new technologies, digitisation, big data analytics, machine learning, and, more generally, computer science is instrumental. I am interested in what kind of added value they can bring to society and, from the theoretical-conceptual perspective, how these technologies affect the foundations of the private law system: person, contract, tort, property, and remedies. Przemek Pałka and I started thinking about this, and both of us engaged with some foundational issues: Przemek Pałka with “contract” (Pałka, 2018, pp. 135–162) and myself with “person”.3 However, unfortunately, we have not yet managed to prepare a joint publication, which would have been an exercise merging the views of the outsider (me) and the insider (him). Therefore, it might be correct to classify me as an outsider, which means in reverse that the editors, Przemek Pałka among them, understand themselves as insiders.
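To give a concrete, if deliberately simplified, flavour of what a tool of this kind does at its core, the following is a minimal sketch of a supervised clause classifier. The sample clauses, the labels, and the choice of a TF-IDF plus linear support-vector-machine pipeline are my illustrative assumptions, not a description of Claudette itself; the actual system is documented in Lippi et al. (2020).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical hand-labelled training clauses (1 = potentially unfair, 0 = fair)
clauses = [
    "We may suspend or terminate your account at any time, for any reason.",
    "You may cancel your subscription at any time from your settings page.",
    "We can change these terms at any moment without notifying you.",
    "We will notify you by email before any change to these terms takes effect.",
]
labels = [1, 0, 1, 0]

# Word and bigram TF-IDF features feeding a linear support vector machine
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(clauses, labels)

# Flag a previously unseen clause for human review
print(model.predict(["The provider may unilaterally modify the service."]))
```

The point is not the particular model but the workflow: hand-labelled examples go in, and previously unseen clauses come out flagged for human review, which is precisely the kind of support consumer organisations could use in enforcement.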

1  The chapter received funding from Giovanni Sartor’s ERC grant on CompuLaw – Governance of Computational Entities through an Integrated Legal and Technical Framework, https://site.unibo.it/compulaw/en/project
2  www.claudette.eui.eu
3  H.W. Micklitz, Deconstructing and Reconstructing the Person through EU Law, paper presented at the Universities of Amsterdam, Bologna, and Krakow, manuscript on file with the author.


In the following, I will deconstruct, step by step, the mandate given to me. First, I will consider what “a view” is, or how to understand it; what a view on law “and” technology implies; and, last but not least, what an “outsider” is. Finally, in this last step, I ponder how the outsider can be distinguished from an “insider”, what role the distinction does, or could, play in “society”, and what kind of “responsibilities” can be deduced from such statuses in relation to what might be one of the most critical challenges of our times, next to climate change.

1.1 On “View”

What is “a view”? Why am I not asked to write about law and technology or the interaction between outsiders and insiders? The editors wanted to have an outsider’s “view”, luckily only a “view” and not a “critical” view, which would have forced me to think about what is “critical”. In my eagerness, I immediately thought that “view” is associated not with colloquial language but with the meaning given to it in computer science. I found two definitions: “A view is a subset of a database that is generated from a user query and gets stored as a permanent object”,4 and

A VIEW is a virtual table, through which a selective portion of the data from one or more tables can be seen. Views do not contain data of their own. They are used to restrict access to the database or to hide data complexity. A view is stored as a SELECT statement in the database. DML operations on a view like INSERT, UPDATE, DELETE affects the data in the original table upon which the view is based.5
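For readers who have never encountered a database view, a minimal sketch may make the computer-science sense concrete. The snippet below uses Python’s standard sqlite3 module; the table, column, and view names are invented purely for illustration.

```python
import sqlite3

# An in-memory database with a hypothetical table of contract clauses
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE clauses (id INTEGER PRIMARY KEY, provider TEXT, "
    "text TEXT, is_unfair INTEGER)"
)
conn.executemany(
    "INSERT INTO clauses (provider, text, is_unfair) VALUES (?, ?, ?)",
    [
        ("ExampleCorp", "We may terminate your account at any time.", 1),
        ("ExampleCorp", "You can export your data whenever you wish.", 0),
    ],
)

# The view stores no data of its own: only the SELECT statement is kept,
# exposing a selective portion of the underlying table
conn.execute(
    "CREATE VIEW unfair_clauses AS "
    "SELECT provider, text FROM clauses WHERE is_unfair = 1"
)

# Querying the view behaves exactly like querying a table
for provider, text in conn.execute("SELECT provider, text FROM unfair_clauses"):
    print(provider, text)
```

Deleting a row from the underlying table automatically changes what the view shows, which is what the second definition means when it says that views “do not contain data of their own”.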

These two definitions might suffice to illustrate how colloquial language changes its meaning when transferred into a particular context, here computer science. Obviously, however, this is not the kind of “view” expected from me. That is why I returned to colloquial language. The Oxford Dictionary distinguishes between seven different meanings, of which the first four seem to be relevant:6 (1) what you can see from a particular place or position, especially beautiful countryside; (2) when you are talking about whether you can see something or whether something can be seen in a particular situation; (3) a personal opinion about something, an attitude towards something; (4) a way of understanding or thinking about something. However, it seems the editors wanted to have “my opinion”, which does not render my task any easier, as “an opinion” too may have different meanings. In law, an “opinion” stands for a qualified and thorough analysis of a particular legal issue, such as the opinions of the Advocate General before the Court of Justice of the European Union. Whilst an opinion is always personal, lawyers, in whatever role they are writing such opinions, use a legal methodology to objectivise the personal dimension and embed it into a legal professional context. My view on law and technology combines the legal and the colloquial world. Why? The editors do not address me just as a colleague but also attribute a particular role to me, and they tie the role to certain expectations; the opinion should be that of an outsider. It is therefore personalised, subjectivised, and individualised. Although there might be a long list of potential candidates for outsiders, depending on what an outsider might be, they contacted me. Thinking about the differences between personalisation, subjectification, and individualisation forms an integral part of my theoretical-conceptual interests, but this is not the place to go deeper.

4  https://techpedia.in/
5  https://www.examveda.com/
6  https://www.oxfordlearnersdictionaries.com/definition/english/view_1?q=view

2. ON LAW “AND” TECHNOLOGY

The “law and …” debate dominated the 20th century. At the beginning of the 20th century, it was law and society – the origin of the sociology of law or legal sociology – then law and economics between the two World Wars, and later law and psychology. As a result, the academic literature on “law and …” fills entire libraries.7 Today, the different strands run in parallel, currently with a strong emphasis on law, economics, and psychology, whereas law and society/sociology has suffered a backlash. Law and technology is just another variant of the “law and …” saga. The invention of the steam engine, the railway, automobiles, and aircraft changed the relationship between law and technology dramatically. The need to manage human-made risks at the workplace and in society revolutionised tort law by introducing strict liability regimes (Josserand, 1936, p. 5).8 Further, the regulation of automobiles has been of paradigmatic importance for the relationship between law, technology, society and risk management. The invention of “locomotives” motivated the UK legislator to tame potential risks to pedestrians through the famous “red flag” regulation. The Locomotive Act of 1865 and the Highways and Locomotives (Amendment) Act of 1878 contained restrictive measures on the manning and speed of operation of road vehicles. The strictest restrictions and speed limits were imposed by the 1865 Act (the “Red Flag Act”), which required all road locomotives, including automobiles, to travel at a maximum speed of 4 mph (6.4 km/h) in the countryside and 2 mph (3.2 km/h) in the cities. They also required a man carrying a red flag to walk in front of road vehicles hauling multiple wagons. The 1896 Act removed some restrictions of the 1865 Act and raised the speed limit to 14 mph (23 km/h).9 This is not all. Psychology-oriented commentators raised doubts as to whether, and to what extent, human beings would be able to master an automobile moving faster than 20 or 30 kilometres per hour (Merki, 2002). What was taken very seriously 100 years ago sounds rather ridiculous today. One should keep this in mind when we discuss today the potential risks of the information society for the people. The story of the automobile seems to indicate that humanity can master the risks of a new technology, and turn the new technology into a “force for good” that serves the “mind”, the “body”, the “senses”, the “heart”, and the “soul”.10 But does the parallel hold? Two further historical developments should be recalled, as they influence how the information society might and will be regulated by the European Union. The first relates to the way in which legislatures around the world managed to define safety standards able to guide the industry. Industrialisation led to the re-organisation of production processes through standardisation. Industrialisation boosted technical standardisation. Technical knowledge was outsourced from the public administration, such as the ministries, to private standardisation institutions: BSI, DIN, AFNOR, ISO, IEC, CEN, and CENELEC (Ladeur, 2011).

7  From an American perspective, see Nourse and Shaffer (2009).
8  Discussed in Micklitz (2021, p. 272) and Brüggemeier (2020, pp. 339–383).
9  https://en.wikipedia.org/wiki/Red_flag_traffic_laws
10  See Lobel (2022). The quotes are the headings of the different chapters.

382  Research handbook on law and technology its beginning, standardisation needed to include risks, particularly those resulting from steam engines and electricity use. The result of such a regulatory approach is the divide between the law on the one hand, and technical standards on the other, with the latter being effectively exempted from legal and judicial control, at least temporarily. The European Union internalised the lessons learnt at the beginning of the 20th century in its 1985 New Approach on Technical Standards, since 2012, renamed New Legislative Framework.11 The European Union’s legislation is limited to laying down binding legal requirements, inter alia on product safety, whilst the standardisation bodies are in charge of transforming these legal requirements into non-binding technical standards. Self-certification of compliance with the nonbinding technical standards suffices to be granted access to the EU Internal Market. Currently, the European Union is developing a broad legal framework for the regulation of the digital economy, which heavily relies on the New Approach type of thinking in dealing with the potential risks of artificial intelligence. With this move, the divide between law and technology is upheld. The legally binding EU regulations require respect for “mental health” and “fundamental rights”, the meaning of which should be concretised by the standardisation bodies (Micklitz, 2023). The risk-based regulatory approach developed for the old industries is being transferred to the digital economy. What does this mean for law and the lawyers, or for technology and computer scientists? The regulatory approach seems to contradict the mantra which dominates academic research. The advocated solution does not build on the divide between law and technology, but relies on the potential of mutual learning through intense cooperation. In a perfect world, such an approach would require that lawyers have at least a basic understanding of the technology. At the same time, computer scientists should get familiarised with legal thinking in order to be able to respect the legal requirements already at the stage of developing software and other systems. Applying the terminology developed in this chapter, one could see such forms of cooperation as reserved for the insiders, for those lawyers who speak the language of the computer scientists and who can follow their thinking. This does not entail understanding the intricacies of computer science in all detail but does require the ability to comprehend the basic direction. Similarly, only computer scientists who understand the basics of legal thinking would be the insiders (Gasser & Schmitt, 2020). The European Union’s approach, however, limits the role of law and lawyers in two ways. First, lawyers are not necessarily involved in the design and production of software or other systems (the equivalent of technical standards). The approach presupposes instead that computer scientists learn to respect the law. Secondly, lawyers and their expertise come in only ex-post. Once the technical standards are there, they must undergo an approval procedure, which brings the law back in, though not necessarily applied by lawyers.12

11  Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, OJ L 316, 14 November 2012, pp. 12–33.
12  The approval procedure of technical standards under Reg. 1025/2012 is subject to controversial debate between those who advocate the primacy of law and those who defend the primacy of technology; see Eliantonio, M. & Volpato, A., The European System of Harmonised Standards: Legal Opinion for ECOS (11 March 2022), available at SSRN: https://ssrn.com/abstract=4055292 or http://dx.doi.org/10.2139/ssrn.4055292, against Redeker/Sellner/Dahs, executed by Kathrin Dingemann and Matthias Kottmann, Legal Opinion on the European System of Harmonised Standards, commissioned by the German Federal Ministry for Economic Affairs and Energy (“BMWi”), August 2020, https://www.bmwk.de/Redaktion/EN/Downloads/L/legal-opinion-on-the-european-system-of-harmonised-standards.pdf?__blob=publicationFile&v=3
13  See the contributions in the special issue of the Kritische Justiz dedicated to the 90th birthday of Wiethölter (2019, pp. 601–625).

Is this “good” or is this “bad”? Who is dominating whom? Does the law dominate the technology, or is the technology the law? Do lawyers dominate the computer scientists, or the computer scientists the lawyers? Or rather, is the difference between the academic mantra and the EU regulatory approach one of timing? The academic mantra advocates the involvement of lawyers in the development of technical standards, whereas the European Union’s regulatory approach trusts lawyers to exercise some sort of “legal control” only before the technical standards are published in the Official Journal. Both models require close cooperation, and both require that lawyers are competent enough to “understand”, in a deep sense of the word, the possible economic, political, and societal implications of a particular standard. Is this realistic? Is it legitimate? Is it good for the law and for lawyers, or should law and lawyers simply accept the divide between law and technology? In German, there is a saying: “Schuster bleib bei Deinen Leisten” (cobbler, stick to your last) – which means that each profession should stay with its particular skills and capacities. The consequences for the law and the legal system and – on a much deeper level – for the understanding between law and technology might be far-reaching, to say the least. Law would have to develop not only the criteria but also an appropriate methodology against which the technical standards, or even technology itself, including AI, can be measured. Does the “Eigen-Rationalität des Rechts” – the law’s own rationality – suffice to get to grips with a new technology? The question turns the century-long debate about “law and …” approaches upside down. The focus is no longer on proclaiming the need for the law to engage in dialogue with other disciplines, like sociology, economics, psychology, or technology.13 Rather, the current debates could be seen as proclaiming the need for technology – e.g. computer science – to engage with legal values. It is not so much law and technology, but technology and law. The order matters. The sequence underlines who should be influenced by whom. This is most explicit in debates on AI ethics. Importantly, however, the experience gained in regulating financialisation does not raise much hope that it is possible for the law to handle the high degree of technicity and to develop appropriate rules for legal and judicial review (Pistor, 2020; Sedano Varo, 2023). One of the key barriers which seems to stand in the way of law’s own rationality is the limits on cognitive perception which result from the transformation through technology. In its most radical form, one might question whether lawyers have the cognitive capacity to understand the technology. At the surface level, we – a group of researchers – are claiming that the information society triggers universal vulnerability in each and everybody. The digital architecture, so the argument goes, is structural and relational, and is not understandable for 95% of the population. Only the small community of computer scientists and lawyers who are born digital and digitally socialised have the capacity to “understand” (Helberger, Sax, Strycharz & Micklitz, 2022, pp. 175–200). This finding throws us back to Walter Benjamin’s masterful “The Work of Art in the Age of Mechanical Reproduction”, written in 1936, which is sometimes understood as the beginning of modern media theory.
Benjamin is not concerning himself with the law but, as the title indicates, with “The Work of Art”. He argued that art and its reception are themselves subject to change, especially through the development of photography and film. In his view, this happens through the possibility of mass reproduction on the one hand and, on the other hand, through a changed representation of reality, and thus through a changed collective perception. The work of art, he says, loses its aura as a result of these processes, which in turn change the social function of the media. The collective aesthetic that emerged through reproducibility offers the possibility of social emancipation but also harbours the danger of political appropriation of art. Benjamin understood his theory as being rooted in Marxist materialism, which was meant to overcome the bourgeois understanding of art as the product of creativity, genius, and mystery. This chapter is not the right forum to engage with the political intention behind his theory, nor with the difficulties he faced in getting his booklet published, nor with the critiques raised first by Horkheimer and then by Adorno.14 I am only interested in the question of whether his analysis of the (then) new technologies – the reproduction of film and photography – could still provide the analytical tools which could help understand the phenomena of digitisation, its potential impact on our cognitive capacities, or its emancipative potential. One might indeed feel tempted to draw a parallel between film and photography, and the vanishing divide between facts and fiction in the information society. Both affect our cognitive capacities. The transformation of collective perception led Benjamin to the conclusion that the “contemplative observer”, which he saw as the bourgeois prototype, ceases to exist as the technology renders such contemplation impossible (Crary, 1996, p. 30). Again, Benjamin focuses on art, on film and photography, not on a lawyer who is “viewing” the interrelationship between law and technology. Nevertheless, when translated to the context of my mandate, and in light of universal digital vulnerability, one could read Benjamin’s analysis as raising the question of whether lawyers like me – who are not born digital nor digitally socialised – lack the cognitive capacity to understand and reflect on the phenomenon we, the lawyers, are supposed to analyse. The normative conclusion for financialisation as well as digitalisation would then be that a technology which is not “understandable”, and which therefore cannot be submitted to a legality or a fairness test, does not meet basic democratic standards. This is the first strand of Benjamin’s thoughts. The second looks equally ambitious. Benjamin understood his theory as a “tool in the fight” – he speaks of “Kampfmittel” – against the capitalist understanding of art. He saw the potential of social emancipation by overcoming the bourgeois understanding of art – as creativity, genius, and mystery – through the new forms of art promoted by, and through, technology. Horkheimer and Adorno rejected the emancipatory potential of the new mass culture and the possibility of politicising aesthetics (Horkheimer, 1941; Adorno, 2003, p. 80). In their view, mass culture would only replicate the world as it was and thereby serve capitalist economic interests. The student revolt of 1968 triggered a revitalisation of Benjamin’s theory and transformed it into an overall critique of capitalist aesthetics in culture. The revival of the political economy of the law supports the need to go back to Benjamin (Kjaer, 2020).15 My own take is that the new technologies have the potential to undermine the autonomy of the individual if they are not turned into “A Force for Good”.
In this, I join forces with Adorno, without, however, denying the potential of the new technology for building countervailing power.

14  The information provided by Wikipedia is amazingly helpful.
15  In philosophy, R. Esposito is said to have built his theory of biopolitics on W. Benjamin; see R. Esposito, Bios: Biopolitics and Philosophy, 2008, http://faculty.las.illinois.edu/rrushing/470k/ewExternalFiles/Esposito-Bios%201.pdf

That is why it seems legitimate to ask whether the project I helped to build, “Claudette”, could be understood as a form of, and an opportunity for, “social emancipation” in the meaning of Benjamin, provided the parallel between the 1930s and the 2020s is defensible. But where is the emancipation? From what and from whom shall the users be emancipated? From the GAFAs of our time, to regain autonomy? Cambridge Analytica seems to provide evidence for Horkheimer’s theory of manipulation and to downgrade any idea of using the new technology for emancipation that could, for instance, happen by breaking up digital vulnerability and helping to restore the “integrity of the law”.16 The possible parallels between the technology of reproduction and the digitalisation of the economy and society in collective perception, and its potential for reinvigorating law in contrast to technology, await further deepening (Degli Esposti, Lagioia & Sartor, 2020).

3. ON SELF VS. FOREIGN PERCEPTION

Who is an outsider? What makes one an outsider? There is, or there might be, a clash between self- and third-party perception. Who decides in the case of a conflict between the two? (Private) Law provides an easy answer. (Private) Law relies on how you are perceived and not on how you want to be perceived, let alone your feelings and your motives. This is written into what German lawyers call “Rechtsgeschäftslehre” – the doctrine of juridical acts – but it is recognised more generally in all private law orders through rules on interpretation and the limited scope of avoidance. The individual and their emotions do not count; what counts is the behaviour on the market.17 The logic could equally be applied to how I am perceived. If I am perceived as an outsider by the legal community, I have to come to terms with that status, whether I like it or not. Technology and computer scientists, however, are on their way to breaking down the boundaries between behaviour (how you are perceived) and individual motives and emotions (who you are as an individual). One of the first widespread commercial uses of big data analytics has been the collection of data on our market behaviour. Theoretically, it would be possible to analyse my writings in order to find out how I behave, how I am dealing with law and technology, and how I can be successfully profiled or categorised. Personalisation gradually changed the direction from behaviour towards emotions and feelings, towards “me” in contrast to my “alter ego” on the net. The closer information technology approaches “me” as an individual personality, the easier it is to compare data on me with data on my alter ego, disclose possible contradictions, and maybe find out that I am an insider although I behave as an outsider, or vice versa. This is not all speculation. New business models of the major publishing houses build on the collection of data from users, on their motives and the reasons why they are interested in this or that paper.18 It makes a difference in the marketing of scientific knowledge whether the outsider is a fake insider or a fake outsider.

16  H.L.A. Hart, The Concept of Law, 1961, and the critique of R. Dworkin, Law’s Empire, 1985.
17  DCFR, Interpretation of other juridical acts, II.–8:201, and the reasons in Ch. von Bar and E. Clive (Eds.), Principles, Definitions and Model Rules of European Private Law: Draft Common Frame of Reference, Volume I, Sellier, p. 572.
18  HUM, Turn your disparate data into solid gold, 2022, https://www.hum.works/whitepaper-disparate-data

However, the problem remains that the distinction between insiders and outsiders is rather crude. In real life, there may be not only insiders and outsiders, but also hybrids such as in-outsiders and out-insiders. These mixed forms render the use of big data analytics difficult. The problem was encountered in the “Claudette” project, which attempted to create metrics for assessing whether a particular privacy policy is transparent or not. In real life, there might be shades of transparency, just as there are shades of injustice or shades of lawfulness.19 In law, the uncertainties regarding who I am – an outsider, an insider, or a hybrid – are discussed in the context of gender and identity. Do I have the right to be treated in the way I want to be perceived? As an outsider, as an insider, or even as somebody in between? Private law orders start from the person, a concept which covers all sorts of variations. The person, however, is an abstract configuration, which has to be distinguished from the individual with their emotions, feelings and needs. In the words of Guido Alpa:20

Guido Alpa analyses the centuries-long development of the construction of the person from Roman law, over the Middle Ages, until its deconstruction in the 20th century. The latter process happened through the rise of labour law, later consumer law, but in particular through the constitutionalisation of private law as a kind of progress, in which the law comes gradually closer to the rights and needs of the “individual”. He writes: In identifying the discrimination to which the individual may be subjected, the normative texts set out the identifying criteria and yet in applying them, the courts extend them. This is a consideration that concerns all courts dealing with human rights and fundamental rights (Marshall, 2014, p. 33). This consideration unites jurists who come from very different normative and cultural backgrounds. There is a common denominator: the right to personal identity, understood in an even broader sense than the one recognised by our jurisprudence under this terminology, i.e. the right to be “reacknowledged”, i.e. the right to have the identity that a subject has constructed for himself and the right to be accepted as one is… Now we are discussing whether we can go a step further, recognising and guaranteeing also the right to be oneself, which means the right to control the techniques of identification, the external image constructed by the person and the information circulating about the person. (emphasis added)

This is amazingly strong language and deserves to be recalled:

•  the right to personal identity,
•  the right to be re-acknowledged,
•  the right to be accepted as one is,
•  the right to be oneself.

19  Lagioia, Jabłonowska & Tagiuri, Rescuing Transparency in the Digital Economy: In Search of a Common Notion in EU Consumer and Data Protection Law, Yearbook of European Law, 2023, forthcoming.
20  G. Alpa, The Right to be Oneself, on file with the author.

It seems as if Durkheim’s Cult of the Individual (Marske, 1987, pp. 1–14) has found its legal counterpart. One might indeed assume that the deconstruction of the abstract legal person through the introduction of social-role-based concepts brings the law closer to the people – according to Guido Alpa, a whole bunch of rights culminate in the right to be oneself. This would mean that I have a legal right to claim to be treated as an insider, as an outsider, or as a hybrid. Such a right transferred to the information society, however, presupposes that I know how I am treated, for instance, by the major publishing houses. If I do not know how I am classified, I am unable to ask to be reclassified in some other direction. Therefore, the right to know precedes the right to be oneself. De lege lata, however, such a right does not exist, at least not under the GDPR, and only within limits under national legal orders (Spindler & Seidel, 2018, pp. 2153–2157).

4. ON INSIDERS, OUTSIDERS, AND SOCIETY

So far, I have looked at my topic through the lenses of an individual person, either an outsider or an insider, myself as a designated outsider. However, insiders and outsiders are also societal categories: two camps standing in opposition to each other. The liberal (in the American sense) society enables societal self-constitution and self-constitutionalism. Karl-Heinz Ladeur (Ladeur, 2006) has convincingly shown that Böhm’s private law society (Böhm, 1966, pp. 75–151) is not one of individuals alone but, equally, one which enables, and is dependent on, collective self-organisation. The making and building of an academic digital community underpins both the existence and the use of liberal freedoms in self-organisation. In the academic environment, community building goes hand in hand with the establishment of communication circles, typically via webinars or online meetings. Moreover, it happens together with the setting up of publication organs, whether digital or on paper, which serve as a window to the broader academic community, but which are typically monitored through an editorial team and an editorial board whose members are regarded as belonging to the digital community. Becoming a member of the digital community requires a soft form of accreditation. The simple wish to join the digital community does not suffice. It requires societal and academic acceptance. The sentiment of belongingness holds the newly established community together. Speaking, using and developing a particular legal language belongs to the sense of togetherness. All these elements can easily be identified in the digital community. Community building, however, inherently requires the insistence on difference. In our case, it is the difference from the traditional community of lawyers. Interestingly, the latter is not homogeneous, as the traditional community can be broken down into a private law community, an international private law community, a comparative law community, a financial law community, a consumer law community, and so on. The same subdivisions can be found in the field of public law. However, the distinguishing feature between the traditional community and the new community is the focus on digitalisation. The digital community sends the traditional community a clear-cut message: we are different from you, and we are the ones who have the competence to speak seriously about digitalisation. Therefore, establishing a new academic community happens together with a kind of intellectual rupture, inspired and motivated by otherness. Rupture does not necessarily mean that the digital age breaks with the analogue age. Rupture means that a new way of thinking is needed, one which is not included in the traditional community of lawyers, one which is different from the outsiders. Typically,

a new academic community claims that everything is new, that a kind of revolution is taking place, which requires new legal categories and new legal thinking.21 The relationship between traditional and new communities demonstrates that the distinction between outsiders and insiders is one of perspective. The traditional community of lawyers regards the newcomers as outsiders, as they distinguish themselves from the then-dominant intellectual spirit. The traditional community is held together by exactly the opposite type of thinking: “nothing new under the sun”. The new technologies could be integrated into the current legal system, the thinking goes, given the strong conviction that the existing legal toolbox suffices to handle the new phenomena. There is no rupture, there is evolution; there is no new legal thinking needed, all that has to be done is to integrate the new technology into the existing body of law. The existing body of law managed to cope with the Industrial Revolution and the then new technologies; therefore, the current set of rules is equally apt to handle the recent technological developments, perhaps subject to some minor adjustment, in line with the evolutionary character of the law.22 Once established, the new digital community draws a distinction between those who belong to it and those who belong to the old community. The new community then establishes a kind of insider thinking, which is perfectly documented in the mandate given to me. The editors understand themselves as insiders and me as an outsider, whereas the traditional community will look at the digital community as outsiders, and they might look at me as an insider. The distinction and the insistence on otherness is constitutive. It holds both groups together: the new digital legal community and the old traditional legal community. Being termed “an outsider” is a form of stigmatisation. Again, this works in both directions, depending on perspective. The new legal community will regard the old community as outsiders, as not belonging to “them”, and the old legal community will treat the new legal community as outsiders, defending a legal thinking which prevents belongingness. The outsider, as is well known and well accepted in social science, stabilises the group of insiders (Goffman, 1963). Seen this way, the pejorative connotation of stigmatisation fades away and is replaced by a positive role associated with the outsider. The stigmatisation and the stabilising role work in both directions. The new digital community stabilises the traditional community, and the traditional community strengthens the sense of belongingness in the digital community.

5. ON BEING AN OUTSIDER

I am comfortable being treated as an outsider. This stigma is not negative, and I do not feel disqualified. I began my academic career at the University of Hamburg (then the Hochschule für Wirtschaft, later integrated into the University of Hamburg) and in Bremen, at the Centre for European Legal Policies. These two institutions were treated by the traditional legal community

as standing outside the mainstream due to their left-wing (red) orientation. The invitation to write an outsider’s view made me think again about the role and function of being treated as an outsider. I understood it as a life-long privilege. Insiders tend to claim that their legal community sets the discourse for the current times. Throughout my career, I found it academically more rewarding to swim “against the tide” and, together with Rob van Gestel, I defended this kind of thinking in our paper on “Why Methods Matter” (Van Gestel & Micklitz, 2014, pp. 292–316). I would like to conclude my considerations on insiders and outsiders with Miłosz’s23 categorisation of the intellectuals in the Stalinist period, which I believe is still of relevance today. I am grateful to Przemek Pałka for having closed my knowledge gap. Miłosz distinguishes between Alpha – the moralist, Beta – the disappointed lover, Gamma – the slave of history, and Delta – the troubadour. Each of us belongs to our particular intellectual community. At some point in time, each of us has to clarify our role, not necessarily within the particular community in which we are working, but with regard to our position in science. As I wrote elsewhere, I sympathise with the role of the troubadour (Micklitz, 2020, pp. 205–227). At worst, the troubadour might be called an opportunist who changes sides, from the outsider to the insider and back, if needed. At best, the troubadour is somebody who does not take himself too seriously, as he always keeps a certain distance from moral over-identification (Alpha), from over-investing feelings and emotions (Beta), and from over-relying on historical commitments (Gamma). In that sense, one might associate the position of the troubadour with that of an observer. I think this is a wonderful role.

BIBLIOGRAPHY

Adorno, T.W. (2003). Über Jazz. In Derselbe: Musikalische Schriften: Moments musicaux. Impromptus (Band 17 der Gesammelten Schriften). Frankfurt am Main: Suhrkamp.
Böhm, F. (1966). Privatrechtsgesellschaft und Marktwirtschaft. Jahrbuch für die Ordnung von Wirtschaft und Gesellschaft, 17, 75–151.
Brownsword, R. (2020). Law 3.0: Rules, Regulation, and Technology. London: Routledge.
Brüggemeier, G. (2020). The Civilian Law of Delict: A Comparative and Historical Analysis. European Journal of Comparative Law and Governance, 7, 339–383.
Crary, J. (1996). Techniken des Betrachters: Sehen und Moderne im 19. Jahrhundert. Dresden: Verlag der Kunst.
Degli Esposti, M., Lagioia, F. & Sartor, G. (2020). The Use of Copyrighted Works by AI Systems: Art Works in the Data Mill. European Journal of Risk Regulation, 11(1), 51–69. doi:10.1017/err.2019.56
Dingemann, K. & Kottmann, M. (2020). Legal Opinion on the European System of Harmonised Standards, Commissioned by the German Federal Ministry for Economic Affairs and Energy. Retrieved from https://www.bmwk.de/Redaktion/EN/Downloads/L/legal-opinion-on-the-european-system-of-harmonised-standards.pdf?__blob=publicationFile&v=3
Dubber, M.D., Pasquale, F. & Das, S. (Eds.). (2019). The Oxford Handbook of Ethics of AI. Oxford: Oxford University Press. Retrieved from https://ssrn.com/abstract=3378267 or http://dx.doi.org/10.2139/ssrn.3378267
Eliantonio, M. & Volpato, A. (2022, March 11). The European System of Harmonised Standards: Legal Opinion for ECOS. Retrieved from https://ssrn.com/abstract=4055292 or http://dx.doi.org/10.2139/ssrn.4055292
Gasser, U. & Schmitt, C. (2020). The Role of Professional Norms in the Governance of Artificial Intelligence. In M.D. Dubber, F. Pasquale & S. Das (Eds.). The Oxford Handbook of Ethics of AI (pp. 141–159). Oxford: Oxford University Press.

23  See Cz. Miłosz, The Captive Mind, 1953.

Goffman, E. (1963). Stigma: Notes on the Management of Spoiled Identity. New York: Simon and Schuster.
Helberger, N., Sax, M., Strycharz, J. & Micklitz, H.-W. (2022). Choice Architectures in the Digital Economy: Towards a New Understanding of Digital Vulnerability. Journal of Consumer Policy, 45(2), 175–200.
Horkheimer, M. (1941). Art and Mass Culture. Studies in Philosophy and Social Science, 9(2), 290–304.
Josserand, L. (1936). L’évolution de la responsabilité (conférence donnée aux Facultés de Droit de Lisbonne, de Coimbre, de Belgrade, de Bucarest, d’Orades, de Bruxelles, à l’Institut français de Madrid, aux centres juridiques de l’Institut des Hautes Études marocaines à Rabat et à Casablanca). In L. Josserand (Ed.). Évolutions et Actualités: Conférences de Droit Civil. Paris: Recueil Sirey.
Kjaer, P. (Ed.). (2020). The Law of Political Economy. Cambridge: Cambridge University Press.
Ladeur, K.-H. (2006). Der Staat gegen die Gesellschaft: Zur Verteidigung der Rationalität der Privatrechtsgesellschaft. Tübingen: Mohr Siebeck.
Ladeur, K.-H. (2011). The Evolution of General Administrative Law and the Emergence of Postmodern Administrative Law. Comparative Research in Law & Political Economy, Research Paper, 16, 1–55.
Lagioia, F., Jabłonowska, A., Liepina, R. & Drazewski, K. (2022). AI in Search of Unfairness in Consumer Contracts: The Terms of Service Landscape. Journal of Consumer Policy, 45, 481–536. Retrieved from https://doi.org/10.1007/s10603-022-09520-9
Lippi, M., Contissa, G., Jablonowska, A., Lagioia, F., Micklitz, H.-W., Pałka, P., Sartor, G. & Torroni, P. (2020). The Force Awakens: Artificial Intelligence for Consumer Law. Journal of Artificial Intelligence Research, 67, 169–190.
Lobel, O. (2022). The Equality Machine: Harnessing Digital Technology for a Brighter, More Inclusive Future. Hachette Book Group.
Marshall, J. (2014). Human Rights Law and Personal Identity. Abingdon: Routledge.
Marske, Ch.E. (1987). Durkheim’s “Cult of the Individual” and the Moral Reconstitution of Society. Sociological Theory, 5, 1–14.
Merki, Ch.M. (2002). Der holprige Siegeszug des Automobils 1895–1930: Zur Motorisierung des Straßenverkehrs in Frankreich, Deutschland und der Schweiz. Weimar: Böhlau.
Micklitz, H.-W. (2020). The Transformative Politics of European Private Law. In P. Kjaer (Ed.). The Law of Political Economy (pp. 205–227). Cambridge: Cambridge University Press.
Micklitz, H.-W. (2021). Risk, Tort and Liability. In S. Grundmann, H.-W. Micklitz & M. Renner (Eds.). New Private Law Theory: A Pluralist Approach. Cambridge: Cambridge University Press.
Micklitz, H.-W. (2023). The Role of Technical Standards in the EU Digital Policy Legislation. Report Commissioned by BEUC and ANEC (to be published online in July 2023).
Micklitz, H.-W., Pałka, P. & Panagis, Y. (2017). The Empire Strikes Back: Digital Control of Unfair Terms of Online Services. Journal of Consumer Policy, 40, 367–388.
Nourse, V. & Shaffer, G. (2009). Varieties of New Legal Realism: Can a New World Order Prompt a New Legal Theory? Cornell Law Review, 95, 61–138. Retrieved from https://scholarship.law.cornell.edu/clr/vol95/iss1/8
Pałka, P. (2018). Terms of Service Are Not Contracts: Beyond Contract Law in the Regulation of Online Platforms. In S. Grundmann (Ed.). European Contract Law in the Digital Age (pp. 135–162). Cambridge: Intersentia.
Pistor, K. (2020). The Code of Capital: How the Law Creates Wealth and Inequality. Princeton: Princeton University Press.
Sedano Varo, E. (2023). Financialisation and the Vanishing of the Rule of Law (Doctoral dissertation, European University Institute). Retrieved from https://cadmus.eui.eu/handle/1814/75293
Spindler, G. & Seidel, A. (2018). Die zivilrechtlichen Konsequenzen von Big Data für Wissenszurechnung und Aufklärungspflichten. Neue Juristische Wochenschrift, 70, 2153–2157.
Teubner, G. (2019). Wirtschaftsverfassung I, II zum selbstgerechten Rechtsverfassungsrecht: Zur Kritizität von Rudolf Wiethölters kritischer Systemtheorie. Kritische Justiz, 52(4), 601–625.
Twigg-Flesner, C. (2016). Disruptive Technology – Disrupted Law? How the Digital Revolution Affects (Contract) Law. In A. De Franceschi (Ed.). European Contract Law and the Digital Single Market (pp. 21–48). Antwerp: Intersentia.
Van Gestel, R. & Micklitz, H.-W. (2014). Why Methods Matter in European Legal Scholarship. European Law Journal, 20, 292–316.

PART IV
CHALLENGES

24. Autonomous weapons
Magdalena Pacholska1

1. INTRODUCTION

The term “autonomous weapons” is a commonly adopted shorthand for weapon systems with autonomous functionalities (AWS) (Work, 2021), also alarmingly referred to as “killer robots” (HRW & IHRC, 2012, p. 1). Despite lacking a universally accepted definition, and despite being broadly considered not yet fielded in their full manifestation, AWS have been the subject of heated debate for over a decade. From the beginning, the discussion about the legality of AWS has been characterized by over-hyped narratives, highly polarized views, and the conflation of ethical and legal standards. Misconceptions or misunderstandings of military decision-making processes and legal institutions, coupled with the definitional deficiencies of AWS, often obfuscate the debate even further. The purpose of this chapter is to demystify AWS, present the landscape around them by identifying the main legal issues AWS might pose, and suggest solutions thereto whenever possible. While at times pointing out core arguments from other disciplines, the present analysis adopts an international law perspective; a multitude of technical,2 ethical (Asaro, 2020; Tamburrini, 2016), or political science-focused accounts (Marsh, 2022) continue to be presented elsewhere. Further, this chapter does not delve into the various domestic regulations of weapons development or export; it also steers clear of regional attempts to regulate various uses of artificial intelligence (AI), scrutinized in the preceding chapter. The chapter comprises six sections and proceeds as follows. To set the scene, Section 2 pinpoints and elucidates terms and concepts key to deciphering what AWS actually are. Section 3 provides an overview of the state of the art of autonomy in military equipment against the backdrop of human-machine teaming, a component of modern warfare for over a century. The next two sections delve into the ongoing work of the Group of Governmental Experts (GGE) on lethal AWS (LAWS) held under the auspices of the United Nations Convention on Certain Conventional Weapons (CCW) (1980), the main international political forum dealing with the matter at hand (Mauri, 2022).3 Section 4 offers a general outlook on the regulatory debate ongoing at the GGE on LAWS and its reception in legal scholarship, including its 2019 list of 11 Guiding Principles (Annex IV).4 Section 5 zooms in on the question of attribution of responsibility for internationally wrongful conduct resulting from the combat employment of AWS, considered by some as the pivotal legal peril ostensibly resulting from weapons’ autonomy. Section 6 concludes.


of AWS, considered by some as the pivotal legal peril ostensibly resulting from weapons' autonomy. Section 6 concludes.

One general remark needs to be made before proceeding with the analysis. International law, as it stands today, includes a combination of restrictions and prohibitions regulating the employment and use of weapons; in fact, it has long been recognized that "[t]he laws of war do not recognize in belligerents an unlimited power in the adoption of means of injuring the enemy."5 Broadly speaking, means of injuring the enemy, in contemporary international humanitarian law (IHL) referred to as "weapons, means and methods of warfare," are inherently unlawful if they fulfil at least one of the following conditions, stemming from treaty and/or customary law:

1. They are explicitly banned by a specific legal instrument, such as biological or chemical weapons (Joyner, 2009);
2. They are of "a nature to cause superfluous injury or unnecessary suffering" (Additional Protocol I (1977) (API), Article 35(2));
3. They are "intended, or may be expected, to cause widespread, long-term and severe damage to the natural environment" (API, Article 35(3));
4. They are by nature indiscriminate, that is "cannot be directed at a specific military objective or [their] effects cannot be limited as required by IHL" (Customary IHL Database, Rule 70).

When an item of military equipment is classified as inherently unlawful, its mere employment constitutes a violation of international law; there is no need to assess the legality of particular instances of use. The permissibility of all other weapons depends on whether they are used in conformity with IHL. To lawfully use any weapon, whether autonomous or not, for the purpose of an attack, belligerents must follow IHL, including the obligations to:

• distinguish between military objectives and protected civilians and civilian property (distinction),6
• evaluate whether the collateral civilian casualties or damage would not be excessive in relation to the anticipated concrete and direct military advantage (proportionality),7 and
• take all feasible precautions – including in the choice of means and methods of attack – to avoid, and in any event to minimize, collateral civilian casualties or damage (precautions).8

All States are responsible for ensuring that their armed forces, and the equipment they use, comply with IHL (Common Article 1).9 That obligation is operationalized in the form of the

5  See Brussels Declaration. (1874). Project of an International Declaration concerning the Laws and Customs of War. Brussels, August 27.
6  API, Articles 48, 51(2) and 52(2).
7  API, Articles 51(5)(b) and 57(2)(a)(iii).
8  API, Articles 57(2) and 58. See Switzerland. (2017). A "compliance-based" approach to Autonomous Weapon Systems. Geneva: Meeting of Group of Governmental Experts on LAWS. November 10.
9  See Geneva Convention Relative to the Protection of Civilian Persons in Time of War, opened for signature August 12, 1949, 75 UNTS 287 (entered into force October 21, 1950); Poland. (2015). Meaningful Human Control as a form of state control over LAWS. Geneva: Meeting of Group of Governmental Experts on LAWS. April 13.

so-called "weapons review obligation" (Jevglevskaja, 2018), encapsulated in Article 36 of the API,10 pursuant to which:

In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party. (emphasis added)
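The two-step structure of this test – the inherent lawfulness of the weapon itself, then the lawfulness of each particular use – can be rendered schematically. The following minimal Python sketch is purely didactic: the class, field, and function names are hypothetical, and each boolean stands in for an assessment that in practice requires expert legal judgment.

    from dataclasses import dataclass

    @dataclass
    class Weapon:
        explicitly_banned: bool            # e.g., biological or chemical weapons
        superfluous_injury: bool           # API, Article 35(2)
        severe_environmental_damage: bool  # API, Article 35(3)
        indiscriminate_by_nature: bool     # Customary IHL Database, Rule 70

    def inherently_unlawful(w: Weapon) -> bool:
        # Meeting any single condition makes mere employment a violation;
        # no assessment of particular instances of use is needed.
        return (w.explicitly_banned or w.superfluous_injury
                or w.severe_environmental_damage or w.indiscriminate_by_nature)

    def attack_lawful(w: Weapon, distinction: bool,
                      proportionality: bool, precautions: bool) -> bool:
        # All other weapons: each use must satisfy all three obligations.
        if inherently_unlawful(w):
            return False
        return distinction and proportionality and precautions

    # Example: a lawful weapon used without adequate precautions -> unlawful use
    print(attack_lawful(Weapon(False, False, False, False),
                        distinction=True, proportionality=True, precautions=False))

The sketch encodes the structural point made above: weapons' autonomy appears nowhere in the test, so an AWS is assessed exactly like any other weapon, first for inherent unlawfulness and then use by use.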

It is against this matrix of norms that the claims of the alleged (L)AWS illegality, advanced by some stakeholders and commentators, should be assessed. To do so with reason rather than inclination, more light should be shed on what various terms commonly used in the debates on autonomy in weapon systems actually mean; this is what the next section turns to.

2. AWS LEXICON: AN EMERGING SKETCH

Expert commentaries on GGE on LAWS meetings often point out that the participants seem to be confused about the subject of their deliberations. Given that the term "autonomous robot" may conjure up images ranging from no-threat applications such as Roombas or ASIMO,11 to the nightmarish scenario of Skynet-powered terminators roaming a post-apocalyptic hellscape, it is crucial to clarify the core terms to lay the groundwork for the subsequent analysis of AWS.

To start with, in military parlance, a "weapon" denotes an item of military equipment designed or used to cause harm to persons and damage to objects, while a "weapon system" refers to "a combination of one or more weapons with all related equipment, materials, services, personnel, and means of delivery and deployment, if applicable, required for self-sufficiency" (NATO Term, Record 777). In other words, alongside the weapon itself, a weapon system is commonly understood to also encompass munitions, launch platforms, sensors, communication systems as well as the people operating it (Scharre & Horowitz, 2015a).

The third semantic element of AWS – autonomy – evades an equally simple definition and carries disparate meanings in different disciplines (McFarland, 2020; Kowalczewska, 2021). Etymologically, "autonomous" is a conjunction of the Greek terms auto (self) and nomos (rule) connoting a form of freedom to govern itself without external control, which makes it almost indistinguishable from "automatic," denoting self-thinking (gr. matos). Despite the increasing alarm evoked by the purportedly increasing level of autonomy in weapon systems, the automatization of military equipment has steadily progressed without much ado for over a century.12 The International Committee of the Red Cross (ICRC) round-table meeting of independent experts explicitly asserted that there was "no clear technical distinction between

10  While the customary nature of AP I, Article 36 remains debated, many States which have not ratified AP I comply with it as a matter of policy and/or have transposed its gist into their respective domestic legislation (Jevglevskaja, 2018).
11  ASIMO is a world-famous humanoid robot introduced by Honda in 2000.
12  The first automatic (i.e., auto-reloading) machine gun was invented by Hiram Stevens Maxim in 1884; a variation of it was used by all major powers in the First World War.


automated and autonomous systems" (p. 7).13 On the one hand, "[t]he term 'automatic' is often used to refer to systems that have very simple, mechanical responses to environmental input," and "automated" denotes "more complex, rule-based systems" (Scharre & Horowitz, 2015a, p. 6). On the other hand, "autonomy" can be described as "a capability (or set of capabilities) that enables a particular action of a system to be automatic or, within programmed boundaries, 'self-governing'" (p. 1).14 As rightly pointed out in recent scholarship, the reference to programmed boundaries is the crux to understanding the nature of autonomy in machines; "that a machine is able to operate without human interaction does not mean there are no human-imposed restrictions on the machine's behaviour" (McFarland, 2020, p. 30). The range of the machine's possible behavior depends on its complexity, and, if presented on a spectrum, would run from a simple on/off switch to the machine's ability to "think" or "train" on its own.

The latter, high end of the spectrum brings up a tandem of terms that are inescapable during any discussion on AWS, namely AI and machine learning (ML). AI, as deployed in military equipment, is often defined as "the ability of machines to perform tasks that normally require human intelligence – for example, recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action – whether digitally or as the smart software behind autonomous physical systems" (p. 14).15 ML, in turn, is a sub-category of the most advanced AI, using algorithms to automatically process large volumes of ostensibly unrelated information to increase its efficiency (and efficacy) in solving the assigned tasks.16 Some ML-based systems, as of now,17 constitute "black boxes" in the sense that neither their programmers nor users can fully understand how, and why, a given output was produced.18 That feature of ML systems raises valid ethical concerns regarding the explainability and predictability of their outputs (Asaro, 2020; Boutin & Woodcock, 2022).

One additional concept, critical to understanding the following analysis, must be added to this already voluminous lexicon – targeting in the military context. Much of the scholarship on AWS gives the impression that targeting is a simple kinetic lethal action preceded by a binary target selection (target/not a target). This is incorrect. In modern advanced armed forces, targeting is a complex endeavor, commonly defined as "the process of selecting and prioritizing targets and matching the appropriate response to them, considering operational requirements and capabilities" (NATO Term, Record 17554). In the North Atlantic Treaty Organisation (NATO), for instance, targeting is a deliberate, interactive, and methodical decision-making cycle, consisting of six phases, reflected in Figure 24.1. While its detailed examination can be found elsewhere (Ekelhof, 2018), its key takeaway is the fact that the

13  See ICRC. (2019a). Autonomy, Artificial Intelligence and Robotics: Technical Aspects of Human Control, August 2019.
14  See United States Department of Defense. (2012). Task Force Report: The Role of Autonomy in DoD Systems, July 2012, Washington D.C. 20301-3140.
15  See NATO. (2020). Science & Technology Trends 2020–2040. Brussels, Belgium: NATO.
16  See NATO. (2020). Science & Technology Trends 2020–2040. Brussels, Belgium: NATO.
17  As of 2022, attempts to create a separate algorithm capable of peering into such a black box and explaining its inner workings are still ongoing, but AI systems can already be trained to pinpoint critical factors or determinative features of a raw data set that made them produce a specific result (Rudin et al., 2021).
18  See ICRC. (2019). Autonomy, Artificial Intelligence and Robotics: Technical Aspects of Human Control, August 2019.

development of feasible military objectives is triggered by the commander and controlled from the top down (Roorda, 2015). As the next section discusses, the ongoing outsourcing of certain elements of the targeting process to machines not only does not hinder the commander's command and control (C2) authority but in fact enhances it.

Figure 24.1  NATO's targeting process (Source: AJP-3.9, 1-14)

3. AUTOMATION AND AUTONOMY IN EXISTING WEAPON SYSTEMS

Arguments against AWS are often "made from the perspective of an idealized version of human control, divorced from the realities of war and how existing weapons use forms of autonomy" (Scharre & Horowitz, 2015a, p. 3). Autonomy, more precisely automation, has been introduced into weapons systems to augment human capabilities mainly with regard to three sub-phases of the targeting process – target identification, selection, and engagement. Complex as the targeting cycle is, its core is fairly simple: knowing who and what to shoot in a way that provides the best "bang for the buck." The ability to scout the positions of the enemy forces and their critical points has always been a commander's concern. Scouting, which over


time evolved into intelligence, surveillance, and reconnaissance (ISR) – nowadays crucial for the commanders' situational awareness – is a field of military practice probably most revolutionized by technological advances, starting in the Second World War with the advent of radar. Initial radars simplified threat detection by tracking aircraft and naval vessels at greater range and in worse conditions than visual identification, but still required manual transcription (often onto a physical map) and analysis of data. As technology progressed, the need for extensive human labor for data analysis decreased. Modern systems not only collect existing data and translate it into a detailed picture of the battlespace but can also identify potential threats based on pattern or behavior recognition, further easing the burden caused by massive amounts of available real-time sensor data needing near-real-time analysis by humans. Airborne warning and control systems (known as AWACS), for instance, allow for massive collection of data, automated concatenation and analysis of that data, automated tracking of potential threats, and automated sharing of threat data with weapons systems deployed throughout a wide area.

Alongside modernizing ISR, the military have automated elements of weapon systems related to target selection and engagement, as is best exemplified by the evolution from "dumb" bombs to guided and homing or loitering munitions (Work, 2021). While dumb bombs simply follow a ballistic trajectory, homing munitions can acquire targets based on an identifiable characteristic such as sound, sonar, or heat signatures and track them until destruction. Early incarnations, already allowing for extending target engagement beyond line of sight, included the 1979 CAPTOR deep water mine, designed to detect submarines (ignoring surface ships) with a sonar sensor; after being activated by a human operator, the mine would release an encapsulated torpedo that would follow and destroy the targets (Work, 2021). Homing missiles commonly utilize a variety of automated target recognition (ATR) systems, first developed back in the 1970s, which employ pattern recognition to identify targets – comparing a sound, heat, shape and height, or radio-frequency emission against a set, human-defined library of patterns that correspond to intended targets (Boulanin & Verbruggen, 2017). Target "selection" so understood, rudimentary forms of which emerged in advanced homing systems such as the CBU-105 Wind-Corrected Munition dispenser used during the 2003 Iraq invasion, was further developed in weapons known as bounded search weapons, sometimes referred to as loitering munitions (Work, 2021). Simply put, these munitions, capable of flying to a significant but pre-determined area and searching for a class or type of targets for a set amount of time, are typically used when the target location and the time the target will be there are not precisely known. Examples abound; one of the best known, the Israeli Harpy, first developed back in the 1980s, once activated, operates independently of direct human involvement and, after flying to a predetermined location, uses its anti-radar seeker to select and engage targets emitting a matching radar signal (van den Boogaard & Roorda, 2021). As a type of munition capable of engaging targets belonging to a specific class or category, loitering munitions are prone to being described as "choosing" targets, and as such being truly autonomous; this is often misleading (McFarland, 2020).
Recently, for example, much has been made of the Turkish STM-Kargu system, whose use in Libya in 2020 was alleged to constitute the first fully autonomous LAWS attack against human targets (Hayir, 2022). However, a closer look at the functioning of the system and its use reveals that it is a typical loitering munition, using ATR to detect pre-identified targets once in the vicinity of pre-identified target coordinates – in this case, logistics convoys accompanied by legitimate human military targets.19

19  See United Nations. (2021). Final Report of the Panel of Experts on Libya established pursuant to Security Council Resolution 1973 (2011), S/2021/229, March 8, 2021.

While bounded search weapons are considered the only offensive type of weapon with a target "selection" option, the same technology is widely employed in close-in weapon systems (CIWS) developed to provide both warning and interception capabilities. Scale-wise, CIWS range from being designed to defend a limited zone such as a ship or military base (like the Dutch Goalkeeper or American Phalanx) to a large geographic area (Israeli Iron Dome or David's Sling), but their underlying design is the same: they identify incoming projectiles and determine the optimal firing time to shoot them down in a manner maximizing the protection of troops or the population. Functionally, the most advanced CIWS are a combination of ISR, C2, and loitering munitions, and as such can be set by a human operator to operate in an autonomous mode in situations that require almost-immediate decisions of a volume and speed that would overwhelm human capacity (Work, 2021).

Scrutiny of the actual operational capabilities of even the most advanced systems shows that current military applications of autonomy function in a bounded, deterministic way, executing portions of the kill chain, which makes them at best automated rather than truly autonomous, given that they are incapable of changing their own rules of operation beyond the framework imposed by human operators. This aspect is often obscured in legal debates about AWS, so it is worth spelling out as clearly as possible: the target "selection" often portrayed as a critical truly autonomous function is at its core an execution of pre-written instructions by a human. Even the most advanced ATR systems are based on a command like: if <condition> then <engage> else <continue searching> (McFarland, 2020).

Until late 2022, relatively little has been done to integrate true autonomy (i.e., ML that could rewrite human-imposed bounds on-the-fly) into weapons systems. While much of the development of the technology at hand remains classified, there is no reason to assume that even the highest military echelons that are the most disconnected from combat realities would push for fielding unreliable systems, for reasons further explained in the following section.
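To make that bounded, deterministic character concrete, the following minimal Python sketch mimics an ATR-style matching step. It is illustrative only – the signature library, parameter ranges, and function names are hypothetical, not drawn from any fielded system – but it shows how everything such a system may ever engage is fixed in advance by human-written rules:

    # Human-defined library of target signatures, fixed before deployment.
    TARGET_LIBRARY = {
        "hostile_radar": {"band_ghz": (8.0, 12.0), "pulse_width_us": (0.5, 2.0)},
    }

    def matches(signal, pattern):
        # A sensed emission "matches" only within the human-set bounds.
        lo_b, hi_b = pattern["band_ghz"]
        lo_p, hi_p = pattern["pulse_width_us"]
        return (lo_b <= signal["band_ghz"] <= hi_b
                and lo_p <= signal["pulse_width_us"] <= hi_p)

    def atr_step(signal):
        # The "if <condition> then <engage> else <continue searching>" rule:
        # the machine never adds to the library or rewrites the rule itself.
        for name, pattern in TARGET_LIBRARY.items():
            if matches(signal, pattern):
                return ("engage", name)
        return ("continue_search", None)

    print(atr_step({"band_ghz": 9.4, "pulse_width_us": 1.0}))  # ("engage", "hostile_radar")

However sophisticated the sensor processing feeding it, the control flow never leaves the branches the programmer wrote; "selection" is the execution of a pre-written conditional, not a choice.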

4. INTERNATIONAL REGULATORY DEBATE

While the debate on increasing levels of autonomy in the military context has been ongoing in professional circles since the mid-2000s (McFarland, 2020), the explosion of broad public interest in AWS can be traced back to 2010, when Philip Alston, then the Special Rapporteur on extrajudicial, summary, or arbitrary executions, raised a concern that "in the foreseeable future, the technology will exist to create robots capable of targeting and killing with minimal human involvement or without the need for direct human control or authorization" (p. 10).20 Not much later, the "Campaign to Stop Killer Robots," a coalition of non-governmental organizations, was launched and began lobbying vigorously for a preemptive ban on AWS. The colloquial nature of the term "killer robots" and the often-facile narrative of their imminent operational capabilities set forth by the Campaign quickly captured the public imagination and have resonated in the general discourse ever since (van den Boogaard & Roorda, 2021; Jenks, 2016). They also hold some sway at the GGE on LAWS (Cherry & Korpela, 2019), formally established in 2017 by the State Parties to the CCW with a mandate to assess questions related to emerging technologies in the area of LAWS.

20  See Alston, P. (2010). Interim Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, UN Doc A/65/321, August 23.


Although the biannual meetings of State Parties to the CCW, signatories to the CCW, and representatives of civil society fulfill their purpose of serving as a forum to exchange views on LAWS, they are widely considered a failure by those expecting a regulation thereof. While some attribute such failure to the consensus principle under which the GGE operates, and/or the fact that a protocol on LAWS would regulate a prospective category of weapons (Vohs, 2022), it is worth bearing in mind that neither of those predicaments thwarted the adoption of the Protocol on Blinding Laser Weapons (1995) within the same forum (Doswald-Beck, 1996). A closer look at the GGE on LAWS's paper trail evinces that many of its participants keep talking past each other about both legal and technical aspects of the weapon systems at hand (Kraska, 2021; McDougall, 2019; Jenks, 2016). As a result, the GGE's main achievement, the 2019 Guiding Principles in the Area of LAWS, are often inchoate,21 and simply reiterate the applicability of already existing international obligations,22 or, at best, set the record straight on the most specious arguments hinted at during the debates on LAWS23 (Shereshevsky, 2022). Nonetheless, the Principles themselves and States' reactions thereto indicate that various points of disagreement between the States coalesce along four mutually intertwined issues: the definition of (L)AWS, whether they should be subject to a new legal instrument, the need to ensure "meaningful human control" over (L)AWS, and the "responsibility gap" (L)AWS would ostensibly result in (Mauri, 2022). The latter concern is subject to closer scrutiny in Section 5; the first three will be further elaborated on here.

The opening section of this chapter pointed out that there was no universally accepted definition of AWS; the same hindrance extends to LAWS. While differences of opinion on the scope of the key terms are inherent to any international negotiations, the discussion at the GGE on LAWS demonstrates that, five years in, States continue to have widely disparate understandings of the subject matter of the deliberations (Kowalczewska, 2021). First of all, there is no clarity on what the "lethal" in "LAWS" is supposed to entail. Some actors seem to be using "AWS" and "LAWS" as synonyms, prompting questions about whether there is any substantial difference between the two terms (Seixas-Nunes, 2022). Given that, as explained in Section 3, weapon systems with a high degree of autonomy in executing force against materiel targets (i.e., objects) have been in use long before the issue of "autonomous weapons" gained prominence, and taking note of the ethical concerns raised by some against life-and-death decisions24 being made by a machine (Heyns, 2016; HRW & IHRC, 2012),25 LAWS could be simply understood as anti-personnel AWS (Jenks, 2016). Conceptually coherent as such a reading might seem, an astute proposition advanced by the ICRC to explicitly focus the

21  E.g., Principle (b): "Human responsibility for decisions on the use of weapons systems must be retained."
22  Principle (a), providing that "[IHL] continues to apply fully to all weapons systems, including the potential development and use of [LAWS]," is a truism, unquestioned on the CCW forum, while Principle (e) practically mirrors Article 36 of AP I referenced in the Introduction.
23  Principles (b) "accountability cannot be transferred to machines" and (i) "In crafting potential policy measures, emerging technologies in the area of lethal autonomous weapons systems should not be anthropomorphized" are intended to quash the most far-reaching assertions alluding to such a possibility.
24  Presumably of non-combatants, given that combatants on board military vessels or airplanes have been losing their lives as a result of machine actions for decades.
25  See ICRC. (2019). ICRC statement, CCW GGE "LAWS," March 25. Retrieved from https://reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2019/gge/statements/25March_ICRC.pdf

discussion at the GGE on LAWS on the "use of autonomous weapons systems to target human beings" (p. 1)26 has not been heeded thus far.

A second impediment to the GGE forging at least a working definition of LAWS is a recurring misconception of autonomy as an all-or-nothing quality of a system, rather than a mode of operation of some of its functions (McFarland, 2020; Jenks, 2016; Scharre & Horowitz, 2015a). That, in turn, makes some equate (L)AWS with "fully autonomous weapons systems"; this is a position explicitly taken by, inter alia, the African Group27 and China. To the latter, "full autonomy" denotes the "absence of human intervention and control during the entire process of executing a task," including "impossibility for termination, meaning that once started there is no way to terminate the device" (p. 1).28 Others, like France, offer a more nuanced approach and distinguish "fully autonomous weapons systems," understood as ML-based systems capable of changing their own rules of operation beyond a determined framework of use, particularly as regards target engagement, from "partially autonomous weapons systems" not having such capabilities. The French position is that the work of the GGE on LAWS should be limited to the former category (Vilmer, 2021). Unsurprisingly, those States that understand LAWS as fully autonomous weapons systems support a stringent ban prohibiting, at the very least, their use.

An important remark should be stressed here – for military experts, fully autonomous weapon systems, that is, systems capable of expanding their functions or modes of operation beyond a pre-determined scope, are a pure fantasy; not only would they turn the entire targeting cycle on its head but, perhaps more importantly, they would also wreck the military chain of command, an institution fundamental to the functioning of all armed forces (van den Boogaard & Roorda, 2021).29 While it is virtually impossible that armed forces under responsible command would ever field "solitary entities carrying weapons and making life and death decisions free of human command" (Kraska, 2021, p. 408), let alone ones that cannot be controlled (Scharre & Horowitz, 2015b; Roorda, 2015), they have – and will continue to – fuse AI into the targeting cycle. Attempts to reflect such fusion in the definition of AWS have been made by, e.g., the ICRC and the United States, both of which consistently define AWS as weapons systems that, after being activated by a human operator, can select and engage/attack

26  See ICRC. (2021). ICRC Position on Autonomous Weapon Systems, May 12. Retrieved from https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems
27  See African Group. (2018). Statement by the African Group. Geneva: Meeting of Group of Governmental Experts on LAWS.
28  See China. (2018). Position Paper Submitted by China. Geneva: Meeting of Group of Governmental Experts on LAWS. According to the official Chinese position, fully autonomous weapons are also characterized by "indiscriminate effect, meaning that the device will execute the task of killing and maiming regardless of conditions, scenarios and targets" (China. (2018). Position Paper Submitted by China. Geneva: Meeting of Group of Governmental Experts on LAWS, p. 1), which makes them illegal anyway, as explained in the Introduction.
29  See NATO (2019). Allied Joint Doctrine for the Conduct of Operations. AJP-3 Edition C. Brussels, Belgium: NATO.


targets without "human intervention"30 or "further intervention by a human operator" (p. 13).31 Although exceptions exist, States sharing such an understanding of (L)AWS are generally less keen to support a preemptive ban thereon (Kraska, 2021).

The United States and the ICRC might both recognize that autonomy in functions related to target selection and engagement is at the core of AWS, but this is exactly where the convergence between their respective positions ends. This is an important split, largely reflecting the two main approaches to AWS (Lieblich & Benvenisti, 2016) evident in the submissions to the GGE on LAWS, i.e., the utilitarian instrumentalist approach led by the United States, and the deontological one represented by the ICRC. The differences between the two positions are most conspicuous when it comes to (the need for) a requirement for human–machine interaction with respect to (L)AWS, and its theoretical underpinnings. For the ICRC, LAWS' autonomy in the "critical functions" of selecting and attacking targets raises a range of humanitarian, legal, and ethical considerations which can only be addressed by "enabl[ing] human judgment and control in relation to a specific attack."32 In essence, then, the ICRC is supporting the standard of meaningful human control (MHC) as construed in 2013 by Article 36, a British not-for-profit organization, and promoted heavily ever since by civil society as a new legal standard, rooted in the moral agency of humans, that needs to augment the existing IHL.33 In its original sense, the MHC, exercised by the AWS operator, would require, at the very least, "'human-on-the-loop' supervision and ability to intervene and deactivate," and was seen as a means "to retain human responsibility" (p. 4).34

The US position,35 in turn, derives from the premise that "human accountability for the results of engagements of weapons does not and should not necessarily mandate human oversight over every step of the kill chain" (Work, 2021, p. 9) or even various elements of the target verification process (McFarland, 2020). Consequently, the US considers the promulgation of

30  See ICRC. (2019a). Autonomy, Artificial Intelligence and Robotics: Technical Aspects of Human Control, August 2019. For a couple of years, the Campaign to Stop Killer Robots used to define fully autonomous weapons in exactly the same way, which further obfuscated the debate (HRW and IHRC, 2012, p. 1); ICRC. (2019b). ICRC statement, CCW GGE "LAWS," March 25. Retrieved from https://reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2019/gge/statements/25March_ICRC.pdf; ICRC. (2021). ICRC Position on Autonomous Weapon Systems, May 12. Retrieved from https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems
31  See United States Department of Defense. (2012b). Directive Number 3000.09: Autonomy in Weapon Systems, November 21, 2012 (Incorporating Change 1, May 8, 2017). The decade-old definition set forth in US DoD (2012b) is one of the most frequently adopted working definitions of AWS in legal scholarship written in English. It should hence be noted that, at the time of writing, the Directive is being revised and the new version is scheduled to be released in late 2022.
32  See ICRC. (2021). ICRC Position on Autonomous Weapon Systems, May 12. Retrieved from https://www.icrc.org/en/document/icrc-position-autonomous-weapon-systems
33  See Article 36 (2022). Statement by Article 36 at the June 17, 2022 meeting on the political declaration to protect civilians from the use of explosive weapons in populated areas. Geneva. Retrieved from https://article36.org/updates/statement-by-article-36-at-the-17-june-2022-meeting-on-the-political-declaration-to-protect-civilians-from-the-use-of-explosive-weapons-in-populated-areas/; Human Rights Watch. (2016). Killer robots and the concept of meaningful human control, Memorandum to the Convention on Conventional Weapons (CCW) Delegates. April 2016.
34  See ICRC. (2019a). Autonomy, Artificial Intelligence and Robotics: Technical Aspects of Human Control, August 2019.
35  The UK elaborated substantially the same position back in 2011 in its Joint Doctrine Note on Unmanned Systems (UK MoD, 2011).
the MHC standard redundant and impractical, and asserts that "rather than seeking to (…) set new international standards, States should seek to exchange practice and implement holistic, proactive review processes that are guided by the fundamental principles of the law of war."36 It further maintains that "the key issue of (…) LAWS is ensuring that machines help effectuate the intentions of commanders and operators," which is best done by a double-pronged approach, including, on the one hand, "taking practical steps to reduce the risk of unintended engagements," and on the other, "ensuring appropriate levels of human judgment over the use of force" (emphasis added). Practical measures ensuring that force is used to effectuate human intentions aim at making sure that systems "function as anticipated," and entail "engineering weapon systems to perform reliably, training personnel to understand the systems, and establishing clear human-machine interfaces" (p. 3).37 "Appropriate levels of human judgment," in turn, is conceived as a holistic and flexible standard accounting for "the totality of the circumstances in the employment of a weapon" (Kraska, 2021, p. 430).

A clear-cut determination of which of the two positions has prevailed can be tricky: although the notion of MHC has taken the debates on human-machine interaction in LAWS by storm, its content has evolved so much over the years that by now it is often interpreted in a way that resembles the American position.38 The number and variety of official statements, policy papers, and academic commentaries on MHC proliferated so extensively between 2013 and 2021 that separate studies are being devoted to determining their common features and synthesizing them into a workable framework (Kwik, 2022). MHC's potential to frame the debate on AWS was noted early on by the UN Institute for Disarmament Research, which observed that "the idea of [MHC] is intuitively appealing even if the concept is not precisely defined" (UNIDIR, 2014, p. 2). In scholarly reflections on the topic, the MHC's "intuitive appeal" has been attributed to its easily accessible nature (Amoroso & Tamburrini, 2021), constructive ambiguity (Crootof, 2016), and potential to serve as a frame for a future normative human rights-inclusive definition of (L)AWS.39 Yet, even the supporters of MHC acknowledge its nebulous nature and recognize that despite an almost decade-long debate "there is no consensus on who should exercise control (…) over what control is to be exercised (…) and how control can be achieved in practice" (Boutin & Woodcock, 2022, p. 5). The "who" facet of MHC ranges from the original AWS operator, to the commander, to the fielding State.40 The "over what" dimension evolved from a specific attack, to AWS as such, as well as the entire lifecycle41 of a

36  See United States of America. (2018). Human-Machine Interaction in the Development, Deployment and Use of Emerging Technologies in the Area of LAWS. August 28.
37  See United States of America. (2018). Human-Machine Interaction in the Development, Deployment and Use of Emerging Technologies in the Area of LAWS. August 28.
38  See United Nations. (2019). Report of the 2019 session of the Group of Governmental Experts on emerging technologies in the area of lethal autonomous weapons systems with Annexes, CCW/GGE.1/2019/3, September 25, 2019.
39  See Brehm, M. (2015). Meaningful Human Control (paper presented at the Informal Meeting of Experts on LAWS, CCW, Geneva, April 14).
40  Back in 2015, the Polish delegation to the CCW suggested MHC could be reframed into "meaningful state control" and include all stages of the lifecycle of AWS. See Poland. (2015). Meaningful Human Control as a form of state control over LAWS. Geneva: Meeting of Group of Governmental Experts on LAWS. April 13.
41  The period of time that starts with the assessment of the requirement for a system (…) and ends with its final disposal, encompassing a number of sequential phases including design and development, production and operation (NATO Term, Record 27090).


weapon system. Finally, the "how" aspect varies from real-time approval of selected targets and/or the capacity to intervene and abort, all the way to overall supervision during pre-deployment stages and combat employment (Kwik, 2022; Boutin & Woodcock, 2022).

A consensus on MHC might not yet be looming on the horizon, but cautious optimism seems warranted; as the debate continues it becomes more technically, operationally, and legally sound, or at least such voices become more discernible. From a technical perspective, an increasing number of commentators recognize that "machine autonomy is a form of control" (McFarland & Galliot, 2021, p. 51) and base their legal analyses on "the fact that even very complex programs are just sets of pre-defined (human) instructions" (McFarland, 2020, p. 34). Furthermore, as the discussion progresses, it also becomes more anchored in combat operational realities, as chiefly manifested by two shifts. First, the requirement of human control over every specific attack, in its original construct incorrectly centered on the kinetic understanding of targeting, has been aptly replaced with references to the targeting process or the entire weapon's lifecycle (Kwik, 2022; Ekelhof, 2018).42 Second, the overall focus of the discussion has shifted, and rightly so, from wondering whether AWS are (or would be) able to adhere to targeting law toward ensuring that the effects of their operation are in compliance with IHL and applicable human rights law (van den Boogaard & Roorda, 2021; McFarland & Galliot, 2021; Work, 2021). These technical and operational understandings, in turn, feed into interpreting MHC as "a standard of implementation aiming at the proper application of [IHL]" (Marauhn, 2018, p. 217), which, under international law as it (already) stands today, ought to be done by the fielding State, and the military commander under whose command any weapon, be it a rudimentary trip wire or an AWS relying on the most advanced ML, is deployed.
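The contested "how" dimension can be made more tangible in code. The sketch below is a deliberately simplified illustration – the mode names, the operator interface, and the veto window are assumptions for exposition, not drawn from any doctrine – contrasting "human-in-the-loop" control, where an operator must affirmatively approve each engagement, with "human-on-the-loop" supervision, where the machine proceeds unless the operator intervenes in time:

    import time

    def engage_human_in_the_loop(target, operator) -> bool:
        # In-the-loop: nothing is engaged without affirmative human approval.
        # (operator is a hypothetical interface with approves()/vetoes() methods.)
        return operator.approves(target)

    def engage_human_on_the_loop(target, operator, veto_window_s=5.0) -> bool:
        # On-the-loop: the system proceeds unless the supervising operator
        # vetoes the engagement within the allotted window.
        deadline = time.monotonic() + veto_window_s
        while time.monotonic() < deadline:
            if operator.vetoes(target):
                return False
            time.sleep(0.1)
        return True

Real-time approval sits at one end of the spectrum described above; pre-deployment bounding and lifecycle-wide supervision sit at the other. Where, between those poles, MHC should be anchored is precisely what remains contested.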

5. TAKING ATTRIBUTION SERIOUSLY: THE (OSTENSIBLE) "RESPONSIBILITY GAP"

Accountability has always been at the core of the debates about autonomy. In a nutshell, the concern is that as machines become more autonomous, the separation between the effects of their functioning and any proximate human would increase, making it impossible to hold anyone to account for those effects, should they be unlawful (McDougall, 2019). The impossibility of transferring accountability to machines, combined with the absence of any moral agent to whom responsibility could be ascribed, would lead, so the argument goes, to a "responsibility gap" (Matthias, 2004), i.e., a situation in which no entity could be held legally accountable for the wrong done. The AI on which some machines with autonomous functions rely is omnipresent in modern societies and continues to be incorporated in spheres ranging from cancer prevention to "self-driving" cars (Afina, 2022). Potential wrongdoings resulting therefrom would be subject to a variety of laws and would most likely give rise to some jurisdictional hurdles, especially in legal regimes that require a certain mental element for liability to be ascribed. Without questioning the severity of such hurdles, and refraining from making any generalized

statements on accountability for AI at large, this contribution sides with those asserting that AWS would not result in the aforementioned gap at an international level (Kwik, 2022; van den Boogaard & Roorda, 2021; McFarland & Galliot, 2021; Work, 2021; McDougall, 2019).

To explain this position, it is useful to examine the common arguments asserting a looming responsibility gap. These usually adopt one of three approaches. First, some are written from a philosophical angle and are concerned with the just apportionment of blame (Sparrow, 2007). As such, they can be thorough and even at times compelling, but their ultimate findings should not and cannot be automatically transposed into international law; in fact, many legal institutions may be considered unethical or unfair, but that does not make them ipso facto invalid. Second, some accounts claiming that the use of AWS would result in the "responsibility gap" are based on false premises, such as an assumption that deployment of AWS means "the removal of human operators from the targeting decision-making process" (Amoroso & Tamburrini, 2021, p. 253); as elucidated above, such premises are divorced from reality, making any analysis based thereon fallacious. Finally, there is a strand of scholarship that explores the question of the "responsibility gap" exclusively with regard to international criminal law (ICL). Some do it explicitly (Bo, 2021) and take due note of the existence of other forms of legal responsibility (McDougall, 2019), but many seem to completely overlook a set of different forms of legal responsibility (Amoroso & Tamburrini, 2021; Sparrow, 2007), in particular state responsibility and domestic military justice systems.

This is not to say that establishing the mens rea required for the war crime of, for instance, indiscriminate attacks in cases involving the use of AWS would not be cumbersome, especially under the stringent rules of intent (dolus) under the Statute of the International Criminal Court (ICC). Rather, it is important to underline that the potential impossibility of adjudication by the ICC, while of concern, does not automatically render a certain conduct or method of warfare internationally unlawful. Those claiming otherwise seem to completely overlook domestic systems of administrative and disciplinary proceedings undertaken in response to violations of IHL, and the military doctrine of command accountability (including responsibility for weapon release) they rely on.43 Valid ethical questions can be asked about that doctrine's fairness, especially in future cases relating to complex ML-based weapon systems difficult to comprehend even for trained technical experts (Boutin & Woodcock, 2022). Ethical reservations, however, do not alter the fact that the commander remains ultimately responsible for accepting risk and bears responsibility for employing weapons in accordance with IHL (Kwik, 2022; Kraska, 2021).44 As rightly stressed in recent scholarship,

[t]he military commander's decision to employ a certain weapon in a certain context is the ultimate failsafe; a lack of capacity to understand the possible effects of the use of a system, may be fully compensated by a full capacity to decide on its use. (van den Boogaard & Roorda, 2021, p. 433)

43  See NATO (2019). Allied Joint Doctrine for the Conduct of Operations. AJP-3 Edition C. Brussels, Belgium: NATO. 44  See NATO (2019). Allied Joint Doctrine for the Conduct of Operations. AJP-3 Edition C. Brussels, Belgium: NATO; United Kingdom. (2020). Expert paper: The human role in autonomous warfare. Geneva: Meeting of Group of Governmental Experts on LAWS, November 18; United States of America. (2017). Intervention on Appropriate Levels of Human Judgment over the Use of Force. Geneva: Meeting of Group of Governmental Experts on LAWS. November 15.


Even if some were to question whether domestic jurisdiction can remedy the AWS "responsibility gap," given the gap's international nature, the international responsibility of States definitely can. The regime of State responsibility for internationally wrongful acts – complementary and concurrent to ICL (Nollkaemper, 2003) – and widely recognized to be customary in nature, is based on the fundamental premise that "[e]very internationally wrongful act of a State entails the international responsibility of that State" (Article 1).45 An internationally wrongful act – that is, human conduct consisting of either an action or an omission – has two, and only two, elements. First, the conduct must constitute a breach of an international obligation of a State, and second, it needs to be attributable to that State under international law (Article 2).46 Actions and omissions of State organs, the textbook example of which are State armed forces, are always attributable to that State (Article 4).47

Consequently, the responsibility of a State fielding AWS is engaged in a two-fold manner should the combat employment of AWS lead to effects prohibited by IHL, such as making civilians the object of an attack. First, if the wrongful conduct resulted from a systemic flaw in the AWS design, the State might bear international responsibility for failing to properly discharge the weapons review obligation under Article 36 of the API. Second, if the soldiers planning the attack "cannot foresee that an AWS will engage only legal targets, then they cannot meet their obligations under the principle of distinction" (McFarland & Galliot, 2021, p. 52). Both failures are unequivocally attributable to the State.48 One way or the other, the responsibility of the fielding State precludes the "responsibility gap" on the international plane (Pacholska, 2023).
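The structure of the argument can be summarized schematically. The sketch below is purely illustrative – the class, field, and function names are hypothetical, and each boolean stands in for a legal assessment that in reality requires careful analysis – but it captures the two-element test and the two-fold engagement of the fielding State's responsibility described above:

    from dataclasses import dataclass

    @dataclass
    class AWSIncident:
        breaches_ihl: bool              # element 1: breach of an obligation
        conduct_of_state_organ: bool    # element 2: attribution (armed forces)
        flawed_art36_review: bool       # design-stage failure
        unforeseeable_compliance: bool  # use-stage failure (e.g., distinction)

    def internationally_wrongful(incident: AWSIncident) -> bool:
        # ARSIWA, Articles 1-2: breach plus attribution, and nothing more.
        return incident.breaches_ihl and incident.conduct_of_state_organ

    def fielding_state_engaged(incident: AWSIncident) -> bool:
        # The two-fold engagement: either failure suffices on its own.
        return internationally_wrongful(incident) and (
            incident.flawed_art36_review or incident.unforeseeable_compliance)

    incident = AWSIncident(breaches_ihl=True, conduct_of_state_organ=True,
                           flawed_art36_review=True, unforeseeable_compliance=False)
    print(fielding_state_engaged(incident))  # -> True

Either prong suffices: once a breach attributable to the State is established, the design-stage and use-stage routes to responsibility operate as alternatives, leaving no gap between them.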

6. CONCLUDING REMARKS

There is an immense intrinsic value in civil society and academia engaging in international debates on pressing IHL issues, as the recently adopted Political Declaration on Strengthening the Protection of Civilians from the Humanitarian Consequences arising from the use of Explosive Weapons in Populated Areas clearly demonstrates.49 However, to be successful, reiterating the humanitarian concerns raised by a specific weapon, means or method of warfare needs to be done without unnecessarily demonizing the technology at hand. Over-hyped narratives that anthropomorphize AWS, examine how and why they would not be able to comply with the core IHL principles, and posit a resulting "accountability gap" not only fail to advance the discussion but actually risk blurring the existing norms. Should the ongoing automation of military systems one day evolve into true ML-based autonomy in target selection and

45  See International Law Commission. (2001). Responsibility of States for Internationally Wrongful Acts, UNGA Res. 56/83, Annex, December 12.
46  See International Law Commission. (2001). Responsibility of States for Internationally Wrongful Acts, UNGA Res. 56/83, Annex, December 12.
47  See International Law Commission. (2001). Responsibility of States for Internationally Wrongful Acts, UNGA Res. 56/83, Annex, December 12.
48  See Switzerland. (2017). A "compliance-based" approach to Autonomous Weapon Systems. Geneva: Meeting of Group of Governmental Experts on LAWS. November 10.
49  See Article 36 (2022). Statement by Article 36 at the June 17, 2022 meeting on the political declaration to protect civilians from the use of explosive weapons in populated areas. Geneva. Retrieved from https://article36.org/updates/statement-by-article-36-at-the-17-june-2022-meeting-on-the-political-declaration-to-protect-civilians-from-the-use-of-explosive-weapons-in-populated-areas/

406  Research handbook on law and technology engagement, unlikely as it seems now, there will be plenty of difficult issues to address at that point. Until then, the existing international law – as long as it is interpreted in a technologyneutral and all-encompassing way – remains more than sufficient, if duly applied.

REFERENCES

Afina, Y. (2022, July 14). Intelligence Is Dead; Long Live Artificial Intelligence. London: Chatham House. Retrieved from https://www.chathamhouse.org/2022/07/intelligence-dead-long-live-artificial-intelligence
Amoroso, D. & Tamburrini, G. (2021). Toward a Normative Model of Meaningful Human Control over Weapons Systems. Ethics & International Affairs, 35(2), 245–272.
Asaro, P. (2020). Autonomous Weapons and the Ethics of Artificial Intelligence. In S. M. Liao (ed.). Ethics of Artificial Intelligence (pp. 212–236). New York: Oxford University Press.
Boulanin, V. & Verbruggen, M. (2017). Mapping the Development of Autonomy in Weapon Systems. Stockholm: Stockholm International Peace Research Institute.
Boutin, B. & Woodcock, T. (2022). Aspects of Realizing (Meaningful) Human Control: A Legal Perspective. ASSER Research Paper 2022-07, 1–18.
Cherry, J. & Korpela, C. (2019, March 28). Enhanced Distinction: The Need for a More Focused Autonomous Weapons Targeting Discussion at the LAWS GGE. Humanitarian Law & Policy. Retrieved from https://blogs.icrc.org/law-and-policy/2019/03/28/enhanced-distinction-need-focused-autonomous-weapons-targeting/
Crootof, R. (2016). A Meaningful Floor for "Meaningful Human Control". Temple International and Comparative Law Journal, 30(1), 53–62.
Doswald-Beck, L. (1996). New Protocol on Blinding Laser Weapons. International Review of the Red Cross, 36(312), 272–299.
Ekelhof, M. A. (2018). Lifting the Fog of Targeting: 'Autonomous Weapons' and Human Control through the Lens of Military Targeting. Naval War College Review, 71(1), 1–34.
Hayir, N. (2022). Defining Weapon Systems with Autonomy: The Critical Functions in Theory and Practice. Groningen Journal of International Law, 9(2), 239–265.
Heyns, C. (2016). Autonomous Weapons Systems: Living a Dignified Life and Dying a Dignified Death. In N. Bhuta, S. Beck, R. Geiß, H. Liu & C. Kreß (eds.). Autonomous Weapons Systems: Law, Ethics, Policy (pp. 3–20). Cambridge: Cambridge University Press.
HRW & IHRC (2012). Losing Humanity: The Case against Killer Robots. Retrieved from https://www.hrw.org/sites/default/files/reports/arms1112_ForUpload.pdf
Jenks, C. (2016). False Rubicons, Moral Panic, & Conceptual Cul-De-Sacs: Critiquing & Reframing the Call to Ban Lethal Autonomous Weapons. Pepperdine Law Review, 44(1), 1–70.
Jevglevskaja, N. (2018). Weapons Review Obligation Under International Customary Law. International Law Studies, 94, 186–221.
Joyner, D. (2009). International Law and the Proliferation of Weapons of Mass Destruction. Oxford: Oxford University Press.
Kowalczewska, K. (2021). Sztuczna inteligencja na wojnie [Artificial Intelligence at War]. Warsaw, Poland: Wydawnictwo Naukowe Scholar.
Kraska, J. (2021). Command Accountability for AI Weapon Systems in the Law of Armed Conflict. International Law Studies, 97, 407–447.
Kwik, J. (2022). A Practicable Operationalisation of Meaningful Human Control. Laws, 11(3), 1–21.
Lieblich, E. & Benvenisti, E. (2016). The Obligation to Exercise Discretion in Warfare: Why Autonomous Weapons Systems Are Illegal. In N. Bhuta, S. Beck, R. Geiß, H. Liu & C. Kreß (eds.). Autonomous Weapons Systems: Law, Ethics, Policy (pp. 245–283). Cambridge, UK: Cambridge University Press.
Marauhn, T. (2018). Meaningful Human Control – And the Politics of International Law. In W. Heintschel von Heinegg, R. Frau & T. Singer (eds.). The Dehumanization of Warfare: Legal Implications of New Weapon Technologies (pp. 207–218). Cham: Springer.
Marsh, N. (2022). Autonomous Weapons Systems: Using Causal Layered Analysis to Unpack AWS. Journal of Future Studies, 26(4), 33–40.
Matthias, A. (2004). The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata. Ethics and Information Technology, 6, 175–183.
Mauri, D. (2022). Autonomous Weapons Systems and the Protection of the Human Person: An International Law Analysis. Cheltenham: Edward Elgar Publishing.
McDougall, C. (2019). Autonomous Weapon Systems and Accountability: Putting the Cart before the Horse. Melbourne Journal of International Law, 20(1), 58–87.
McFarland, T. (2020). Autonomous Weapon Systems and the Law of Armed Conflict. Cambridge, UK: Cambridge University Press.
McFarland, T. & Galliot, J. (2021). Understanding AI and Autonomy: Problematizing the Meaningful Human Control Argument against Killer Robots. In J. Galliott, J. D. Ohlin & D. MacIntosh (eds.). Lethal Autonomous Weapons (pp. 41–56). Oxford: Oxford University Press.
NATO. (2021). Allied Joint Doctrine for Joint Targeting. AJP-3.9 Edition B, version 1. Brussels, Belgium: NATO.
NATO Term. The Official NATO Terminology Database. Retrieved from https://nso.nato.int/natoterm/content/nato/pages/home.html?lg=en
Nollkaemper, A. (2003). Concurrence between Individual Responsibility and State Responsibility in International Law. International and Comparative Law Quarterly, 52(3), 615–640.
Pacholska, M. (2023). Military Artificial Intelligence and the Principle of Distinction: A State Responsibility Perspective. Israel Law Review, 56(1), 3–23.
Roorda, M. (2015). NATO's Targeting Process: Ensuring Human Control over (and Lawful Use of) 'Autonomous' Weapons. In A. P. Williams & P. D. Scharre (eds.). Autonomous Systems: Issues for Defence Policymakers (pp. 152–168). Norfolk, VA: NATO ACT Publication.
Rudin, C. et al. (2021). Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges. Statistics Surveys, 16, 1–85.
Scharre, P. & Horowitz, M. C. (2015a). An Introduction to Autonomy in Weapon Systems: A Primer. Washington, DC: Center for a New American Security.
Scharre, P. & Horowitz, M. C. (2015b). Meaningful Human Control in Weapon Systems: A Primer. Washington, DC: Center for a New American Security.
Seixas-Nunes, A. (2022). The Legality and Accountability of Autonomous Weapons Systems: A Humanitarian Perspective. Cambridge: Cambridge University Press.
Shereshevsky, Y. (2022). International Humanitarian Law-Making and New Military Technologies. International Review of the Red Cross, 104(920–921), 2131–2152.
Sparrow, R. (2007). Killer Robots. Journal of Applied Philosophy, 24(1), 62–77.
Tamburrini, G. (2016). On Banning Autonomous Weapons Systems: From Deontological to Wide Consequentialist Reasons. In N. Bhuta, S. Beck, R. Geiß, H. Liu & C. Kreß (eds.). Autonomous Weapons Systems: Law, Ethics, Policy (pp. 122–142). Cambridge: Cambridge University Press.
UK MoD. (2011). Joint Doctrine Note 2/11: The UK Approach to Unmanned Aircraft Systems. Shrivenham: Development, Concepts and Doctrine Centre, Ministry of Defence.
UNIDIR. (2014). The Weaponization of Increasingly Autonomous Technologies. Geneva, Switzerland: UNIDIR.
van den Boogaard, J. & Roorda, M. (2021). 'Autonomous' Weapons and Human Control. In R. Bartels et al. (eds.). Military Operations and the Notion of Control Under International Law (pp. 421–437). The Hague, the Netherlands: T.M.C. Asser Press.
Vilmer, J. B. (2021). A French Opinion on the Ethics of Autonomous Weapons. War on the Rocks. Retrieved from https://warontherocks.com/2021/06/the-french-defense-ethics-committees-opinion-on-autonomous-weapons/
Vohs, V. (2022, February 23). Seeking the Holy Grail? A Legally Binding CCW Protocol on LAWS. Völkerrechtsblog. Retrieved from https://voelkerrechtsblog.org/seeking-the-holy-grail/
Work, R. (2021). Principles for the Combat Employment of Weapons Systems with Autonomous Functionalities. Washington, DC: Center for a New American Security.

25. Issues in robot law and policy
A. Michael Froomkin1

1. INTRODUCTION

Robots are, or soon will be, ubiquitous. Already robots can be found on the road, in the air, on the sea, in the home or office, in the warehouse, in the hospital, and on the battlefield. As robotics improve, we can only expect to see more robots, doing more complicated things. It follows that robots will intersect more with people, and with the law, at almost every level and subject. The physical instantiation of most robots, the fact that all but pure software robots have a body, means that robot legal issues and robot regulation have salience at every level of government from the most local to the most international. At least in the United States, most robot law either adopts existing law or consists of frequently unanswered questions: vast tracts of law are waiting to be decided and written.

It is common to speak of robots as mechanizing the "OODA" (Observe, Orient, Decide, Act) Loop, and for current purposes, I will treat as a "robot" any device that is capable of affecting the world outside itself in response to some sensed trigger. This is a very broad definition, as it encompasses everything from simple devices like a programmable room thermostat that changes heating or cooling in response to a temperature measurement, to automated hammers that stamp down every time a sensor detects a nail on the assembly line, to complex software-only program trading algorithms that buy or sell shares in response to market movements, to complex military defense systems designed to shoot down incoming missiles.

Many simple robots raise no unusual legal issues. A thermostat may be governed by product liability law; the assembly line hammer governed by workplace safety regulations and standard tort or workers' compensation rules, but they each have those rules in common with other devices in the home, or on the shop floor. While even simple robots may occasionally raise complex issues of fact, normally there are no issues of theory unique to these robots as opposed to other neighboring products and devices. In contrast, as discussed further below, more complicated and interesting robots—not least those capable of emergent behavior—do raise difficult and often unsettled legal issues of responsibility and liability, of regulatory competence, subsidiarity, and jurisdiction, and a host of related ethical issues as to who should be responsible for robot harms, what indeed counts as a harm, and whether and when certain types of robots should be restricted or prohibited.

Many of these complex questions overlap with, or form part of, "AI law" since complex and interesting robots commonly are controlled by what we call an artificial intelligence. That controlling AI may reside onboard the robot, or it may control the robot remotely. The internal/external location distinction can be hugely important in robotic design, e.g., for battlefield robots designed to operate in the face of enemy attempts to degrade communications, but

1  Thanks to Caroline Bradley, Benjamin Froomkin, David Froomkin, and to the chapter’s reviewers for helpful suggestions.


commonly the location of the controller is not legally significant—although the location may make a legal difference in cases where the external controller is in a different jurisdiction from the physical part, or under the control of different persons.
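To make the breadth of this definition concrete, here is a minimal sketch of the sense-trigger-act loop described above, using the thermostat example. It is a toy simulation, not any real device's control code; the names, thresholds, and simulated physics are all invented for illustration.

```python
# A toy sense -> decide -> act loop: on the broad definition above, even this
# simulated thermostat counts as a "robot". Everything here is illustrative.
import random

TARGET_TEMP_C = 20.0   # user's setpoint
HYSTERESIS_C = 0.5     # deadband, to avoid rapid on/off cycling

temp_c = 18.0          # simulated room temperature
heater_on = False

def read_temperature() -> float:
    """Observe: a crude simulation standing in for a real sensor."""
    global temp_c
    temp_c += (0.3 if heater_on else -0.2) + random.uniform(-0.05, 0.05)
    return temp_c

def set_heater(on: bool) -> None:
    """Act: a stand-in for a real actuator (relay, valve, etc.)."""
    global heater_on
    heater_on = on

for minute in range(30):                      # thirty simulated minutes
    t = read_temperature()                    # Observe
    if t < TARGET_TEMP_C - HYSTERESIS_C:      # Orient/Decide
        set_heater(True)                      # Act
    elif t > TARGET_TEMP_C + HYSTERESIS_C:
        set_heater(False)
    print(f"minute {minute:2d}: temp={t:5.2f}C heater={'on' if heater_on else 'off'}")
```

The same observe-decide-act skeleton scales up to the program trading algorithm or the missile-defense system; what changes is the richness of the sensing and the consequences of the acting.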

2. GENERAL LEGAL STATUS OF ROBOTS

2.1 Questions of "Personality" and "Robot Rights"

As the law currently stands in the United States and, as far as I know, nearly everywhere else, the law treats all robots of every type as chattel. (The sole possible exception is Saudi Arabia, which gave "citizenship" to a humanoid robot, Sophia, in 2017. It is hard to see this as anything more than a publicity stunt, both because female citizenship in Saudi Arabia comes with restrictions that do not seem to apply to Sophia, and because "her" "life" consists of … marketing for her Hong-Kong-based creators (Reynolds, 2018).) That is, in the words of Neil Richards and William Smart:

Robots are, and for many years will remain, tools. They are sophisticated tools that use complex software, to be sure, but no different in essence than a hammer, a power drill, a word processor, a web browser, or the braking system in your car. (Richards & Smart, 2016)

It follows that robot personhood (or AI personhood) under law remains a remote prospect, and that some lesser form of increased legal protection for robots, beyond that normally accorded to chattels in order to protect their owners' rights, also remains quite unlikely. Indeed, barring some game-changing breakthrough in neural networks or some other unforeseen technology, there seems little prospect that in the next decades machines of any sort will achieve the sort of self-awareness and sentience that we commonly associate with a legitimate claim to the bundle of rights and respect we organize under the rubric of personhood, although the possibility of machine rights has motivated both interesting thought experiments (e.g., Boyle, 2011) and vivid denunciations (e.g., Birhane & van Dijk, 2022).
There are, however, two different scenarios in which society or policymakers might choose to bestow some sort of rights or protections on robots beyond those normally given to chattels. The first is that we discover some social utility in the legal fiction that a robot is a person. No one, after all, seriously believes that a corporation is an actual person, or indeed that a corporation is alive or sentient (although Stross, 2018, suggests we think of corporations as "Slow AIs"), yet we accept the legal fiction of corporate personhood because it serves interests, such as the ability to transact in its own name and the limitation of actual humans' liability, that society—or parts of it—finds useful. Although nothing at present suggests similar social gains from the legal recognition of robotic personhood (indeed, issues of liability and responsibility for robot harms need more clarity, not less accountability), conceivably policymakers might come to see things differently. In the meantime, it is likely that any need for, say, giving robots the power to transact can be achieved through ordinary uses of the corporate form, in which a firm might for example be the legal owner of a robot. This has not, however, stopped speculation about how a robot or AI might own itself (Bayern, 2021, 2019; LoPucki, 2018).
Early cases further suggest that US courts are not willing to assign a copyright or a patent to a robot or an AI even when it generated the work or design at issue. Here, however, the primary justification has been straightforward statutory construction: holdings that the relevant

US laws only allow intellectual property rights to be granted to persons, and that the legislature did not intend to include machines within that definition.2 Rules around the world may differ. For example, in Thaler v. Commissioner of Patents (2021), an Australian federal court ordered an AI's patent application to be recognized by IP Australia, although that ruling was reversed on appeal in Commissioner of Patents v. Thaler (2022). Similarly, a Chinese court found that an AI-produced text was deserving of copyright protection under Chinese law (Sawers, 2020). The North American literature on these topics is already vast; good starting points are Ryan Abbott's arguments in favor of AI patents (Abbott, 2016) and Annemarie Bridy's early survey of the arguments relating to copyrights (Bridy, 2012); the latter recently triggered a particularly elegant response (Craig & Kerr, 2021). There is also a voluminous EU-based literature (e.g., Hugenholtz & Quintais, 2021).
A more plausible scenario for some sort of robot rights begins with the observation that human beings tend to anthropomorphize robots. As Kate Darling observes, "Our well-documented inclination to anthropomorphically relate to animals translates remarkably well to robots" (Darling, 2016, p. 223), and ever so much more so to lifelike social robots designed to elicit that reaction—even when people know that they are really dealing with a machine (Nass & Moon, 2000). Similarly, studies suggest that many people are wired not only to feel more empathy toward lifelike robots than toward other objects, but also, as a result, to feel that harm to robots is wrong (Darling, 2016, p. 223). Thus, we might choose to ban the "abuse" of robots (beating, torturing) either because it offends people, or because we fear that some persons who abuse robots may develop habits of thought or behavior that will carry over into their relationships with live people or animals, abuse of which is commonly prohibited. Were we to find empirical support for the hypothesis that abuse of lifelike, or perhaps humanlike, robots makes abusive behavior toward people more likely, that would provide strong grounds for banning some types of harms to robots—a correlative to giving robots certain rights against humans (Darling, 2016, pp. 226–231). In Hohfeldian terms, if persons have a duty not to harm a robot, then, correlatively, the robot has the right not to be harmed by those persons (see Schlag, 2014, pp. 200–203).

2.2 Legal Issues in Human-Robot Interaction

The human tendency to anthropomorphize robots—and, conversely, the tendency among some to place undue faith in technology—can allow robots to become a means to manipulate the people who interact with them. The hypotheticals—and most are, for now, just hypotheticals—are legion. Perhaps if people become attached to their robot pets or companions, the firms making or running the robots could use this emotional bond to extract payments, e.g., for upgrades, from users. Or maybe the robots could be used to sneak ads into the home, in the guise of ordinary conversation—a particular worry for robots designed for children. Perhaps they will ask questions designed to reveal personal information (Darling, 2016, p. 221). Less hypothetical is the observation that people not only tend to anthropomorphize robots, but also tend to ascribe gendered characteristics to them based on their functions, as well as to apply common social behaviors such as politeness and reciprocity (Nass & Moon, 2000).
The issue of (over)trust gains salience from the reality that some robots are designed to collect information about users, such as for medical monitoring, although the consent of the user or their guardian is typically required. Others record information about their environment, such as cleaning robots assembling floor plans, and then share that information with the company that

2  See Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022).

makes the cleaners (Kaminski, 2015, discusses the risks with a focus on consent and disclosure issues). We know that some devices that rely on voice recognition have had conversations monitored by remote humans without notice to the people recorded. Indeed, potential robot fakery has multiple modes, ranging from robots pretending to be people in screen chats to people pretending to be robots in order to make their product seem more sophisticated, or their advice appear scientifically based (Brennan-Marquez, Levy & Susser, 2019). All of these scenarios excite some appetite for regulation (see for example Hartzog, 2015), although little has come to pass as yet. In some special cases, such as in treatment or fiduciary relationships, these deceptions are either illegal or a violation of professional ethics and duties; but in other contexts, while immoral, they may become actionable fraud only when someone can show injury—and the US Supreme Court has suggested in recent cases that privacy harms without some tangible monetizable consequences will not support a damages action, even if there is a law providing statutory damages.3
Regardless of their underlying legal status, some robots are capable of far more rapid reactions to stimuli than people. That can be extremely valuable, whether adjusting the trim of an aircraft, changing cooling settings in a nuclear power plant, or targeting incoming ordnance. But as sensors and programming are never perfect, rapid reactions also raise the specter of rapid errors, sometimes catastrophic, leading to suggestions that when a robot has the power to do significant harm, the law should require a "human in the loop" to reduce the risk of unwanted outcomes (Jones, 2015). One risk, however, is that inserting a human into the OODA loop will in the most critical cases mean that the system (robot + human) responds to danger more slowly, perhaps in some cases too slowly for safety or effectiveness. Madeleine Elish describes another family of risks, which she calls "moral crumple zones," in which the human is assigned responsibility, and thus legal liability, without the practical means to shoulder it. For example, if the overall system is poorly designed, when a crisis occurs the human purportedly in the loop may never have enough time to evaluate the facts and make a good decision. A related failure mode becomes likely if the human's monitoring job is ordinarily routine and boring, leading attention to flag, or if the human's monitoring role involves overseeing so many robot tasks simultaneously that real oversight becomes practically impossible, as for example with a security guard tasked with monitoring a large number of security robots (Elish, 2019).
If robots are not legal persons, it follows that a robot cannot be legally responsible for harms or crimes that it causes. Ordinarily one would expect robot harms and crimes to be charged to someone seen as responsible for the robot's action. If, however, there is no "human in the loop," then who should that be? For civil damages, such as tort, commonly we seek to find the person whose actions were the proximate cause, or whose negligence permitted the harm to occur. Sometimes that may be clear: someone set the robot's actions in motion, or someone failed to anticipate a foreseeable contingency in the robot's construction or operation, or someone with a heightened duty of care negligently failed to monitor the robot's actions.
Other situations, however, will be considerably less clear, and then one may need to resort to more elaborate concepts of (sometimes gross) negligence.
Crimes committed by (or is it "via"?) robots present additional complexities. It is not hard to say that if a person were to program a robot to steal, that person bears criminal liability for the theft, just as they would with any other instrumentality. But what if the person owning

3  See Spokeo, Inc. v. Robins, 578 US 330 (2016).

or operating the robot lacks mens rea, as in the actual case of the robot programmed to randomly order things online and have them shipped to an art exhibition—which then ordered some Ecstasy tablets (Power, 2014)? Is mens rea present when the person who set the robot in motion neither intended nor foresaw the crime of illegal purchase of narcotics? Sometimes, as in the case of the shopping robot, it may be sufficient to say that the person operating the robot should have foreseen the action and guarded against it. Sometimes, however, that judgment may be quite problematic, especially if the crime is remote in time or probability.

2.3 Legal Issues from Emergent Behavior

The most interesting, but also most legally difficult, robots may be those designed to learn from experience, ideally with relatively little human supervision. Suitably primed robots have taught themselves to walk (Wu et al., 2022). As robots interact increasingly with people, they will be called upon to take on, and in some cases to learn, increasingly complex tasks that cannot be easily specified in advance—or perhaps at all. Increasingly, therefore, designers will program robots to learn by doing (a toy illustration appears at the end of this section). While this is flexible and often efficient, it also means that robots will inevitably learn to do things in unpredictable ways, and indeed learn to do unpredictable things altogether. This "emergent behavior," that is, "behavior which is useful but cannot be anticipated in advance by operators" (Calo, 2014, p. 5), is very much a feature, not a bug.
Both the complexity and the interactivity of robot systems capable of emergent behavior create opportunities for injury. In ordinary cases where poor design, poor manufacture, or a programming error causes an automated tool to injure someone or something, it is not difficult (at least theoretically) to assign moral and legal responsibility. Similarly, in the absence of designer/manufacturer/programmer error, if a tool's operator misuses it in some fashion, there too the liability seems clear. And, indeed, modern tort law also has a variety of means to share out liability in scenarios in which designer, operator, victim, and even bystander may all be partly responsible for a harm. But what is to be done when the emergent behavior is the result of multiple interactions with different people over the course of the robot's existence? Is it right to blame the designer for including a feature that is normally not just benign but may be necessary for the robot to learn to undertake complex tasks? Perhaps we should blame the robot's operator for failure to supervise it properly, but the nature of an emergent behavior is that it could manifest without warning. We do not at present have any consensus, and indeed little experience, as to how to assign liability in the case of injury caused by a robot's emergent, unexpected, behavior. The problem is, however, certain to emerge.
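A toy example can suggest how "learning by doing" produces behavior that was never written down by the programmer. The sketch below assumes a deliberately trivial environment with invented rewards and parameters; the designer specifies only a reward signal and an update rule (here, textbook Q-learning), and the policy the agent ends up following is discovered through trial and error rather than coded in advance.

```python
# A minimal "learning by doing" sketch: the programmer writes only the
# reward and the update rule; the policy is discovered, not specified.
# Environment and parameters are invented for illustration.
import random

N_STATES = 6            # a corridor: states 0..5, with the goal at state 5
ACTIONS = (-1, +1)      # step left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state: int, action: int):
    """One interaction with the toy world: (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    if nxt == N_STATES - 1:
        return nxt, 1.0, True       # reached the goal
    return nxt, 0.0, False

for episode in range(500):
    s, done = 0, False
    while not done:
        if random.random() < EPS:                       # explore occasionally
            a = random.choice(ACTIONS)
        else:                                           # otherwise exploit
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])  # Q-learning update
        s = s2

# The learned policy was never written down by the programmer:
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
```

Nothing in the program says "go right"; that rule emerges from the interaction of reward, environment, and exploration. In a less trivial environment, the emergent rule might be one nobody anticipated.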

3. AUTONOMOUS VEHICLES

3.1 Robot Cars

Conversely, although we do have extensive experience as to how to assign liability for car accidents, and indeed how to regulate cars for safety, this experience turns out not to make the project of assigning liability for accidents involving robot cars (aka "self-driving cars" or "autonomous vehicles") as easy as one might hope. Part of the problem is that in the United States, the regulation of motor vehicles is shared between national, state, and local entities.

The federal National Highway Traffic Safety Administration (NHTSA) regulates many aspects of motor vehicle safety. NHTSA mandates minimal safety standards, e.g., for crashworthiness. It initiates recalls when it finds defects in the car manufacturing process. And it regulates or coordinates a number of traffic safety rules. States issue drivers' licenses, subject to a number of federal rules, e.g., about acceptable forms of identification. Most traffic rules, such as speed limits (historically subject to a national cap set by Congress) and parking, are the domain of state governments, and often localities. Enforcement of traffic laws varies by state, but local police and traffic enforcement departments usually do most of it. Robot cars and their users are subject to all of these regulatory regimes.
There is also an international aspect to robotic vehicle regulation. The World Forum for Harmonization of Vehicle Regulations is an intergovernmental platform, hosted by the United Nations Economic Commission for Europe (UNECE), that creates regulatory frameworks for the performance and safety of vehicles, which it invites member states to adopt. For example, it recently extended the maximum recommended speed for land-based vehicles with automated lane-keeping systems from 60 km/h to 130 km/h.4 Transnational issues will also arise when users seek to take a self-driving vehicle across national borders (see Smith, 2020).
Many US states have enacted laws designed to encourage the testing and ultimate deployment of self-driving cars by creating exceptions to rules requiring drivers. Several firms have deployed small fleets of cars, some with monitors in the front seat, but also some without anyone but the passenger. For example, Cruise introduced a driverless taxi service in parts of San Francisco—but it then recalled all its robotaxis in the United States following a crash (Wessling, 2022). These and other experiments raise questions of both social policy and liability law.
How law and policy will react will depend on how safe self-driving cars prove to be, and even more so on how they are unsafe and to whom they are dangerous. Society will benefit if self-driving cars prove to be, on average, safer than legacy cars with human drivers, and many have suggested that they should not be allowed to proliferate until this is proven to be the case. But even once robot cars clear this bar, we will need to know if the harms from their accidents replicate existing patterns, or if they instead have a tendency to cause a different sort of accident. For example, self-driving cars might be more likely than legacy cars to collide with other vehicles, or to strike bystanders, instead of harming their passengers. Or if, for example, self-driving cars were generally safer but more likely to run over children, that ought to give regulators pause (cf. The Dawn Project, 2022). More generally, the issue of how to tune the safety tradeoff between the passengers and others remains to be confronted. Some philosophers have suggested that cars should quiz passengers as to their driving preferences, e.g., whether to prioritize the safety of people in the car or outside it, or whether to drive quickly or carefully, in order to replicate human driving behavior (see, e.g., Millar, 2017; Millar & Kerr, 2016), but this seems unlikely to catch on—which is probably just as well.
The safety tradeoff issue captured the popular imagination with Judith Jarvis Thomson's "Trolley Problem," which explores the moral consequences of various actions and inactions by positing accidents with different sorts of casualties (Thomson, 1976). Aspects of the Trolley Problem were gamified in a website called the "Moral Machine" (Awad et al., 2018) that asked

4  See World Forum for Harmonization of Vehicle Regulations, Amendment to UN Regulation. United Nations: Economic and Social Council (adopted June 22, 2022). Retrieved from https://unece.org/fileadmin/DAM/trans/main/wp29/wp29resolutions/ECE-TRANS-WP29-1140e.pdf

participants to give their views about "moral decisions made by machine intelligence, such as self-driving cars." The site attracted millions of participants from around the world, prompting a formidable philosophical critique that the "Trolley Problem is precisely the wrong tool" for thinking about how automated vehicles should make life or death decisions while driving: "[T]he Trolley Problem frames the question as if all we need to do is figure out what an individual ought to do while driving, and then make that the rule for autonomous vehicles" (Jacques, 2019). Crowdsourcing moral decisions to generate rules, Abby Everett Jacques argued, leads to repugnant results based on asking the wrong questions and providing the wrong, or vastly incomplete, information for decisions—not to mention that often the information provided on the website (e.g., the character or occupations of potential victims) is not information that would be available to a driver, whether human or mechanical. Even offering gross and visible information (baby in stroller vs. elderly victim) confuses an individual choice with a morally sensible policy or algorithm. Were we to rule that we must program our cars to deprioritize elderly jaywalkers relative to younger people, for example, that would in effect amount to legislating an increased chance of the death penalty for older jaywalkers (Jacques, 2019, pp. 7–8).
Even if self-driving cars prove safer in every way than legacy cars, for the foreseeable future they will still have accidents. (It may also bear mention that the issues of liability and compensation for car and other accidents are particularly important in nations like the United States that do not have a functioning national health system and have high medical costs, since this can greatly increase the amount at stake in any subsequent dispute as to liability.) In the meantime, we have driver-assistance and partially self-driving cars, which return control to the operator when the onboard guidance system encounters something it cannot recognize or deal with. Drivers with "Level 3" conditional driving automation, as defined in the SAE International taxonomy,5 are already notoriously inattentive when the guidance system is in control. As the level of automation increases to "Level 4" (high driving automation) and handoffs back to passenger control become less frequent, we can reasonably expect that drivers will be even less prepared for them. In short, absent judicial or regulatory intervention, drivers who purchase cars that are equipped with anything less than fully autonomous guidance systems are at risk of becoming the legal if not moral crumple zone (Goldenfein et al., 2020).
In contrast, if the passengers never have control of a vehicle that causes an accident (nor any say in how aggressively it drives), then it would seem absurd to make them in effect the insurer of the robot car's truly responsible party or parties. In the absence of an intentional harm, we commonly seek to put liability on the "least cost avoider" for an accident—the party who might have prevented it at the lowest social cost. Doing so, economic theory teaches, is most likely to align incentives to prevent accidents without creating too great a disincentive to undertake activities that might cause harms. Who, then, is the least cost avoider for an accident caused by a fully autonomous vehicle?
Most likely it is its maker: the maker made the hardware and either created or chose the software running the vehicle, and thus is best positioned to choose how much to invest in testing and safety features. One thus would expect that the maker of a vehicle, or perhaps the provider of a taxi service (since they are free to choose what kind of robot taxi to acquire and provide), should become the holder of liability.
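The least-cost-avoider logic can be illustrated with a toy calculation; all of the figures below are invented for the purpose of the example.

```python
# A toy least-cost-avoider comparison. Suppose a class of robocar accidents
# imposes $1,000,000/year in expected harm, and each party could prevent it
# at a different annual cost. All numbers are invented for illustration.
EXPECTED_HARM = 1_000_000

prevention_cost = {
    "manufacturer": 50_000,     # e.g., better testing and safety features
    "fleet operator": 200_000,  # e.g., more frequent inspections
    "passenger": 900_000,       # e.g., constant attentive monitoring
}

avoider = min(prevention_cost, key=prevention_cost.get)
print(f"least cost avoider: {avoider} "
      f"(${prevention_cost[avoider]:,} to avoid ${EXPECTED_HARM:,} in harm)")
# Placing liability on this party aligns incentives: it will spend $50,000
# to avoid $1,000,000 in expected liability, minimizing total social cost.
```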

5  See J3016C Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. SAE International. (April 30, 2021).

Yet, this is not clearly the answer to the liability question, at least not in the United States. In the absence of legislation changing the tort law defaults, how the United States determines liability for self-driving cars—and probably many other sorts of robots also—will likely turn initially on whether the robot presents as a product or a service. If the robot presents as a service, e.g., a taxi or other on-demand arrangement, then ordinary rules of negligence will apply to the consumer's liability, since the consumer is not the purchaser of the product; the service provider may be subject to product liability law as described below. That said, even for consumers, complexities abound as to whose negligence is at issue and against what standard of care it would be measured. For example, a robocar might be considered a product when sold. But if the software operating it were sold sufficiently separately, and continually updated, that software would arguably be a separate service; on the other hand, if there is only one software choice for a given brand or model of robocar, then there would also be a good argument that the two are sufficiently inextricably bound, and the hardware so predominates in the overall transaction, that the software would be lumped in as a product too.
If end-users buy (or long-term lease) their vehicle, software included, as the majority of US drivers currently do, then US law presumably will apply product liability law to all of it. If there is a manufacturing defect, where due to some factory error the product diverged from the intended design, then the states are uniform in following the Restatement (Second) of Torts § 402A (1965)6 and placing joint and several liability on the manufacturer and all the parties in the chain of sale to the end-user. If, however, the manufacturer produced the autonomous vehicle according to plan but there is an underlying design defect, the states are far from uniform as to whether a negligence or a strict-liability-like standard applies, and thus also not uniform as to the showings a victim, be it the end-user or a third party, must make in order to recover damages from the manufacturer.
Under the Restatement 2d of Torts, as amended in the 1960s and then developed in a series of court decisions during the next decades, the majority rule for design defects became the "consumer expectations test," which makes the seller of a product liable if the product is in a defective condition unreasonably dangerous to the consumer.7 The trier of fact can infer the existence of a design defect if the product fails to meet the reasonable expectations of consumers. Thus, even where there is no evidence, direct or circumstantial, available to prove exactly what sort of manufacturing flaw existed, a plaintiff may establish her right to recover by proving that the product did not perform in keeping with the reasonable expectations of the user; for this reason and others, the consumer expectations test engenders critique (see Owen & Davis, 2020, at § 5:16). A product falls beneath consumer expectations when the product fails under conditions concerning which an average consumer of that product could have fairly definite expectations, which makes it in effect a strict liability rule once that line is crossed. Importantly, however, the consumer expectations test does not apply where technical and mechanical defects are

6  See Restatement (Second) of Torts § 402A (1965).
7  Section 402A states that anyone "who sells any product in a defective condition unreasonably dangerous to the user or consumer" is strictly liable for the damages. Id. Comment i defined "unreasonably dangerous" as being dangerous "beyond that which would be contemplated by the ordinary consumer who purchases it." Id. at cmt. i. The modern test is sometimes traced to a deeply influential article, John Wade, On the Nature of Strict Tort Liability for Products, 44 Miss. L.J. 825, 837–838 (1973) (sixth factor of multi-factor test).

alleged which require an understanding of the precise behaviors of obscure components of products under the complex circumstances of a particular accident.8 Exactly when robots move from machines with technical and obscure parts to devices as commonplace as cars, microwaves, or refrigerators is an evolving and as yet largely unexplored question. Roombas are ubiquitous and surely fall into the ordinary consumer product category, as perhaps do hobby drones and robotic pets—but would the control software for a fully self-driving car, as opposed to a Tesla with Level 3 driver assistance? Or is the answer that Tesla might be estopped from arguing a lack of grounded consumer expectations because it advertised the excellence of the software?
For many years, critics of the "consumer expectation" test objected that plaintiffs could prevail without showing flaws in the design, and had no obligation to present a reasonable alternative design. Instead, if the defense wants to argue there was no better practical design, it bears the burden of production on this issue once the plaintiff has made a prima facie case on what consumers would expect. In response, the more recent Restatement 3d of Torts: Product Liability § 2 takes a very different view of how design defect claims and defenses should work. It abolishes the "consumer expectation" test and replaces it with a "risk-utility" test (see Twerski & Henderson, 2009). Under this test, to prevail the plaintiff must show that the risk of the design exceeded its value; commonly the only way to do this is to proffer a safer alternate design that is not (substantially) more expensive to produce and which would not have caused, or would have greatly reduced the chance of, the harm. Since the defendant has all the information about how the robot was designed and made, it will be hard for most plaintiffs to meet this burden. In most cases, it will require expensive experts, which makes it much harder to bring cases, and may require juries to hear very technical evidence. Critics of the 3d Restatement, including a substantial number of state supreme courts that have decided not to adopt this provision of the Restatement 3d, agree that it moves the liability standard away from strict liability by focusing on the foreseeability of the risk of harm, including a cost-benefit analysis. "Rather than focusing on the design of the product, it focuses on the conduct of the manufacturer" (see generally Owen & Davis, 2020, at §§ 5.6, 5.7).9
Thus, at present, liability in the United States for many robot-caused injuries presents up to three levels of ambiguity: 1) whether the robot presents as a product or a service; 2) whether the type of robot, or the relevant part of it, is an ordinary consumer product about which consumers have grounded expectations; and 3) whether the state whose law applies uses Restatement 2d strict liability for design defects or the Restatement 3d's more negligence-like principles (the branching is sketched schematically below). Liability may fall out differently in civil law regimes. For example, one analysis suggests that despite the EU directive establishing strict liability for makers of defective products (Council Directive No. 85/374/EEC of July 25, 1985), in Germany the primary liability in practice will fall on the "keeper" of the vehicle due to the difficulty of proving fault against manufacturers (Ebers, 2022).

See Soule v. Gen. Motors Corp., 882 P.2d 298, 305 (Cal. 1994). See Aubin v. Union Carbide Corp., 177 So. 3d 489, 506 (Fla. 2015).

As states are likely to be anything but uniform in their answers to these questions, the robot accident liability issue seems ripe for a uniform national solution, ideally one that would reduce the substantial costs of litigating each of these issues in every state. This might involve legislation, or even a national robot regulator (Calo, 2014), but in some industries, such as self-driving cars, there might also be a partial de facto solution if manufacturers choose to bundle car insurance for passengers into every sale in order to trumpet the safety of their product. Examples of the extensive writing on insurance issues include Templeton (2020), Geistfeld (2017), and Lior (2022). In this vein, Kenneth Abraham and Robert Rabin argue powerfully that autonomous vehicles present a chance to move away from old tort paradigms for car accidents and toward an administrative compensation scheme (Abraham & Rabin, 2019). It would be interesting to see how far this might be generalized to other robot-related accidents.

3.2 Drones (UAVs)

Unmanned aerial vehicles (UAVs, or more colloquially, drones) are either remote-controlled or autonomous vehicles, and thus potentially candidates for a liability regime similar to that for terrestrial vehicles, but before taking to the air they must navigate a substantially different regulatory regime. First, the national regulator, the Federal Aviation Administration, has sole regulatory powers over "navigable airspace" (Dolan & Thompson II, 2013, at 2), although at times the FAA has suggested it has regulatory authority over all airspace.10 Second, the FAA has some concurrent authority with states for lower altitude flights, and it has not been shy about using it. Consequently, there is already a detailed set of national rules that drone manufacturers must follow to sell their products, and also significant rules for both commercial and hobbyist use of drones. The FAA's primary concern has been safety, which it defines to mean that drones should not cause physical injury, should not interfere with flight operations, should not intrude into airspace defined as security areas (e.g., most of Washington, DC), and should carry identifying marks such that if a drone is misused, not least for terrorism, it should be possible to identify the owner. By mid-2021, users had registered well over a million recreational drones with the FAA (NCSL, 2021). The FAA has also been very cautious about allowing commercial drone companies to operate delivery drones outside the line of sight of the operator, although trials of more relaxed rules are in progress.
Because they carry cameras into spaces people are accustomed to think of as private, ranging from remote wilderness to right outside windows on high floors of apartment buildings, drones create new privacy threats. Thomasen (2018) suggests that drone regulation should be considered through a feminist lens. Indeed, there is extensive anecdotal evidence to support the claim that men use drones to spy on women much more than the reverse. Despite academic calls to consider privacy harms (e.g., Froomkin & Colangelo, 2020; Froomkin, Arencibia & Colangelo-Trenner, 2022), the FAA does not see its mission as extending to the ways in which drones can intrude on the personal privacy of those overflown.11 Drone trespassing and drone-enabled spying, voyeurism, and stalking are thus primarily an issue of state tort and, increasingly, statutory law. The trespass issue is complicated by uncertainty in most states as to the extent of the "vertical curtilage" and to what extent, if any, airspace above private property below the 400-foot line may be open to drone flights. As to spying, although torts such as intrusion upon seclusion are actionable nationwide, they remain

10  See FAA. (2018, July 20). FAA Statement-Federal vs Local Drone Authority. United States Department of Transportation. Retrieved from https://www.faa.gov/newsroom/faa-statement-federal-vs-local-drone-authority
11  See FAA. (2019, December 31). Remote Identification of Unmanned Aircraft Systems. Federal Aviation Administration.

exotic and rare claims, and damages can be very hard to monetize—without which there is no claim. Several states have legislated special protections against overflights for favored industries, sporting events, or police stations (Skorup, 2022); several more have enacted various limits on surveillance via drone by state and local law enforcement, although similar limits usually do not apply to manned vehicles such as helicopters and airplanes (McNeal, 2014). A few states have also legislated bans or limits on overflights of private property (Skorup, 2022, pp. 163–164). The extent to which landowners may engage in self-defense against drone overflights also remains largely untested (see Froomkin & Colangelo, 2015).
Just as robots intrude into formerly private spaces, they also challenge our conception of what is public space and how one should regulate it. May terrestrial delivery robots use the sidewalk? May delivery robots, often small and slow, go on the road where there is no sidewalk? If they are UAVs, may they overfly streets, or private property? More generally, the combination of enhanced delivery services with on-demand robotic transport likely will enable and perhaps necessitate a rethinking of many zoning and urban planning rules. Discussions of the complexities include Marks, 2019, Thomasen, 2020, Woo, Whittington and Arkin, 2020, and Gilbert and Dobbe, 2021.

3.3 Autonomous Vehicles at Sea

At present, the place where autonomous vehicles may have the freest rein is likely the ocean. Although weather and sea present challenges absent on paved roads or in airspace on a clear day, generally speaking there are fewer obstacles to maneuver around, and, at least away from ports and coastlines, the chance of running into another vehicle or a person is relatively low. Numerous trials of both civilian and military, and more or less autonomous, surface and subsurface robotic vessels are underway.
One difficulty, however, is that the current international regime governing ships at sea assumes that "vessels" are ships with persons aboard. Vessels that fly the flag of their nation of registration enjoy many legal protections; indeed, to seize a manned vessel against the will of its crew is one of the oldest and most universal international crimes—piracy. A self-propelled, self-guided, fully functioning robotic ship with no one aboard fits neither the classical definition of a "vessel" nor that of Article 94 of the United Nations Convention on the Law of the Sea, which presumes that a ship carries a "master and officers" (Brett, 2022). Indeed, some might suggest that it more closely fits the definition of "salvage"—property abandoned at sea and thus in theory available for any finder to acquire and keep. Undersea drones raise additional complexities, including potential privacy issues if they surveil passing ships or undersea installations (Brett, 2019). While unmanned vehicles may have relatively clear sailing on the seas, the legal regime governing sea-based robotic ships and drones will remain needlessly turbulent until maritime law is updated by international agreement (Brett, 2022).

4. BATTLEFIELD ROBOTS (LAWS) There is another, even more consequential, area where the governing law is at least as much international as national: the battlefield. On the one hand, the prospect of having robots fighting promises to reduce military casualties for those deploying the robots. On the other hand, reliance on so-called “killer robots” might make decision-makers more willing to engage

in hostilities, as reliance on robots would reduce risks to human troops. Either way, warbots raise difficult practical and legal issues relating to dangers to civilians, with opinions divided on whether they would be safer (Arkin, 2010 and, very optimistically, Lewis, 2020) or more dangerous (Jenks & Liivoja, 2018) for bystanders.
Whether and how there can be "human in the loop" control is a critical question for military robots, as robots with autonomous capabilities, also known as Lethal Autonomous Weapons Systems (LAWS), create a tension with the doctrine of command responsibility that is fundamental to the modern law of armed conflict (LOAC) (Schwarz, 2018). For almost 70 years, if not more, it has been a cornerstone of both US and international law that an officer's, and indeed every combatant's, responsibility for war crimes cannot be disclaimed by saying "I was just following orders." The duty not to commit war crimes, e.g., by causing militarily disproportionate harm to civilians, is uniquely personal and may indeed require violating orders in extreme cases. On the other hand, a commander is usually not held responsible for things s/he could not control, except perhaps if the harm was highly foreseeable and could have been prevented.
LAWS implicate command responsibility because they are so fast, and because they have "black box" elements that the people using them—often soldiers, sailors, airmen, or marines who may not have high rank—may not understand. Indeed, officers may not understand or be able to control them either, and this undermines a duty that military officers commonly see as moral as well as legal.
Some scholars argue that LAWS are per se illegal under international law, either under Article 36 of Additional Protocol I (1977) to the 1949 Geneva Conventions (see Press, 2017, pp. 1345–1347 for a summary) or, even more controversially, under the Martens Clause in the 1899 Hague Convention (II) (Asaro, 2016b); others have proposed various national and international civil remedies for harms from autonomous weapons (e.g., Crootof, 2016, 2018). Several international and national NGOs banded together in 2013 to form an international "Campaign to Stop Killer Robots,"12 a plea which has been endorsed by more than 20 countries, although by none of the major world military powers. The campaign has not stopped investment in robotic and autonomous military technology, although it should be noted that the US Department of Defense has also produced high-minded, if perhaps not completely constraining, "AI Principles: Recommendations on the Ethical Use of Artificial Intelligence" (US Dept. of Defense, 2019).

5. POLICE ROBOTS (“ROBOCOPS”) “What develops first in the military often finds its way to domestic policing” (Joh, 2016, p. 528). Advances in military technology regularly filter back to the civilian economy, either directly or via laws that allow the US Defense Department to transfer military hardware to local law enforcement agencies (Dansby, 2020). Robotic policing (“robocops”) could have either appealing or horrifying implementations. In the appealing vision, mechanizing some policing functions would remove undesirable features of current policing practices. In theory, robocops could be programmed to treat like cases alike, to act consistently with everyone regardless of race and regardless of which robot is doing the policing. Human police will be safer; for example, since robots are not people, they could be sent into dangerous 12 

The group’s homepage is https://www​.stopkillerrobots​.org/.

situations, such as hostage-takings or active shooters in schools and public places, in which they could collect information and engage in negotiations—and perhaps attempt to subdue suspects—without risk to police officers. Although robocops might be built to carry firearms or less-lethal tasers, they also could be limited to non-lethal force, although if the robot were designed to subdue armed suspects that would create a very substantial design challenge and, if the robot were at all autonomous, a programming challenge. And since robots can be set to record everything they do, audits will be easier and, in principle, implementation of rule changes will be easier (just change the programming!) and instantly ubiquitous.
But the reality does not, and in the foreseeable future is unlikely to, even come close to this idyllic vision: as legal scholars have demonstrated, police departments are buying expensive robot and AI technologies that do not work as advertised, rely on databases that replicate or exacerbate existing biases due to reliance on "dirty data" (Richardson, Schultz & Crawford, 2019), impose sometimes invisible surveillance on citizens—usually against communities that already suffer from over-policing (Joh, 2016, 2022)—and can lead to what Elizabeth Joh has called "unexpected consequences," distorting the practice of policing (Joh, 2022). Furthermore, to the extent that robot selection and deployment decisions are made via procurement, they avoid structures designed to give citizens the right to seek judicial review of administrative regulations, including, often, systemic policing policies. These issues tend to be explored in the context of AI (see, e.g., Huq, 2020; Mulligan & Bamberger, 2019; Coglianese & Lehr, 2017), but they are equally applicable to robots, and not just because any but the simplest robotic policing system will rely on an AI to function.
As Peter Asaro notes, policing involves many discrete activities. Just a simple stop-and-frisk can be divided into: 1) profiling (detecting signs of possible illegal activity and choosing whom to investigate or stop-and-frisk); 2) implementation (doing the stop-and-frisk, including deciding if there is a legal violation worthy of an arrest or citation); and 3) "justice," that is, review, auditing, charging decisions, and in some cases trials. Each of these stages requires an algorithm, relevant data, and a feedback loop, which may involve a human in the loop during the stop-and-frisk and/or subsequent incorporation of new data from the encounter into the underlying database (Asaro, 2016c). In fact, none of these activities is easy to mechanize except perhaps the recording/surveillance/detection function.
First, there are physical and mechanical issues with designing a robot capable of safely frisking the wide variety of sizes and shapes of persons, ranging from children, to very large people, to people with medical conditions requiring casts, wheelchairs, or connected medical devices—and capable of restraining them with proportionate force if needed. Second, and even more difficult, is the programming problem involved in enforcing even a very simple statute.
As an interdisciplinary group of engineers and lawyers demonstrated with a simple experiment, coding the enforcement of even a simple speed limit is a surprisingly difficult problem that requires extensive interpretation of the legal rules by the coders, including judgments as to whether their goal is to enforce the law on the books, what they take to be the intent of the law on the books, or the law in action as they understand it (Shay et al., 2016b). The problems only get worse if one tries to automate more complicated aspects of law enforcement, especially areas where subjective judgments regarding large numbers of factors (what exactly is "suspicious activity"?) may be needed (Shay et al., 2016a).
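The flavor of the problem can be suggested with a deliberately naive sketch. Even this toy speed-limit enforcer, whose tolerance figure, exemption list, and edge-case handling are all invented here, forces its coder to answer questions the statute leaves open, and to choose between the law on the books and the law in action.

```python
# A deliberately naive sketch, in the spirit of the Shay et al. experiment:
# every constant below is an interpretive choice the statute does not make.
SPEED_LIMIT_MPH = 55

TOLERANCE_MPH = 5        # "law in action": officers rarely cite within ~5 mph;
                         # set to 0 to enforce the law on the books instead.
EXEMPT_VEHICLES = {"ambulance", "fire", "police"}   # which exceptions apply?
                                                    # do they require lights/sirens?

def should_cite(speed_mph: float, vehicle_type: str,
                emergency_run: bool = False) -> bool:
    if vehicle_type in EXEMPT_VEHICLES and emergency_run:
        return False     # is a quiet, lights-off run still exempt? the coder decides
    return speed_mph > SPEED_LIMIT_MPH + TOLERANCE_MPH

print(should_cite(58, "sedan"))                          # False: tolerated, though illegal
print(should_cite(61, "sedan"))                          # True
print(should_cite(80, "ambulance", emergency_run=True))  # False
```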

Further, existing US law likely—but it is too early to say "certainly"—constrains robotic law enforcement in various ways. Some sense-enhanced detection technologies that one might wish to deploy on a robot may amount to an unconstitutional warrantless search if the technology is not in common use (see Kyllo v. United States, 533 US 27 (2001))—unless the fact that the detector is applied to targets in public places allows the courts to find that the contraband or other matter detected was effectively as exposed as if in plain sight. Similarly, at present, US legal doctrine justifies a number of exceptions to the ban on warrantless searches of persons and of the interior of vehicles on the grounds that police need to assure themselves that a person being stopped for questioning has no weapons that could be used to harm the officer. That safety justification for a large number of pat-downs and searches might vanish if the search is conducted by a machine rather than a person. In addition, if robotic cars meticulously follow traffic laws, the minor-violation justification for most stops and searches of cars may also vanish. Many other issues that are reasonably settled for human law enforcement, such as what constitutes reasonable force, will also be open to re-examination when the actor is a robot (Simmons, 2020).
Early returns on robotic policing are not encouraging. In 2016, Dallas police strapped a bomb to a robot and used it to blow up a person who had barricaded himself on the second floor of a building after killing five police officers and wounding seven others (Peterson, 2016). More recently, in 2021, the New York Police Department leased a robotic dog from Boston Dynamics and touted Digidog's ability to "save lives." However, deployment in a public housing building spurred claims that it symbolized overly aggressive policing of poor communities, and the NYPD terminated its contract (Zaveri, 2021)—only to welcome robotic dogs back early in 2023 (Rubenstein, 2023). There seems to be some tension between the literature that focuses on the human tendency to anthropomorphize and trust robots, and the strong reaction against some kinds of robot policing. On the other hand, we have not yet encountered much in the way of robots as criminal instrumentalities. Were robots to become the tool of choice for, say, bank robbery, one might reasonably expect that the public might accept, or even demand, anti-robot robotic police.

6. ROBOTS AND EMPLOYMENT If robots do not kill us, perhaps they will just take our jobs. The plethora of tasks that robots (and AIs) seem likely to be capable of—ranging from construction and warehouse work to driving trucks to highly skilled jobs in the financial, legal, and medical worlds—inevitably raises concerns about the possibility that robots will displace workers on a large scale. Indeed, there is evidence from around the world that firms seek to replace workers—not just industrial and retail workers but also professionals when possible—with robots since they can work 24/7, can all be taught new behaviors simultaneously, are resistant to pandemics (although vulnerable to their own kind of virus), and do not go on strike. Acemoglu and Restropo (2020) estimated that in the US one more reprogrammable industrial “robot per thousand workers reduces the aggregate employment-to-population ratio by about 0.2 percentage points and wages by about 0.42%” nationally, but in the area where the robot is deployed the employment-to-population ratio declines 0.77% and wages decline 0.77%. Every robot added to a commuting zone (a geographic areas used for economic analysis) reduces employment by six workers in that area. The study’s authors speculate that future effects will be larger as industry uses more robots. Meanwhile, Acemoglu, LeLarge and Restrepo (2020)

422  Research handbook on law and technology used French data to suggest that “firms adopting automation technologies reduce their costs and may expand at the expense of their competitors.” Will there be a race to the bottom as firms vie not to be eliminated for being the last to replace their workers with robots? There is at present no consensus as to whether the increasing use of robots (and AI) will cause substantial permanent unemployment, or just a significant but temporary effect due to skill mismatch. Even a “temporary” effect might be generational and severe, not just among industrial workers but also among other potentially replaceable lower-skilled workers, such as (in the United States) the approximately 1 million truckers who have few transferrable skills and the circa 10 million cashiers, retail salespersons, and first-line retail supervisors (Anderson, 2020). Many of the workers who are not displaced, including many in white-collar professions, may find themselves subjected to robot-administered monitoring and supervision where every action is tallied in search of maximum productivity (Kantor & Sundaram, 2022; Dzieza, 2020; Harwell, 2021). Robots require capital expenditures, changing the ratio of capital to labor; with fewer workers, the productivity rates of those who remain employed should increase. But capital cost is sensitive to macroeconomic factors such as interest rates, and to legal factors; among the legal factors are changes in worker protection (increases make robots seem more attractive), and tax law. There has been substantial scholarly attention to tax issues as they affect the decision to invest in robots, and also as to whether tax policy might be used to protect some jobs from robot displacement. For example, because wages are not deductible expenses, but capital investment in machinery is either a deductible cost or creates a depreciable asset, researchers suggest that the tax system creates an incentive to replace labor with capital, as does the legal obligation to make unemployment insurance and other payments for human workers—but not for robots (e.g., Abbot & Bogenschneider, 2018; Kovacev, 2020). And some, including some members of the European Parliament, have proposed a robot tax in order both to discourage that substitution and to create a fund that would help displaced workers, although the Parliament did not take up the proposal (Reuters, 2017). So far, the idea has foundered on the fear of discouraging innovation and the difficulty of measuring or defining what number or fraction of jobs a given machine has displaced. Proposals to tax robots generally also will run into issues of defining what would be a taxable robot, and how to decide what level of taxation per robot would be appropriate. The public has begun to take note of the employment threat that robots may present, although so far robots tend to be more unpopular in nations with higher inequality (Shoss & Cirlante, 2022). The academic debate, however, is overshadowed by an awareness that claims about the job-destroying effects of automation were prevalent in the 1960s—and proved utterly unfounded (Jaffe & Froomkin, 1968). Is this time different?

7. CONCLUSION One thing that is clearly not different this time is that every major new technology—be it electricity, railroads, the Internet, or robots—creates new sets of ethical, legal, and social problems. Currently we are only in the early days of roboticization: the quality, quantity, and variety of robots are each poised for rapid growth. This creates important and necessary work for legislators, regulatory authorities, judges, standards-makers, and academics who seek to maximize the benefits of robots while minimizing the harms. This brief survey, itself only

a snapshot in time, leaves out much, but even so it demonstrates that the tasks before us are deeply consequential. New technologies also sometimes create opportunities for change in fundamental social arrangements. The roboticization of work, for example, offers the hope of freedom from dangers and drudgery, but could in some scenarios open the door to mass unemployment and immiseration. LAWS tantalize with the prospect of reducing human military casualties, but also present clear dangers: the attenuation of command responsibility, civilian carnage, and the possibility that wars in which aggressors suffer few casualties may prove too tempting to some. The odds that robots will become our masters in the foreseeable future seem negligible, but it is anything but inevitable that all robot-inspired changes will be benign, or that we will use robots to humanity's best advantage. The challenges for lawyers, policymakers, ethicists, and indeed everyone, are upon us.

REFERENCES

Abbott, R. (2016). I Think, Therefore I Invent: Creative Computers and the Future of Patent Law. Boston College Law Review, 57, 1079–1126.
Abbott, R. & Bogenschneider, B. (2018). Should Robots Pay Taxes? Tax Policy in the Age of Automation. Harvard Law & Policy Review, 12, 145–175.
Abraham, K. & Rabin, R. (2019). Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era. Virginia Law Review, 105, 127–171.
Acemoglu, D., LeLarge, C. & Restrepo, P. (2020). Competing with Robots: Firm-Level Evidence from France. AEA Papers and Proceedings. Retrieved from https://doi.org/10.1257/pandp.20201003
Acemoglu, D. & Restrepo, P. (2020). Robots and Jobs: Evidence from US Labor Markets. Journal of Political Economy, 128(6), 2188–2244.
AI Principles: Recommendations on the Ethical Use of Artificial Intelligence. (2019, October 31). Department of Defense, Defense Innovation Board. Retrieved from https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF
Anderson, D. (2020, September 8). Retail Jobs Among the Most Common Occupations. U.S. Census. Retrieved from https://www.census.gov/library/stories/2020/09/profile-of-the-retail-workforce.html
Arkin, R. (2010). The Case for Ethical Autonomy in Unmanned Systems. Journal of Military Ethics, 9(4), 332–341.
Asaro, P. (2016a). "Hands Up, Don't Shoot!": HRI and the Automation of Police Use of Force. Journal of Human-Robot Interaction, 5, 55–69.
Asaro, P. (2016b). Robotic Weapons and the Martens Clause. In R. Calo, A. Froomkin & I. Kerr (eds.), Robot Law. Cheltenham: Edward Elgar.
Asaro, P. (2016c). Will #BlackLivesMatter to RoboCop? We Robot. Retrieved from https://robots.law.miami.edu/2016/wp-content/uploads/2015/07/Asaro_Will-BlackLivesMatter-to-Robocop_Revised_DRAFT.pdf
Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J. & Rahwan, I. (2018). The Moral Machine Experiment. Nature, 563, 59–64.
Bayern, S. (2019). Are Autonomous Entities Possible? Northwestern University Law Review, 114, 23–47.
Bayern, S. (2021). Autonomous Organizations. Cambridge: Cambridge University Press.
Birhane, A. & van Dijk, J. (2022, February 7). Robot Rights? Let's Talk About Human Welfare Instead. New York: Association for Computing Machinery.
Boyle, J. (2011, March 9). Endowed by Their Creator? The Future of Constitutional Personhood. Brookings. Retrieved from https://www.brookings.edu/articles/endowed-by-their-creator-the-future-of-constitutional-personhood/
Brennan-Marquez, K., Levy, K. & Susser, D. (2019). Strange Loops: Apparent versus Actual Human Involvement in Automated Decision Making. Berkeley Technology Law Journal, 34, 745–771.
Brett, A. (2019). Secrets of the Deep: Defining Privacy Underwater. Missouri Law Review, 84, 47–92.

424  Research handbook on law and technology Brett, A. (2022). Regulating the Autonomous Ocean. Brooklyn Law Review, 88, 1–55. Bridy, A. (2012). Coding Creativity: Copyright and the Artificially Intelligent Author. Stanford Technology Law Review, 5, 1–28. Calo, R. (2010). People Can Be So Fake. Pennsylvania State Law Review, 114, 809–855. Calo, R. (2014, September 15). The Case for a Federal Robotics Commission. Brookings. Retrieved from https://www​.brookings​.edu​/research ​/the​-case​-for​-a​-federal​-robotics​-commission/ Citron, D. & Solove, D. (2022). Privacy Harms. Boston University Law Review, 102, 793–863. Conglianese, C. & Lehr, D. (2017). Regulating by Robot: Administrative Decision Making in the Machine-Learning Era. The Georgetown Law Journal, 105, 1147–1223. Craig, C. & Kerr, I. (2021). The Death of the AI Author. Ottawa Law Review, 51(1), 31–86. Crootof, R. (2016). War Torts: Accountability for Autonomous Weapons. University of Pennsylvania Law Review, 164, 1347–1402. Crootof, R. (2018). International Cybertorts: Expanding State Accountability in Cyberspace. Cornell Law Review, 103, 565–644. Dansby, J. (2020). Hammers and Nails: 1033 Program Reforms to Halt Police Militarization. Civil Rights Law Journal, 31, 109–134. Darling, K. (2016). Extending Legal Protection to Social Robots. In R. Calo, A. Froomkin & I. Kerr (Eds.). Robot Law. Cheltenham: Edward Elgar. Dolan, A. & Thompson II, R. (2013). Integration of Drones into Domestic Airspace: Selected Legal Issues 2. Congressional Research Service. Retrieved from https://fas​.org​/sgp​/crs​/natsec​/ R42940​.pdf Dzieza, J. (2020, February 27). How Hard Will Robots Make Us Work? The Verge. Retrieved from https://www​.theverge​.com ​/2020​/2​/27​/21155254​/automation​-robots​-unemployment​-jobs​-vs​-human​ -google​-amazon Ebers, M. (2022, February 5). Civil Liability for Autonomous Vehicles in Germany. SSRN. Retrieved from https://ssrn​.com​/abstract​= 4027594 or http://dx​.doi​.org​/10​.2139​/ssrn​.4027594 Elish, M. (2019). Moral Crumple Zones: Cautionary Tales in Human Robot Interaction, Engaging Science, Technology, and Society 5, 40–60. Froomkin, A. & Colangelo, Z. (2015). Self-Defense Against Robots and Drones. Connecticut Law Review, 48, 1–69. Froomkin, A. & Colangelo, Z. (2020). Privacy as Safety. Washington Law Review, 95, 141–203. Froomkin, A., Arencibia, P. & Colangelo-Trenner, P. (2022). Safety as Privacy. Arizona Law Review, 64, 921–987. Geistfeld, M. (2017). A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and Federal Safety Regulation. California Law Review, 105, 1611–1694. Gilbert, T. & Dobbe, R. (We Robot Conference Draft, 2021). Autonomous Vehicle Fleets as Public Infrastructure). We Robot 2021. Retrieved from https://werobot2021​.com​/wp​-content​/uploads​/2021​ /09​/Gilbert_​_et​_al​_ Autonomous​-Vehicle​-Fleets​.pdf Goldenfein, J., Mulligan, D., Nissenbaum, H. & Ju, W. (2020). Through the Handoff Lens: Competing Visions of Autonomous Futures. Berkeley Technology Law Journal, 835–910. Hartzog, W. (2015). Unfair and Deceptive Robots. Maryland Law Review, 74, 785–829. Harwell, D. (2021, November 11). Contract Lawyers Face a Growing Invasion of Surveillance Programs that Monitor Their Work. The Washington Post. Retrieved from https://www​.washingtonpost​.com​/ technology​/2021​/11​/11​/ lawyer​-facial​-recognition​-monitoring/ Hugenholtz, P. & Quintais, J. (2021). Copyright and Artificial Creation: Does EU Copyright Law Protect AI-Assisted Output? 
International Review of Intellectual Property and Competition Law, 52, 1190–1216. Huq, A. (2020). Constitutional Rights in the Machine-Learning State. Cornell Law Review, 105, 1875–1954. Jacques, A. (2019). Why the Moral Machine is a Monster. We Robot Draft. Retrieved from https://robots​ .law​.miami​.edu​/2019​/wp​-content​/uploads​/2019​/03​/ MoralMachineMonster​.pdf Jaffe, A.J. & Froomkin, J. (1968). Technology and Jobs: Automation in Perspective. New York: Praeger. Jenks, C. & Liivoja, R. (2018, December 11). Humanitarian Law & Policy, Machine Autonomy and the Constant Care Obligation. Humanitarian Law & Policy. Retrieved from http://blogs​.icrc​.org​/ law​ -and​-policy​/2018​/12​/11​/machine​-autonomy​-constant​-care​-obligation/ Joh, E. (2016). Policing Police Robots. UCLA Law Review Discourse, 64, 516–543.

Issues in robot law and policy  425 Joh, E. (2022, January 15). Reckless Automation in Policing. Berkeley Technology Law Journal Online. Retrieved from https://btlj​.org​/2022​/07​/reckless​-automation​-in​-policing/ Jones, M. (2015, January 13). Regulating the Loop: Ironies of Automation Law: Tying Policy Knots with Fair Automation Practices Principles. We Robot, Vanderbilt Journal of Entertainment & Technology Law, 77. Kaminsky, M. (2015). Robots in the Home What Will We Have Agreed To? Idaho Law Review, 51, 661–677. Kantor, J., Sundaram, A., Aufrichtig, A. & Taylor, R. (2022, August 15). The Rise of the Worker Productivity Score. The New York Times. Retrieved from https://www​.nytimes​.com​/interactive​/2022​ /08​/14​/ business​/worker​-productivity​-tracking​.html Kerr, I. (2004). Bots, Babes, and the Californication of Commerce. University of Ottawa Law and Technology Journal, 1, 285–324. Kovacev, R. (2020). A Taxing Dilemma: Robot Taxes and the Challenges of Effective Taxation of AI, Automation and Robotics In the Fourth Industrial Revolution. The Contemporary Tax Journal, 9, 23–49. Kyllo v. United States, 533 US 27 (2001). Lewis, L. (2020, January 10). Killer Robots Reconsidered: Could AI Weapons Actually Cut Collateral Damage? Bulletin of the Atomic Scientists. Retrieved from https://thebulletin​.org​/2020​/01​/ killer​ -robots​-reconsidered​-could​-ai​-weapons​-actually​-cut​-collateral​-damage/ Lior, A. (2022). Insuring AI: The Role of Insurance in Artificial Intelligence Regulation. Harvard Law & Technology, 35, 469–530. LoPuki, L. (2018). Algorithmic Entities. Washington University Law Review, 95, 887–953. Marks, M. (We Robot Conference draft, 2019). Robots in Space: Sharing Our World with Autonomous Delivery Vehicles. We Robot 2019. Retrieved from https://robots​.law​.miami​.edu​/2019​/wp​-content​/ uploads​/2019​/04​/ Marks​_Robots​-in​-Space​.pdf McFarland, M. (2022, September 1). Cruise Recalls Its Robotaxis After Passenger Injured in Crash. CNN Business. Retrieved from https://www​.cnn​.com​/2022​/09​/01​/ business​/cruise​-robotaxi​-recall​/ index​.html McNeal, G. (2014). Drones and Aerial Surveillance: Considerations for Legislatures. Brookings. Retrieved from https://www​.brookings​.edu​/research​/drones​-and​-aerial​-surveillance​-considerations​ -for​-legislatures/ Millar, J. (2017). Ethics Settings for Autonomous Vehicles. In P. Lin, K. Abney & R. Jenkins (Eds.). Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence. Oxford: Oxford University Press. Millar J. & Kerr, I. (2016). Delegation, Relinquishment, and Responsibility: The Prospect of Expert Robots. In R. Calo, A. Froomkin & I. Kerr (Eds.). Robot Law. Cheltenham: Edward Elgar. Mulligan, D. & Bamberger, K. (2019). Procurement As Policy: Administrative Process for Machine Learning. Berkely Technology Law Journal, 34, 773–851. Nass, C. & Moon, Y. (2000). Machines and Mindlessness: Social Responses to Computers. Journal of Social Issues, 56, 81–103. Owen, D. & Davis, M., Products Liability. Eagan: Thompson Reuters. Power, M. (2014, December). What Happens When a Software Bot Goes on a Darknet Shopping Spree? The Guardian. Retrieved from https://www​.theguardian​.com​/technology​/2014​/dec​/05​/software​-bot​ -darknet​-shopping​-spree​-random​-shopper Peterson, A. (2016, July 8). In An Apparent First, Dallas Police Used a Robot to Deliver Bomb That Killed Shooting Suspect. The Washington Post. 
Retrieved from https://www​.washingtonpost​.com​/news​/the​ -switch​/wp​/2016​/07​/08​/dallas​-police​-used​-a​-robot​-to​-deliver​-bomb​-that​-killed​-shooting​-suspect/ Press, M. (2017). Of Robots and Rules: Autonomous Weapon Systems in the Law of Armed Conflict. Georgetown Journal of International Law, 48, 1337–1366. Reuters (2017, February 16), European Parliament Calls for Robot Law, Rejects Robot Tax. Retrieved from https://www​.reuters​.com​/article​/us​-europe​-robots​-lawmaking​/european​-parliament​-calls​-for​ -robot​-law​-rejects​-robot​-tax​-idUSKBN15V2KM Reynolds, E. (2018, January 6). The Agony of Sophia, the World’s First Robot Citizen Condemned to a Lifeless Career in Marketing. WIRED. Retrieved from https://www​.wired​.co​.uk​/article​/sophia​-robot​ -citizen​-womens​-rights​-detriot​-become​-human​-hanson​-robotics Richards, N. & Smart, W. (2016). How Should the Law Think About Robots? In R. Calo, A. Froomkin & I. Kerr (Eds.). Robot Law. Cheltenham: Edward Elgar.

426  Research handbook on law and technology Richardson, R., Schultz, J. & Crawford, K. (2019). Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice. New York University Law Review, 94, 192–233. Rubenstein, D. (2023, April 11), Security Robots. DigiDog. GPS Launchers. Welcome to New York. New York Times. Retrieved from https://www​.nytimes​.com​/2023​/04​/11​/nyregion​/nypd​-digidog​-robot​ -crime​.html Sawers, P. (2020, January 10). Chinese Court Rules AI-written Article Is Protected by Copyright. VentureBeat. Retrieved from https://rai2022​.umlaw​.net​/wp​-content​/uploads​/2022​/02​/16​_Chinese​ -court​-rules​-AI​-written​-article​-is​-protected​-by​-copyright​.pdf Schlag P. (2014). How to Do Things with Hohfeld. Law & Contemporary Problems, 78, 185-234. Schwarz, E. (2018, August 29). The (Im)Possibility of Meaningful Human Control for Lethal Autonomous Weapon Systems. Humanitarian Law & Policy. Retrieved from https://blogs​.icrc​.org​ /law​-and​-policy​/2018​/08​/29​/im​-possibility​-meaningful​-human​-control​-lethal​-autonomous​-weapon​ -systems/ Shay, L., Hartzog, W., Nelson, J., Larkin, D. & Conti, G. (2016a). Confronting Automated Law Enforcement. In R. Calo, A. Froomkin & I. Kerr (Eds.). Robot Law. Cheltenham: Edward Elgar. Shay, L., Hartzog, W., Nelson, J. & Conti, G. (2016b). Do Robots Dream of Electric Laws? In R. Calo, A. Froomkin & I. Kerr (Eds.). Robot Law. Cheltenham: Edward Elgar. Shoss, M. & Ciarlante, K. (Summer 2022). Are Robots/AI Viewed as More of a Workforce Threat in Unequal Societies? Evidence from the Eurobarometer Survey. Technology, Mind, and Behavior. Retrieved from https://tmb​.apaopen​.org​/pub​/rv1x9zq4​/release​/2#:~​:text​=Utilizing​%20the​ %20Eurobarometer​%2087​.1​%20data​,threats​%20of​%20general​%20job​%20loss Simmons, R. (2020). Terry in the Age of Automated Police Officers. Seton Hall Law Review, 50, 909–953. Skorup, B. (2022). Drones, Airspace Design, and Aerial Law in States and Cities. Akron Law Review, 55, 157–186. Smith, B. (2020). New Technologies and Old Treaties. University of South Carolina School of Law: Faculty Publications. Stross, C. (2018, January 2). Dude, You Broke the Future! Charlie’s Diary. Retrieved from https://www​ .antipope​.org​/charlie​/ blog​-static​/2018​/01​/dude​-you​-broke​-the​-future​.html Templeton, B. (2020, September 21). What Happens To Car Insurance Rates After Self-Driving Cars? Forbes. Retrieved from https://www​.forbes​.com ​/sites​/ bradtempleton ​/2020​/09​/21​/what​-happens​-to​ -car​-insurance​-rates​-after​-self​-driving​-cars/​?sh​=2c9c4e9a5b97 The Dawn Project: In Scientific Test, Tesla “Full Self-Driving Technology Consistently Strikes ChildSized Mannequins (July 13, 2022). Dawn Project. Retrieved from https://rai2022​.umlaw​.net​/wp​ -content​/uploads​/2022​/08​/ The​_Dawn​_ Project__ ​_Tesla​_ FSD​_Test_ ​_8_​.pdf Thomasen, K. (2018). Beyond Airspace Safety: A Feminist Perspective on Drone Privacy Regulation. Canadian Journal of Law and Technology, 16, 308–338. Thomasen, K. (2020). Robots, Regulation, and the Changing Nature of Public Space. Ottowa Law Review, 51, 275–312. Thomson, J. (1976). Killing, Letting Die, and the Trolley Problem. The Monist, 59, 204–217. Twerski, A. & Henderson, J. (2009). Manufacturers’ Liability for Defective Product Designs: The Triumph of Risk-Utility. Brooklyn Law Review, 74, 1062–1108. Wade, J.W. (1974). On the nature of strict tort liability for products. Insurance Law Journal, 1974(3), 141–162. 
Wu, P., Escontrela, A. & Hafner, D. (2022, June). DayDreamer: World Models for Physical Robot Learning. Cornell University: arXiv. Retrieved from https://arxiv​.org​/pdf​/2206​.14176​.pdf Wessling, B. (2022, June 24). Cruise Hits Milestone By Charging For Robotaxis Rides in SF. The Robot Report. Retrieved from https://www​.therobotreport​.com​/cruise​-begins​-charging​-the​-public​ -for​-robotaxis​-rides/ Woo, J., Whittington, J. & Arkin, R. (2020). Urban Robotics: Achieving Autonomy in Design and Regulation of Robots and Cities. Connecticut Law Review, 52, 324–410. Zaveri, M. (2021, May 11). N.Y.P.D. Robot Dog’s Run Is Cut Short After Fierce Backlash. The New York Times. Retrieved from https://www​.nytimes​.com​/2021​/04​/28​/nyregion​/nypd​-robot​-dog​-backlash​ .html

26. Artificial intelligence and the law: can we and should we regulate AI systems? Riikka Koulu, Suvi Sankari, Hanne Hirvonen and Tatjaana Heikkinen1

1. INTRODUCTION

In November 2022, the technology company OpenAI launched ChatGPT, a conversational chatbot utilising large-scale language models and machine learning techniques. The public reception of the launch was both excited and alarmist: newspaper columns written with the chatbot's help declared the fundamental disruption of higher education, professional work, and the creative arts, while researchers framed the launch as a breakthrough in the development of artificial intelligence (AI) systems for the general public. According to some, the comprehensive public demonstration of what general-purpose AI – of the large language model type – can do may also have disrupted the EU legislator (Volpicelli, 2023). The narratives that now highlight the significance of these developments in natural language processing are the latest in a long line of similar success stories: recurring hype, in the sense of excitement about and belief in successive cycles of AI development. Similar narratives emerged in 1997, when IBM's chess-playing expert system Deep Blue claimed the first win by a computer over the reigning chess champion, Garry Kasparov, under tournament conditions.

Why are we making this comparison between a chatbot of 2022 and a chess program of 1997? For three reasons. First, through a contingent example, to demonstrate that we are currently in the middle of an intensive AI development phase. The most recent cycle of disappointment and disillusionment, known as an "AI winter" (McCorduck, 2004), which followed the previous wave of AI hype, is definitely over. This is to say that AI hype and disillusionment form a repeating pattern. We contextualise the particularities of the current AI debate in relation to the growing concern for the negative societal implications of AI deployment and the ensuing pressure to introduce new regulations to mitigate these risks – hype and disillusionment coinciding. Second, to highlight the diversity of methods, applications, and approaches that have become known as AI. AI can be many things, but once these tools are successfully adapted to everyday life, we tend to find other words to describe them, a tendency called the "AI effect" in research on the history of AI (McCorduck, 2004; Haenlein & Kaplan, 2019).

1  This research was funded by the Academy of Finland research projects ‘Potential and Boundaries of Algorithmic Transparency’ (AlgoT); ‘Before the code: Digital administration redesigned for everyone’ (DARE); and ‘Is this Public or Private? A Study on the Philosophical Foundations of European Privacy Regulation’ (POP?); as well as ‘The Automated Administration: Governance of ADM in the public sector’ (ADM-Gov) consortium project funded by Svenska litteratursällskapet i Finland r.f. (SLS). The authors wish to acknowledge the support and feedback received from colleagues at the University of Helsinki Legal Tech Lab during research and writing up the results.


Is the present focus of regulative activities really AI, or computational AI-assisted information systems? Third, to draw attention to the historical contexts and continuities in the relationship between artificial intelligence and the law. The past has shaped and informs current AI approaches, and it should also inform our understanding of the relationship between AI and law in its many forms. For example, using AI in legal contexts at the present stage of technological development is very different from predicting the next move in chess; moreover, reactively focusing on the negative effects of AI is a very different approach from the optimistic and proactive one of earlier legal scholarship on machine-simulated intelligence. To weigh and balance the related pros and cons of regulating the use of AI, both positive and negative effects require attention.

In this chapter, we hope to provide an overview of the multifaceted research endeavours and fields that have aimed to conceptualise the relationship between law and AI. As it is impossible to do justice to this broad range of prior research, dating back decades and spanning continents, we approach the topic particularly from the perspective of AI regulation, focusing on the European Union's much-debated upcoming Artificial Intelligence Act, which introduces harmonised rules for the placing on the market and the use of AI systems across sectors.2 What we find interesting, particularly in relation to the historical and methodological complexity hinted at above, is the political, legal, and technical debate about the meaning of AI. We believe this to be a useful entry point to many of the field's prevailing discussions. On one hand, it is argued in technical terms that AI cannot be defined and hence AI regulation is doomed to fail. On the other, legal certainty and a clear scope of application are necessary requirements for any new law. These argumentation patterns and tensions exemplify a stance we call the definition dilemma (for an overview, see, e.g. Martin-Bariteau & Scassa, 2021). Another tension inherent in AI regulation can be located within the regulatory principle of technological neutrality, which requires policymakers to avoid favouring or discriminating against any particular use or form of technology over others (e.g. Briglauer, Stocker & Whalley, 2020). Hence the dilemma: law regulating AI should be sufficiently neutral in terms of technology, yet exact enough to address the set of AI techniques or contexts of use deemed to be problematic. In any case, the definition dilemma may yet fade away with the creation of a legal definition by the AI Act, which will influence the AI industry as well as our conceptual understanding of AI well into the future – regardless of how well the definition succeeds in capturing the multifaceted phenomena.

At this stage, a working definition of AI is needed for this chapter. We perceive AI to be a diverse set of computational procedures or techniques (such as machine learning algorithms) that, based on data, perform tasks, to varying degrees of autonomy, that would be considered intelligent if performed by humans (Turing, 1950).3 As we will demonstrate, AI methods and approaches have changed over time, yet the concept, which relates to technology simulating human intelligence, has remained the same.
Against this backdrop, it is possible to perceive the AI Act as being targeted towards the design and use of computational techniques in general, and not just narrow regulation of the most popular contemporary AI approaches.

2  Proposal for an Artificial Intelligence Act (AIA) (COM/2021/206).
3  We build in part on Turing's (1950, p. 440) classic definition here: "AI could be identified if a human interrogating the responses from both a computer and another human were unable to determine which response was produced by the computer".

The European Union's initiative to regulate AI within the Single Market is not isolated from the global geopolitics of AI development and deployment, in which particularly the United States and China have been forerunners. One aim of the EU Commission's AIA proposal is to support the "objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence"; another is to encourage innovation.4 It is at least possible, if not even likely, that the AIA will influence AI development far beyond Europe, as companies comply with the standards set by the regulation to gain access to the economically desirable and (uniformly) regulated European market, as has happened with the General Data Protection Regulation.5 Furthermore, other jurisdictions, e.g. Canada and the United States, are also considering AI regulation, and the European developments may yet produce a baseline, if not the gold standard, for other regulatory initiatives.

Why do we need AI regulation? There is a growing consensus that the ubiquitous deployment of AI applications across societal fields has led to urgent legal problems such as algorithmic bias (Burkell, 2019), surveillance and predictive policing that target disproportionately exposed communities (Nachbar, 2020), hate speech on social media that limits the exercise of constitutional and democratic rights (Laukyte, 2023), unequal access to digital public services and healthcare (Toohey, Moore, Dart & Toohey, 2019; Veinot, Mitchell & Ancker, 2018), and insufficient liability regimes for AI-related harms (Wagner, 2019), to name but a few. None of these problems is completely new or limited to only certain societies, yet many perceive that law is challenged in new ways due to the relatively autonomous nature of many AI applications and the difficulty of exerting human control or, in turn, allocating responsibility (e.g. Barfield & Pagallo, 2020; Beckers & Teubner, 2021).

Ultimately, we hope to emphasise the importance of historical and societal contexts as well as sectoral laws and situated practices, because we fear that horizontal AI regulation is in danger of forgetting context. Within the AI regulation debate there is a danger of considering AI, like many other technological applications and the problems they have brought forth, as something inherently unprecedented and novel – something that emerges from technological progress without historical or social context. Within the social sciences, the assumption that technology is ahistorical, apolitical and value-neutral has been contested, and the political dimensions of technology and its use have been widely acknowledged and discussed (e.g. Feenberg, 2017; Mumford, 1964; Winner, 1985). Technological products affect society and its different groups in various ways, but technology as a product is shaped by human choices, interests and values, including laws (for more, see Koulu, 2021). As noted in relation to ethical guidelines for AI, "decontextualization" of technology may limit the policy debate by obscuring human behaviour, actions, and social structures as potential objects of regulation (Koulu, 2020). This stance is echoed by law and technology scholars Bennett Moses and Gollan, who fittingly argue that "it is where legal scholarship focuses closely on a particular technology that the risk of ignoring history and the broader context is greatest" (Bennett Moses & Gollan, 2016, p. 1).
In addition to documenting the current phase of legal scholarly and regulatory development for posterity, we hope to contribute to a critical discussion on the (im)possibility of regulating AI by offering a contextualisation of the current policy debate around AI in relation to earlier and current legal scholarship.

4  The AI Act (COM/2021/206), Recital 5 and, e.g., 75.
5  GDPR (679/2016); as to the companies producing products or services, the Brussels effect further relies on the inelasticity of their target and the non-divisibility of legal, technical and economic policies, according to Bradford (2012, 2020).

Our task is divided into four steps. First, we draw on prior research on AI and law to provide a historical context to the current debate on AI regulation. By looking into prior research at the intersections of AI and law, we demonstrate a shift in perspective: today's focus on the regulation of AI by law is distinct from the focus in earlier research on applying AI methods in law. Yet these perspectives could contribute a depth of understanding to one another: for example, is a clear, fixed definition of what constitutes AI possible or necessary? Why have the open philosophical and economic questions related to simulating human intelligence remained? Second, our own targeted (non-systematic) literature review demonstrated – quite intentionally – the limitations of modern database searches for building an overview of the field. Analysed critically against our expert knowledge of the field, our sample illustrates the difficulties anyone new to the field would face: the less than perfect quality, and hence limited value, of results returned by databases using certain search criteria. We nevertheless analysed our sample to recognise trends and the current state of how legal scholars define AI and what AI-related problems they perceive as requiring new regulation. Third, in light of the history and state of the art we have laid out, we discuss the definition of AI, the regulatory objectives and choices made in the European Union's AI Act, and how the European Union sought to reconcile its various objectives, from market regulation to fundamental rights, by a combination of definitions, forbidden AI practices, and requirements for high-risk AI systems. Finally, we provide some concluding remarks.

2. EARLIER RESEARCH ON THE INTERSECTIONS OF LAW AND AI

2.1 Origins of Definitional Difficulties

It is our understanding that the current discussion about law and AI has a lot to gain from looking not only forward but also backwards, to earlier research on law and technology, as this supports the endeavour to conceptualise and contextualise the challenge AI applications pose to law from a comprehensive perspective. By drawing on earlier research, we were able to shed light on changing concepts of AI and on legal responses to the increasing use of computational techniques in everyday life, which we then compared first with current scholarship on AI regulation and then with the regulatory debate within the European Union.

2.1.1 From AI methods for law to law for AI

Legal scholarship has a rich tradition of research related to machine-simulated intelligence, dating back at least to the late 1940s and Loevinger's jurimetrics, by which he referred to the statistical analysis of law (Loevinger, 1949). A point of interest is the shift in focus. While the focus of the current legal debate is on regulating AI applications across sectors to mitigate harm, pioneers in the AI and law field from the 1980s onwards pursued a different line of inquiry by focusing on the development and application of AI methods within the legal domain, with particular interest in automating legal reasoning and decision-making. Legal informatics combined law and information science perspectives to address issues related to the use of information, data storage, and retrieval (on legal informatics, see Bing, 1990; Contissa, Godano & Sartor, 2021; Pohle, 2021; Steinmüller, 1970).

Early research traditions were often optimistic about the potential of computer applications to improve law and make it more objective, reasonable, and accessible (e.g. Blume, 1990; Mehl, 1958; Popple, 1990). More recent research on algorithmic regulation has been more critical, as the negative consequences of AI deployment for the protection of legal rights have become more explicit (e.g. Yeung & Lodge, 2019). Recently, computational legal theory and research on law, technology, and society, as well as legally oriented science and technology studies, have made important contributions to conceptualising the entanglements between law and data-driven technologies (Diver, 2021; Leuenberger & Schafer, 2017; Cohen, 2019; Hildebrandt, 2016; Jasanoff, 1997). Yet, although the histories of technology and of AI are established research fields, to our knowledge historical research into law and technology, including AI, remains underdeveloped.

2.1.2 Rule-based modelling of legal reasoning

In the 1960s and 1970s, much of AI development focused on building rule-based systems in which the computer executes "if-then" commands pre-set by humans. Within law, the advocates of the rule-based approach saw its potential for modelling legal reasoning and ultimately automating decision-making. However, the application of rule-based modelling to law soon faced limitations resulting from the law's characteristics, such as Hart's description of legal concepts as having an open texture that relies on the interpretative flexibility of natural language, discretion, and human experience. Although the problem of law's open texture was recognised by the pioneers, they were optimistic about solving it through theoretical analysis of legal concepts:

The problem of making a "law machine" certainly involves a technical aspect. It will be necessary to find the type of machine capable of fulfilling this function, to determine the essential features of such a machine. However, any machine suitable for making selections will generally be suitable to a greater or lesser extent. The problem is thus essentially a theoretical and logical one. For solving it, we require more highly-evolved analysis of legal concepts than that to which we are accustomed, conducted in a different spirit, in some cases. It invites us to define new legal concepts which will combine easily and unequivocally. (Mehl, 1958, p. 758)

Time and time again, computer applications have revealed law to be more complex than anticipated. Loevinger, too, underestimated the challenge of developing computers to solve legal problems, which he believed to be already possible with the technology available in the 1940s. He considered the main obstacle to be the lack of suitable terms for computer processing of law:

Machines are now in existence which have so far imitated "thought processes" that they can solve differential equations and other "logical" operations of equal or greater complexity. The machines can be constructed to solve equations with virtually any number of variables, and with large numbers of variables the operation is much faster than when performed by the human mind. Why should not a machine be constructed to decide lawsuits? The complexity of the problems presented, measured by the number of variables involved, is well within the limits of existing machines. The difficulty is that we have no terms to put into the machines, as the scientists have numbers and symbols. (Loevinger, 1949, p. 471)
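The flavour of rule-based modelling, and of the open-texture problem that Mehl and Loevinger grappled with, can be conveyed in a few lines of code. The following is a minimal, purely illustrative sketch in Python, using Hart's well-known example of a rule forbidding vehicles in a park; the rule and the hand-coded predicate are our own simplifications for exposition, not drawn from any actual legal expert system:

# A toy "if-then" legal rule in the style of early rule-based expert systems.
# All names and cases are invented for illustration.

def is_vehicle(thing):
    # Here lies law's open texture: the rule is only as good as this
    # hand-coded classification, which the programmer must fix in advance.
    clear_cases = {"car": True, "truck": True, "pram": False}
    return clear_cases.get(thing, False)  # penumbral cases default to False

def park_rule(thing):
    # The "if-then" core of a rule-based system.
    if is_vehicle(thing):
        return thing + ": prohibited in the park"
    return thing + ": permitted in the park"

for item in ["car", "truck", "pram", "bicycle", "ambulance"]:
    print(park_rule(item))

Every hard case here (is a bicycle a vehicle, and what of an ambulance on an emergency call?) must be resolved by editing the table in advance. This is precisely the interpretative work that, as the pioneers discovered, resists exhaustive formalisation.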

From the 1980s onwards, AI and law formed its own research field, which initially focused on rule-based expert systems and the modelling of legal reasoning (Bench-Capon et al., 2021; Branting, 2000; Gardner, 1987).6 In a retrospective review of the field, Bench-Capon et al. (2012) considered one of its more important contributions to be the bringing together of rule-oriented, logic-based systems and case-based approaches.

6  See International Conference on Artificial Intelligence and Law (ICAIL). (1987). Boston, MA.

Reflecting on the field's progress across the decades, the authors attributed the increases in the availability and scope of AI applications to the development of the World Wide Web, the reduction in the cost of data storage, and the increase in computing power, yet maintained that the field stays relevant due to its interdisciplinarity, which enables information exchange across the legal and mathematical sciences (Bench-Capon et al., 2012).

2.1.3 The WWW and constitutional risks of coded architectures

The late 1990s and early 2000s signified yet another shift in law and technology scholarship, which started to focus on issues loosely labelled as cyberspace law, following the prolific rise in the use and provision of online services once the World Wide Web enabled the public to access the Internet (for an overview, see e.g. Reed, 2012). Professor Lessig provided one of the more influential examinations of the theoretical underpinnings of cyberspace in his 1999 book Code and Other Laws of Cyberspace (Lessig, 1999), in which he drew attention to the regulatory dimension of programmed architectures and to how computer code replaces other sources of regulation in online environments. Lessig made explicit the connection between code and constitutional values, while considering the emergence of cyberspace a watershed in technology regulation:

It will present the greatest threat to both liberal and libertarian ideals, as well as their greatest promise. We can build, or architect, or code cyberspace to protect values that we believe are fundamental. Or we can build, or architect, or code cyberspace to allow those values to disappear. There is no middle ground. (Lessig, 2006, p. 6)

Alongside the growing awareness of the constitutional risks, the popularisation of the internet contributed to lowering the costs of data transfer, storage, and computational processing. In turn, these developments made possible the data-driven AI approaches, such as machine learning, that are at the core of our contemporary understanding of AI.

2.1.4 Data-driven approaches to AI

Machine learning and other data-driven approaches to AI originated side by side with logic-based AI systems in the late 1950s. The focus in early machine learning was to develop self-learning machines that would improve their performance in given tasks based on data. In the early days of the field, these approaches were side-lined from mainstream AI research due to difficulties in acquiring the data and computational power needed to train the models. However, this all changed in the 2010s with big data and datafication, a term popularised by Cukier and Mayer-Schönberger to describe the large-scale translation of our everyday lives into quantifiable data for producing predictive analytics (Cukier & Mayer-Schönberger, 2013). With the expansion of data-driven AI techniques and their deployment across societies and sectors to predict and steer human behaviour, we have ended up with the current societal and legal concerns that will also shape the debates on AI regulation. As we describe in Section 4, the European Union's AI Act hopes to regulate the design and use of such data-driven computational techniques and their social consequences, which some scholars refer to as algorithmic forms of governance or as algorithmic regulation (Aneesh, 2009; Yeung & Lodge, 2019). In other words, the discussion on law and AI is fundamentally connected with broader issues of technological change in our societies and the role of law and regulation in shaping these developments.
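The contrast with the rule-based sketch above can be made equally concrete. In a data-driven approach no rule is written down in advance; instead, a parameter is estimated from labelled examples. The following toy fragment, again in Python and again only an illustration of the general idea with invented numbers, "learns" a credit-scoring threshold from past decisions rather than applying any pre-set rule:

# The simplest possible "learning from data": a decision threshold is
# estimated from labelled examples. All scores and labels are invented.

examples = [(700, "approve"), (680, "approve"), (550, "reject"),
            (610, "reject"), (720, "approve"), (590, "reject")]

def learn_threshold(data):
    # Midpoint between the lowest approved and highest rejected score:
    # the resulting "rule" is an artefact of the training data.
    approved = [score for score, label in data if label == "approve"]
    rejected = [score for score, label in data if label == "reject"]
    return (min(approved) + max(rejected)) / 2

threshold = learn_threshold(examples)   # 645.0 for the data above

def decide(score):
    return "approve" if score >= threshold else "reject"

print(decide(630), decide(660))         # reject approve

Nothing in the learned threshold explains or justifies itself, and if the historical decisions it was trained on were biased, the model reproduces that bias. This is a compact illustration of why datafication has generated the legal concerns, from algorithmic discrimination to opacity, discussed in this chapter.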

2.2 Persistent Issues of Law and Technology, or AI and Law

The examples above illustrate the relevance of prior research for today's debate on AI regulation. Despite the qualitative differences between rule-based and data-driven AI approaches, some concerns about AI have remained much the same even as our societies and legal frameworks have changed significantly.

2.2.1 AI effect

Lessons learnt from earlier research are particularly relevant for our current discussion due to the phenomenon colloquially called the "AI effect", in which the term AI refers to technological functionalities not yet achieved; once successfully deployed, they are no longer considered AI (McCorduck, 2004). Simply put, the definition of AI is a moving goalpost that changes over time (cf. Aneesh, 2009; Gillespie, 2014; Koulu, 2021). This dynamic nature of the concept of AI poses challenges for defining the scope of AI regulation. At the same time, there is an interpretative flexibility to the concept, which may support the creation of regulation that lasts the test of time and is not rendered obsolete by any individual technological advance. Understanding the changing notions, the plurality of approaches, and the different iterations of AI helps the legally oriented reader to pinpoint those social and legal implications of AI development and deployment that require regulatory intervention.

In a history of AI, McCorduck (2004) contextualised the 20th-century AI developments in relation to the long history of Western philosophical thought, which has aimed to mechanise human thinking for centuries, if not millennia. A more recent milestone and commonly accepted starting point for AI as a scientific field is the Dartmouth Summer Research Project on Artificial Intelligence in 1956, organised by the mathematician McCarthy to develop machines to simulate human intelligence, learning and problem-solving (McCarthy, Minsky, Rochester & Shannon, 1955). The summer workshop established AI at the intersections of computer science and human intelligence. This positioning made clear the philosophical dimension inherent in the endeavour to simulate human intelligence by machines.

2.2.2 Philosophical dimension to simulating human intelligence

Despite changes in methods and techniques, the core understanding of AI has remained much the same: artificial intelligence aims to simulate human intelligence in a way that enables the replacement and/or augmentation of human cognitive tasks by computer processes. The early connection between the concept of AI and philosophy contributes to the definition dilemma. As the concept of AI is connected with philosophical questions about the essence of cognition, intelligence, and the human behaviour that seems to indicate them, providing a clear-cut definition is not simply a matter of capturing rapid changes in technical methods and approaches. In his influential work on the early AI applications called legal expert systems, Susskind, too, recognised the metaphorical dimension of simulating human intelligence, finding the concept to be useful despite its limited philosophical rigour (Susskind, 1987). By defining AI through human cognition, the question of developing AI applications for law meets the philosophical difficulty of understanding the foundations of legal thinking.

2.2.3 The concern for human values

One of the long-lasting narratives is the concern that human values will be lost to a technology-driven fate beyond human control. Although authors perceive its likelihood and meaning differently, the fact that these arguments are repeated over time suggests a complicated relationship between law and technology, in which law and legal values are challenged by technological developments. Loevinger (1949) disparaged such concerns for the essence of law as being based on a false understanding of science. The law, technology, and society scholar Brownsword (2019, p. 9) summarised current scholarship by repeating similar concerns for legal values:

The rise of technological management in place of traditional legal rules might give rise to several sets of concerns. Let me briefly sketch just four kinds of concern: first, that the technology cannot be trusted, possibly leading to catastrophic consequences, secondly, that the technology will diminish our autonomy and liberty; thirdly, that the technology will have difficulty in reflecting ethical management and, indeed, might compromise conditions for any kind of moral community; and, fourthly, that it is unclear how technological management will impact on the law and whether it will comport with its values.

Many scholars have elaborated on the negative consequences of datafication, quantification, and algorithmisation from a socio-legal perspective. Hildebrandt draws attention to the need for the protection of "what is uncountable, incalculable or incomputable about individual persons" (Hildebrandt, 2019, p. 83); Yeung emphasises the intergenerational harms and effects of algorithmisation, which law is poorly equipped to address (Yeung & Lodge, 2019). However, it would be reductive to assume that only law needs to react and adapt to technological change. The relationship between law and technology is one of dynamic reciprocity, in which law shapes technological developments and is simultaneously shaped by them in continuous loops (Cohen, 2019; Koulu, 2020).

3. REVIEW OF RECENT LEGAL RESEARCH ON AI AND REGULATION

3.1 Prolific Rise in the Amount of Research

In the last section, we drew attention to the origins of law and AI research and discussed the shift in focus from AI applications for law to the regulation of AI applications through law. This shift coincides with the prolific rise in research on the legal implications of AI. In this section, we present the results of a non-systematic literature review conducted from spring to autumn 2022 to assess the current trends in legal scholarship as to AI definitions and calls for regulatory intervention. Our final dataset included 71 articles, based initially on two research database searches.7 We first narrowed down the refined dataset of 1375 articles to a sample of 150 articles by additional keywords and then complemented the sample with purposive sampling, based on our knowledge and expertise in the field, to represent seminal literature, authors, and journals missing from it.8

7  The initial search was concluded in March 2022 with the key words "artificial intelligence" AND regulation in the HeinOnline and Web of Science databases (returning some 10,000 results). The search was limited to the years 1990–2022 and to refereed articles only. The results included 1375 articles.

Artificial intelligence and the law  435 authors and journals missing from it.8 Based on abstracts, we selected 71 of the 150 articles for close reading, to find that a further 18 of the articles were irrelevant to the questions we posed. The initial searches demonstrated a literature explosion over the past few years;9 law and AI is a rapidly developing field resistant to overall synthetisation. We reluctantly noted that our initial search result reflected a classical database problem, as put by AI and law pioneer Susskind: database searches may “deliver excess of irrelevant documents and fail to produce the bulk of those relevant texts” (Susskind, 1987, p. 6), due to unsatisfactory criterion for relevance. Our review was guided by three main questions: how was artificial intelligence defined in the prior legal literature; where do scholars locate AI-related problems that require new regulation; and has current research engaged with the AIA? Answers to these questions, together with the theoretical background provided in Section 2, and the analysis of the European Union’s AI Act proposal in Section 4 will contextualise AI regulation and problems related to it. We decided not to conduct a literature review on law and AI based on established legal debates, fields and disciplines to avoid furthering disciplinary siloes and in order to follow the holistic horizontal approach that seems to inform many AI regulation initiatives (Yeung & Lodge, 2019). 3.2 Scholarship and Definition of AI As to defining AI, the foremost result of the literature review is that most often in our sample AI is not defined at all, and when it is, the definition is not clear-cut – no definition of AI in scholarship or policy document emerges as a shared point of reference.10 For example, Zuiderveen Borgesius, who argues for sector-specific further regulation, side-lines the definition of AI: “this paper sacrifices precision for readability, and uses ‘algorithmic  8  Terms used in search were: “regulating artificial intelligence”, “governing artificial intelligence”, “legislating artificial intelligence”, “AI Act”, “artificial intelligence act”, “AI policy” and “algorithm*”. To control for sample size, in the search we excluded the use of synonyms and close terms to AI and algorithms, such as intelligent systems, computer systems and automated or autonomous decision-making. Similarly, we did not use synonyms or close terms to regulation in the search to further limit the number of results. Due to the otherwise large number of hits, we decided to rely on these limited key words to present the current research at an informative level. Results were not specifically limited by publication years, but a considerable proportion of them had been published in the period 2018–2022. From the 150 articles, we selected 60 articles for further reading based on their abstracts. In addition, we searched articles from HeinOnline’s Law Journal Library (terms “artificial intelligence” AND regulat* and “PathFinder Subject: Science, Technology, and the Law”) and based on this search, we complemented the sample gathered from Web of Science, choosing 11 articles written by well-known researchers.  9  For example, typing “artificial intelligence” AND regulat* on the search bar in HeinOnline’s Law Journal Library emphasised the recent rather significant increase in AI related research, as the number of search results grew nearly tenfold between 2016 and 2022 compared to between 2010 and 2015. 
This is an indicative example from March 2022, as the results were not limited to refereed articles. However, the same rapid increase in search results in recent years was something we noted throughout our literature search.
10  Some focus on a definition narrower than AI, such as the algorithm, e.g. "we define an algorithm as a mathematical formula implemented by technology: 'a sequence of instructions that are carried out to transform the input to the output'" (Alpaydin, 2016, p. 16, as cited in Oswald, Grace, Urwin & Barnes, 2018).

This signals to us that scholarly discussion on AI and law is possible without a distinct common definition of AI. Discussing the various computational procedures bundled under the reference to AI, scholarship devotes differing degrees of attention to various issues, such as the presence of intelligence,12 data-driven approaches,13 and the distinction between data-driven AI techniques and other computer programs.14 Practical examples of AI use include "home management systems integrated into household appliances; robots; autonomous cars; unmanned aerial vehicles" (Čerka, Grigienė & Sirbikytė, 2015, p. 378). Some scholars refer directly to AI definitions adopted in various policy instruments.15 Others, like Ashraf (2022), engage in defining and distinguishing between narrow and general AI and explain narrow AI by using concepts like machine learning, deep learning, and reinforcement learning. Several sources mention and reflect on the difficulty of defining AI. Koulu (2020, p. 13), for instance, notes that "the concept of artificial intelligence (AI) is ambiguous at best", and Buiten (2019, p. 45) concludes that:

[…] in sum, there does not appear to be a clear definition of AI. The various definitions of AI used in the literature may be helpful to understand AI, but are unsuitable as a basis for new laws. While some uncertainty may be inherent to new technologies, it is problematic to centre laws and policies around the opaque concept of AI.

It seems that for scholars – as opposed to regulators – the key question in regulating AI is not to provide a comprehensive definition of AI. This approach can be considered problematic in scholarship that seeks to discuss the formulation of AI regulation and its scope – especially as the definition is something the EU legislator seems to struggle with currently.16 In sum, the articles in our sample did not really help with the AI definition dilemma, but they made it clear that the dilemma has not inhibited academic discussion.

11  For Zuiderveen Borgesius (2020, p. 1573, quoting Dourish, 2016, p. 3), an algorithm is "an abstract, formalized description of a computational procedure", and the "decision" in algorithmic decision-making refers to the output of the computational procedure.
12  Zuiderveen Borgesius (2020, p. 1574) defines AI as "the study of the design of intelligent agents", borrowing this computer science definition from Russell and Norvig (2016, p. 2), who in turn cite Poole, Mackworth & Goebel (1998, p. 1): "Computational Intelligence is the study of the design of intelligent agents".
13  E.g. "AI techniques including machine learning and deep learning are involved" (Liu, Lin & Chen, 2019, p. 135); "AI, in the forms of machine learning, voice recognition and predictive analysis" (Lui & Lamb, 2018, p. 267); "The most important contemporary instantiation of artificial intelligence (AI) is machine learning" (Hacker, 2018, p. 1143); "[…] intelligent behaviours assist the AI to resolve issues at hand by providing a decision using reasoning derived from analysis of sample data which they have been fed" (Bishop, 2006, as cited in Lee, Karim & Ngui, 2021, p. 259).
14  "AI is different from conventional computer algorithms in that it is able to train itself on the basis of its accumulated experience" (Čerka, Grigienė & Sirbikytė, 2015, p. 378). "The second wave of AI refers to computer systems that create their own rules" (Gacutan & Selvadurai, 2020, p. 195). Public policy scholars Matus and Veale (2021, p. 177) describe how "Machine learning systems, often colloquially called 'algorithms' or even 'artificial intelligence' systems, are a type of software distinguished by the way that they 'learn from experience'". According to Hacker (2018, p. 1143), "The most important contemporary instantiation of artificial intelligence (AI) is machine learning".
15  Wojtczak & Ksiezak (2021, p. 2) refer to the Independent High-Level Expert Group on Artificial Intelligence definition of AI; Svantesson (2021, p. 4) refers to the AIA, Annex I definition of AI; Gacutan and Selvadurai (2020, p. 215) refer to the OECD's definition of AI.
16  Beyond the AIA proposal, a list of AI definitions can be found in the Commission's Joint Research Centre publication (Samoili, Lopez Cobo, Delipetrev, Martinez-Plumed, Gomez Gutierrez & De Prato, 2021).

Artificial intelligence and the law  437 articles in our sample did not really help with the AI definition dilemma, but made it clear that this had not inhibited academic discussions. 3.3 Where Scholars Locate AI-Related Problems That Require New Regulation We asked our sample where scholars locate AI-related problems that require new regulation. Many AI-related regulative problems came up, yet few scholars wrote from the de lege lata perspective, seeking to suggest a solution based on existing law (for example, Hacker, 2017, examines the existing GDPR as a means to fight algorithmic discrimination). More often, the perspective was one considering the question of how to regulate AI (de lege ferenda) by suggesting a need for new legislation. However, only 14 of the relevant 53 articles in our sample clearly argued for new regulation, whereas most articles more indirectly discussed or mentioned the need for legal protection. Zuiderveen Borgesius (2020), for one, suggested a sector-specific rather than a general approach in giving additional regulation, to focus on other contexts affected by algorithmisation. Our sample did not include suggestions indigenous to scholarship on the general horizontal regulation of AI. Instead, the regulative suggestions arose from AI-related problems in alternative contexts. One was a liability and the law’s anthropocentricity. Lee, Karim, and Ngui (2021) argue that AI requires new regulation on liability, as current liability regulation based on anthropocentricity struggles with establishing causality in the AI context. They made this point on the challenge of focusing on individual persons in the context of liability, but it is also relevant in many other fields of law and can be generalised to one of the core problems in relation to the use of AI. Exerting human-centred concepts to machine action for liability can lead to a problematic practice of constructing a human scapegoat. Tamo-Larrieux (2021) explored the “unwanted side-effects” of automated decision-making (ADM) and made another important general observation by pointing out how current data protection rules that touch upon ADM focus on individuals, not on groups or collectives. According to Tamo-Larrieux (2021), with regard to AI and ADM, social harms and collectives should be considered more. However, because existing law operates around individuals’ rights and obligations instead of other or larger entities, an approach covering wider audiences, groups, and cumulated harm is much more difficult to put into action (on collectives, see also Smuha, 2021; Hakkarainen, 2021). As other courses of action, such as non-discrimination law will have only limited success with this challenge, it calls for new regulation. As to a proper regulative approach, questions of where control and power over AI use should lie were raised but scholars were not unanimous regarding these. For example, in the context of computational propaganda Dowdeswell and Goltz (2020, p. 211 [emphasis added]) propose that “power should be decentralized, and greater user control should be promoted”. In the data protection context, on the contrary, Hacker (2017, p. 285 [emphasis added]) argues that “EU data protection law suffers from an overreliance on control and rational choice that vulnerable users are unlikely to exert”. Studying platforms, De Gregorio (2018) brings up the possibility of giving public power to platforms but expects the use of it to be regulated. Thus, the reasons for and solutions to how to regulate AI seem to vary. 
However, one must keep in mind that the disciplines and exact problems vary from one article to the next. Nevertheless, old questions about power, its limits, and control mechanisms remain relevant to regulating AI.

When it comes to regulatory choices, the sample presented several different ways forward. While some mapped areas where certain AI systems should not be used at all (Oswald et al., 2018), others suggested policy-oriented rather than regulatory reform (Dowdeswell & Goltz, 2020). That is, instead of making proposals for new legislation, the course of action suggested was, for example, to "enhance media literacy" (Dowdeswell & Goltz, 2020, p. 211) to counter computational propaganda. More comprehensively, Dowdeswell and Goltz (2020, p. 211) suggest:

three organizing principles that can be used to direct regulatory responses to computational propaganda: 1) there should be informational transparency, including for algorithms and information flows; 2) power should be decentralized, and greater user control should be promoted; and 3) there should be a focus on efforts to improve media literacy, fact-checking and credibility. These principles are all linked, and together we propose that they are better able to deal with some of the specific harms of computational propaganda while also promoting democratic values, human rights, and civic engagement.

Our sample, extending to 2022, included very few articles discussing the AIA specifically. Public policy scholars Matus and Veale (2021) advocate the certification of machine learning systems as a form of private AI governance. They note that the AIA "proposes a certification scheme inspired by and connected to conformity assessment regimes for (primarily) product safety" (Matus & Veale, 2021, p. 192). In this sense, the AIA follows the same idea of certification, though unlike Matus and Veale it does not leave the choice of whether to regulate to industry self-regulation. Moreover, the suggestions by Buiten (2019), made in the context of algorithmic transparency, to focus attention on a range of aspects of algorithms (the training data, the testing of the algorithm, and the decision model chosen) instead of on AI as such seem to align with the approach of the AIA, although the AIA proposal is not mentioned in the article. The articles that did discuss the AIA did so mostly in comparative contexts. They assessed the extraterritorial impact of the AIA, whether or not their jurisdiction (Australia, the United States, China) should also regulate AI, and whether or not it should do so in the same way as the European Union is doing with the AIA (Gacutan & Selvadurai, 2020; Svantesson, 2022). The limited number of articles on the AIA may be due to timing; we expect to see more AIA-specific publications in the near future. Even though the AIA is still only an advanced legislative proposal, we next assess it in order to illustrate the precise problems the AIA aims to address and how these relate to the literature. Our literature review suggests neither a scholarly consensus on which AI-related concerns require regulation, nor a specific way to regulate AI, if regulation is considered necessary. We did not find a united front of scholars arguing for the European Union to adopt horizontal regulation such as the AIA proposal, but we did find some individual articles whose suggestions align with some central AIA ideas.

4. ATTEMPTS AT REGULATION: THE EUROPEAN UNION'S AI ACT PROPOSAL

As we hope to demonstrate throughout this chapter, when opting to regulate AI systems horizontally, finding reasonable definitions for the target “AI” and “system” becomes a core challenge. The definition determines the scope of application of the said horizontal legislation and should provide legal certainty. What AI is considered to be should at the same time be precise and

general enough to capture the complex and diverse applications and do justice to the multitude of technological approaches. The importance of technological neutrality as a regulatory technique is acknowledged and considered vital for “future-proofing” legislation. The Commission's AI Act proposal strives for a balance between technological neutrality and specificity through its regulatory architecture, where AI is defined in broad terms as software developed according to certain techniques and approaches, which are then listed and updated in one of the annexes. However, above we demonstrated that scholarship has neither needed nor suggested such a definition, and next we question whether it is central for the AIA either. For reasons explained in more detail above and below, regulating artificial intelligence has become a pressing policy question within the European Union (on policy development, see Ulnicane, 2022). In April 2021, the EU Commission introduced its proposal for AI regulation,17 following the political guidelines established by Commission President Ursula von der Leyen in 2019. Regulation of AI has been on the European Union's Digital Single Market agenda since 2017, when the European Council called for urgent action to ensure data protection, digital rights, and ethical standards.18 If the Council and the European Parliament agree on the text of the AIA and it is consequently enacted, the regulation will become applicable law in all EU Member States. The AIA is an ambitious legislative proposal. It builds on horizontal rather than sectoral regulation of AI. It contains mandatory requirements for the design and development of AI systems and harmonises controls for these products on the internal market. At the time of writing this chapter, almost two years after the proposal's introduction, work on adapting various aspects of the AIA was ongoing (for more, see Bertuzzi, 2022). Several amendments to and compromise proposals on the AIA have been published, but the fate of the AIA has not yet been decided by the European Union's co-legislators.19

17  See European Commission. (2021b). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. COM(2021) 206 final. Luxembourg: Office for Official Publications of the European Union.
18  European Council meeting 19 October 2017, conclusions (EUCO 14/17).
19  The Slovenian presidency (2021) made a partial compromise proposal, then the French presidency (2022) and, at the time of writing this, the Czech presidency (2022) released theirs (see, respectively, Council of the European Union. (2021). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts – Presidency compromise text. Interinstitutional File: 2021/0106(COD). Luxembourg: Office for Official Publications of the European Union; Council of the European Union. (2022). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts – Progress report. Interinstitutional File: 2021/0106(COD). Luxembourg: Office for Official Publications of the European Union; Council of the European Union. (2022). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts – Second Presidency compromise text. Interinstitutional File: 2021/0106(COD). Luxembourg: Office for Official Publications of the European Union).

4.1 The Three-Pronged Approach of Forbidden, High-Risk and Low or Minimal-Risk AI Systems

The AIA's chosen legislative strategy builds on a risk-based approach similar to that of the General Data Protection Regulation (679/2016) but departs from it, first, in its legal basis in the Treaty on the Functioning of the European Union (TFEU) and, second, in its rejection of the GDPR's blanket regulation approach, in which all data processing activities are subjected to the same requirements. Whereas the legal basis for the GDPR is Art. 16 TFEU on protecting personal data, the AIA's legal basis is in part Art. 16 TFEU but mainly Art. 114 TFEU on a functioning internal market. Different legal bases suggest different main purposes of legislation. Both instruments are part of the Commission's Digital Single Market package, but unlike the GDPR, the AIA approach largely follows that of existing EU product safety and market surveillance legislation (including the “new legislative framework”, NLF).20 Instead of blanket regulation, the AIA proposal establishes obligations depending on the categorisation of the risk level of the AI system at hand: it differentiates between AI uses that create unacceptable risk, high risk, and low or minimal risk. Certain manipulative AI uses are entirely prohibited as posing unacceptable risk, whereas high-risk applications (products) need to comply with new mandatory requirements on documentation, transparency, and human oversight21 before they are permitted on the internal market. In contrast, low-risk AI applications that fall outside these obligations can follow voluntary codes of conduct (i.e. industry self-regulation). However, as to its scope of application, this is a blanket proposal: it covers all AI systems22 and is arguably an instrument of maximum harmonisation (Veale & Zuiderveen Borgesius, 2021), even though it sets no specific requirements for low or minimal-risk AI systems. On one hand, should maximum harmonisation be at hand, the Member States could no longer regulate any class of AI systems. On the other, should the AIA not be an instrument of maximum harmonisation, it would not maximally prevent market fragmentation (especially as regards low and minimal-risk AI systems). This is the general aspect of the scope of application of the AIA: which emerging technology or machine-based systems fall within its scope, and who has the power to regulate them or not? A specific aspect of the AIA's scope of application is the risk-based classification internal to the AIA. Many of the high-risk applications concern uses of public, democratic or political importance, such as critical infrastructure, education, public services, law enforcement, and the administration of justice.23 In other words, the proposal touches upon key areas of public life where AI systems are already in use. In practice, determining the scope of application of the AIA proposal – which rules apply, product by product – can be problematic not just for the

20  NLF is the European Union's internal market approach to ensuring the conformity of products with applicable EU legislation when they are placed on the EU market; the approach was rebranded in 2008, having been known as the “new approach” since the 1980s.
21  See European Commission. (2021b). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. COM(2021) 206 final.
Luxembourg: Office for Official Publications of the European Union.
22  The AI Act (COM(2021) 206 final, Art. 1) “lays down: harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union”.
23  See European Commission. (2021b). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. COM(2021) 206 final. Luxembourg: Office for Official Publications of the European Union, Annex III.

difficulties related to defining what AI is, but also for deciding which AI systems should be considered high-risk or not.

4.2 AIA Objectives: Protecting Fundamental Rights and Facilitating Investment, Innovation and Development in the Single Market

For an instrument whose name states that it is “laying down harmonised rules on artificial intelligence”, the AIA proposal does not directly seek to address many of the problems the literature has identified as caused or exacerbated by the increasing use of algorithms or AI. The approach is more indirect, regulating the development, marketing and use of AI: the proposal is largely directed at the designers and manufacturers of non-forbidden types of AI systems. The goal seems to be to encourage addressing risks at the source. This means that the effects, and especially unwanted effects, on society and legally protected rights would already be considered at the stage of developing the technology (the “by design” ideology). With the AIA, the Commission essentially seeks to set standards for AI use within the European Union and, to an extent, the world.24 The explanatory memorandum of the AIA proposal explicitly identifies four objectives:

•  ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
•  ensure legal certainty to facilitate investment and innovation in AI;
•  enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
•  facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.25

The objectives of the AIA proposal emphasise both the competitiveness of the internal market and European values (i.e. fundamental rights and freedoms) in the design and use of AI systems. Similar to the General Data Protection Regulation (GDPR), the AIA aims to further rights protection and prompt worldwide standards, and it entails the possibility of imposing fines for non-compliance.26 Specifically on protecting fundamental rights, the AIA proposal's explanatory memorandum states that it “aims to address various sources of risks through a clearly defined risk-based approach”.27 The means to this end listed in the AIA's preamble is market regulation: to

24  On “the Brussels effect”, i.e. the extraterritorial effects of EU regulation outside the European Union even though that regulation is not unilaterally directed at third countries, see Bradford (2020).
25  See European Commission. (2021b). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. COM(2021) 206 final. Luxembourg: Office for Official Publications of the European Union, p. 3.
26  See European Commission. (2021b). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. COM(2021) 206 final. Luxembourg: Office for Official Publications of the European Union, Arts 71–72.
27  See European Commission. (2021b). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. COM(2021) 206 final. Luxembourg: Office for Official Publications of the European Union, p. 11.

improve the functioning of the internal market; pursue a number of overriding reasons of public interest (protection of health, safety and fundamental rights); and ensure the free movement of AI-based goods and services across borders.28 The means chosen follow from the logic and tradition of the AIA's chosen legal basis (Art. 114 TFEU), i.e. the European Union's legislative competence, rather than from purely societal needs, historical experience or academic research. Many of the human or fundamental rights protecting elements of the AIA proposal may concern its justification more than its content, bordering on window-dressing and conveying a false sense of tangible rights protection (lacking an individual complaint mechanism). However, considered as a means to that end, the AIA nevertheless will protect the public interest directly, by forbidding certain AI systems, and indirectly, by regulating the development, marketing and use of AI. By comparison, the AIA's legally binding prohibitions (unlike many other approaches around the globe, it explicitly bans certain AI systems; see Geist, 2021) and requirements (for high-risk AI systems) are certainly a stronger form of protecting fundamental rights than non-binding ethics guidelines and policy recommendations on AI.29 The effectiveness of this protection hinges on the effective enforcement of the AIA, which, in turn, may prove problematic. Considering the aim of protecting fundamental rights, a regulatory architecture based mainly on industry self-assessment against harmonised standards and common specifications (NLF), while lacking any direct complaint or redress mechanisms for individuals, may seem odd (for further criticism, see Smuha, 2021; Smuha et al., 2021; Veale & Zuiderveen Borgesius, 2021). However, viewing the AIA as a piece of EU product safety and market surveillance legislation, it is less peculiar. The NLF regulatory approach builds on increasing compliance with requirements, such as existing law, through industry standards, whether mandatory or voluntary. In a similar way, corporate social responsibility, sustainability and areas of environmental law operate with more or less voluntary standards and certificates intended to increase compliance (see, e.g. Matus & Veale, 2021; Smuha, 2021). In this sense, considering that the AIA proposal regulates the development, marketing and use of AI30 – which in turn is intended to ensure that AI systems are safe and lawful – the NLF as an approach may fit. The AIA cannot protect the world from harm caused by AI, but it can go some way towards increasing compliance with existing law. However, the required harmonised standards or common specifications to be developed by the Commission, as well as voluntary industry codes of conduct, mostly do not yet exist (see McFadden, Jones, Taylor & Osborn, 2021; Veale & Zuiderveen Borgesius, 2021), and effectively legislating through private standard-setting bodies or industry self-regulation suffers from a democratic deficit (see, e.g. Cantero Gamito & Micklitz, 2020; Van Gestel & Micklitz, 2013). Several further points of criticism remain. First, the relative ineffectiveness of the earlier new approach or NLF in protecting the internal market from non-compliant products is

28  See European Commission. (2021b). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. COM(2021) 206 final. Luxembourg: Office for Official Publications of the European Union, e.g. Recital 1.
29  See High-Level Expert Group (HLEG) on AI, 2019b, 2019c.
30  See European Commission. (2021b). Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. COM(2021) 206 final. Luxembourg: Office for Official Publications of the European Union, e.g. Recital 1.

Artificial intelligence and the law  443 well-established31 and has not escaped the Commission.32 Second, the adverse societal effects (on human rights, democracy, rule of law) of developing, marketing and using AI systems are difficult to conceptualise: hence such harm is hard to pinpoint or quantify, and address, which has been criticised by legal scholarship (Smuha, 2021). This dissonance has not escaped the legislator either.33 Risk for the AIA relates only to classifying the object of regulation (risky AI systems) and harm relates only to individual or collective harm. Moreover, the AIA does not allocate responsibility for more than individual harm (Smuha, 2021). Resolving these problems is not listed in the AIA proposal as its objectives. To conclude, the AIA proposal seems to provide an example of what Bennett Moses described as technology being used as justification for new regulation, where the mixed rationales for regulating (market failure, rights protection, social solidarity and democratic governance) have little or nothing to do with technology (Bennett Moses, 2016). To a large extent, the AIA seeks to enhance enforcement by requiring existing law also to be observed in the context of AI development and deployment. To an extent, the definition of AI and harmonised or industry standards share a challenge: they may require constant change as technology develops.

31  The proportion of non-compliant products has varied from 5% to 53% of products in different market segments, according to the Commission impact assessment inception document on the initiative “Internal Market for Goods – Enforcement and Compliance”, 13 May 2016, based on several inspections and studies cited; see European Commission. (2017). Proposal for a Regulation of the European Parliament and of the Council laying down rules and procedures for compliance with and enforcement of Union harmonisation legislation on products and amending Regulations (EU) No 305/2011, (EU) No 528/2012, (EU) 2016/424, (EU) 2016/425, (EU) 2016/426 and (EU) 2017/1369 of the European Parliament and of the Council, and Directives 2004/42/EC, 2009/48/EC, 2010/35/EU, 2013/29/EU, 2013/53/EU, 2014/28/EU, 2014/29/EU, 2014/30/EU, 2014/31/EU, 2014/32/EU, 2014/33/EU, 2014/34/EU, 2014/35/EU, 2014/53/EU, 2014/68/EU and 2014/90/EU of the European Parliament and of the Council. COM (2017) 795 final. Luxembourg: Office for Official Publications of the European Union.
32  See European Commission. (2017). Proposal for a Regulation of the European Parliament and of the Council laying down rules and procedures for compliance with and enforcement of Union harmonisation legislation on products and amending Regulations (EU) No 305/2011, (EU) No 528/2012, (EU) 2016/424, (EU) 2016/425, (EU) 2016/426 and (EU) 2017/1369 of the European Parliament and of the Council, and Directives 2004/42/EC, 2009/48/EC, 2010/35/EU, 2013/29/EU, 2013/53/EU, 2014/28/EU, 2014/29/EU, 2014/30/EU, 2014/31/EU, 2014/32/EU, 2014/33/EU, 2014/34/EU, 2014/35/EU, 2014/53/EU, 2014/68/EU and 2014/90/EU of the European Parliament and of the Council. COM (2017) 795 final. Luxembourg: Office for Official Publications of the European Union and European Commission. (2015). Commission Staff Working Document, A Single Market Strategy for Europe – Analysis and Evidence, Accompanying the document, Upgrading the Single Market: more opportunities for people and business. SWD(2015) 202 final. Luxembourg: Office for Official Publications of the European Union.
33  One of the critiques of the Commission's independent Regulatory Scrutiny Board (RSB) on the regulatory impact assessment of the AIA proposal was that “it remains difficult to quantify the real magnitude of the risks to fundamental rights”; see European Commission. (2021a). Commission Staff Working Document, Impact Assessment, Accompanying the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. SWD(2021) 85 final. Luxembourg: Office for Official Publications of the European Union.

4.3 Solving the AI Definition Dilemma in the AIA?

The definition of AI systems in the AIA proposal has undergone several rounds of changes, which are visible in the Czech Council Presidency's compromise text.34 The AIA proposal's definition of an AI system differs from those presented in the legal literature (see above) as well as from the updated definition of AI systems provided by the European Union's High-Level Expert Group on AI set up by the Commission.35 Much ink has been spilled producing newspaper and scholarly articles debating the definition dilemma. Among others, we suggested above that pinning down a clear definition of AI or AI system is necessary in order to decide the scope of application, which in turn secures more adequate enforcement. However, we also suggest that the definition dilemma can be approached as a false dilemma. First, and in general, the law has dealt with vague definitions before, with more or less success. The law will manage, and clarity will increase through interpretation of the law. With the AIA in particular, once it passes, the regulatory design might still include giving the Commission the power to amend the technical definition of AI systems via implementing acts, taking account of market and technological development, as Article 4 of the original Commission proposal suggested. In that case, the AIA would include a general, more permanent definition as well as more technologically precise and more easily adaptable implementing acts defining AI systems. Second, and more specifically, it may not matter all that much what the exact definition of AI or an AI system is, if the main purpose of the AIA is understood as ensuring that AI systems comply with existing law and, additionally, with certain requirements on documenting the development of a system and on complying with product standards. To decide whether an AI system is prohibited, high-risk or neither (hence determining the applicability of the AIA and regulatory authority), even our own definition of AI for this chapter might suffice: “We perceive AI as a diverse set of different computational procedures or techniques (such as machine learning algorithms) that based on data perform tasks to varying degree of autonomy that would be considered intelligent if performed by humans”. What really matters, also for the AIA, is the use of that technology – for good or for bad – as the recent ChatGPT-fuelled discussion points out (Helberger & Diakopoulos, 2023).

5. CONCLUSIONS

Above, we have, first, provided the reader with an overview of the historical and current developments in research on law and AI and, second, described how the upcoming EU AIA intends to bring the design and use of AI systems under a harmonised legal framework. With our literature review, we drew attention to the growing scholarly interest in AI and law as well as to the limitations of database searches in gaining an understanding of the field. Notions of AI

34  AIA, Council of the European Union, 2022b, Art. 3.
35  “Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions”. High-Level Expert Group (HLEG) on AI, 2019a, p. 6 (footnote omitted).

have changed over time, the focus of legal research has shifted, and current research reflects a broad understanding of AI as computational procedures and techniques. In a context in which the main goal of the new legislation is to increase compliance with existing law without hindering innovation or competitiveness, the vagueness of central concepts such as AI and AI systems – the definition dilemma – is perhaps not a welcome discovery for all. However, for law, it is not necessarily a problem in need of a solution either. Historically, law is well equipped to work with vague concepts, even in the context of law and AI. In terms of research, instead of focusing on the definition dilemma, present AI and law research might benefit from a stronger focus on the social and legal contexts in which computational techniques are embedded. The legislator will provide a definition for AI systems. Regardless of the content of that definition, the law will provide mechanisms to guide how computational techniques can be exploited and instructions on developing AI. Empirical legal research is required to analyse how well regulating AI systems works in action, in the interaction between law, technology, and society.36

REFERENCES

Alpaydin, E. (2016). Machine Learning: The New AI. Cambridge, MA: MIT Press.
Aneesh, A. (2009). Global labor: Algocratic modes of organization. Sociological Theory, 27(4), 347–370.
Ashraf, C. (2022). Exploring the impacts of artificial intelligence on freedom of religion or belief online. International Journal of Human Rights, 26(5), 757–791. Retrieved from https://doi.org/10.1080/13642987.2021.1968376
Barfield, W. & Pagallo, U. (2020). Advanced Introduction to Law and Artificial Intelligence. Cheltenham: Edward Elgar.
Beckers, A. & Teubner, G. (2021). Three Liability Regimes for Artificial Intelligence: Algorithmic Actants, Hybrids, Crowds. Oxford: Hart Publishing.
Bench-Capon, T.J.M., Araszkiewicz, M., Ashley, K.D., Atkinson, K., Bex, F., Borges, F., Bourcier, D., Bourgine, P., Conrad, J.G., Francesconi, E., Gordon, T.F., Governatori, G., Leidner, J.L., Lewis, D.D., Loui, R.P., McCarthy, L.T., Prakken, H., Schilder, F., Schweighofer, E., Thompson, P., Tyrrell, A., Verheij, B., Walton, D.N. & Wyner, A.Z. (2012). A history of AI and Law in 50 papers: 25 years of the international conference on AI and Law. Artificial Intelligence and Law, 20(3), 215–319.
Bennett Moses, L. (2016). Regulating in the face of sociotechnical change. In R. Brownsword, E. Scotford & K. Yeung (Eds.). Oxford Handbook of Law, Regulation and Technology (pp. 573–596). Oxford: Oxford University Press.
Bennett Moses, L. & Gollan, N. (2016). The illusion of newness: The importance of history in understanding the law-technology interface. In A. George (Ed.). Flashpoints: Changing Paradigms in Intellectual Property and Technology Law. New Orleans, LA. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2697311
Bertuzzi, L. (2022, June 22). Czech Presidency sets out path for AI Act discussions. Euractiv. Retrieved from https://www.euractiv.com/section/digital/news/czech-presidency-sets-out-path-for-ai-act-discussions/

36  In addition, we searched for articles in HeinOnline's Law Journal Library (terms “artificial intelligence” AND regulat* and “PathFinder Subject: Science, Technology, and the Law”) and, based on this search, we complemented the sample gathered from Web of Science, choosing 11 articles written by well-known researchers.

Bing, J. (1990). Three generations of computerised systems in public administration and some implications for legal decision-making. Ratio Juris, 3(2), 219–236. Retrieved from https://doi.org/10.1111/j.1467-9337.1990.tb00059.x
Bishop, C. (2006). Pattern Recognition and Machine Learning. New York: Springer.
Blume, P. (1990). The communication of legal rules. Statute Law Review, 11(3), 189–210. Retrieved from https://doi.org/10.1093/slr/11.3.189
Bradford, A. (2012). The Brussels effect. Northwestern University Law Review, 107(1), 1–68.
Bradford, A. (2020). The Brussels Effect: How the European Union Rules the World. Oxford: Oxford University Press.
Branting, K. (2000). Reasoning with Rules and Precedents: A Computational Model of Legal Analysis. Dordrecht, Netherlands: Springer.
Briglauer, W., Stocker, V. & Whalley, J. (2020). Public policy targets in EU broadband markets: The role of technological neutrality. Telecommunications Policy, 44. Retrieved from https://doi.org/10.1016/j.telpol.2019.101908
Brownsword, R. (2019). Law, Technology, and Society: Re-Imagining the Regulatory Environment. Oxon: Routledge.
Buiten, M.C. (2019). Towards intelligent regulation of artificial intelligence. European Journal of Risk Regulation, 10(1), 41–59. Retrieved from https://doi.org/10.1017/err.2019.8
Burkel, J. (2019). The Challenges of Algorithmic Bias. Working paper, Law Society of Ontario Special Lectures. Ontario: The University of Western Ontario. Retrieved from https://ajcact.openum.ca/files/sites/160/2020/08/The-Challenges-of-Algorithmic-Bias-.pdf
Cantero Gamito, M. & Micklitz, H.-W. (Eds.). (2020). The Role of the EU in Transnational Legal Ordering: Standards, Contracts and Codes. Cheltenham: Edward Elgar.
Čerka, P., Grigienė, J. & Sirbikytė, G. (2015). Liability for damages caused by artificial intelligence. Computer Law & Security Review, 31(3), 376–389. Retrieved from https://doi.org/10.1016/j.clsr.2015.03.008
Cohen, J. (2019). Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford: Oxford University Press.
Contissa, G., Godano, F. & Sartor, G. (2021). Computation, cybernetics and the law at the origins of legal informatics. In S. Chiodo & V. Schiaffonati (Eds.). Italian Philosophy of Technology (pp. 91–110). Cham: Springer.
Cukier, K. & Mayer-Schoenberger, V. (2013). The rise of big data: how it's changing the way we think about the world. Foreign Affairs, 92(3), 28–40. Retrieved from https://www.jstor.org/stable/pdf/23526834.pdf
De Gregorio, G. (2018). From constitutional freedoms to the power of the platforms: Protecting fundamental rights online in the algorithmic society. European Journal of Legal Studies, 11(2), 65–103. Retrieved from https://ssrn.com/abstract=3365106
Diver, L. (2021). Digisprudence: The design of legitimate code. Law, Innovation & Technology, 13(2), 325–354. Retrieved from https://doi.org/10.1080/17579961.2021.1977217
Dourish, P. (2016). Algorithms and their others: Algorithmic culture in context. Big Data & Society, 3(2), 1–11. Retrieved from https://doi.org/10.1177/2053951716665128
Dowdeswell, T.L. & Goltz, N. (2020). The clash of empires: Regulating technological threats to civil society. Information & Communications Technology Law, 29(2), 194–217. Retrieved from https://doi.org/10.1080/13600834.2020.1735060
Feenberg, A. (2017). Critical theory of technology and STS. Thesis Eleven, 138(1), 3–12. Retrieved from https://doi.org/10.1177/0725513616689388
Gacutan, J. & Selvadurai, N. (2020). A statutory right to explanation for decisions generated using artificial intelligence. International Journal of Law and Information Technology, 28(3), 193–216. Retrieved from https://doi.org/10.1093/ijlit/eaaa016
Gardner, A. (1987). An Artificial Intelligence Approach to Legal Reasoning. Cambridge, MA: MIT Press.
Geist, M. (2021). AI and international regulation. In F. Martin-Bariteau & T. Scassa (Eds.). Artificial Intelligence and the Law in Canada (pp. 367–395). Toronto: LexisNexis.
Gillespie, T. (2014). The Relevance of Algorithms. Media Technologies 167. Cambridge, MA: MIT Press.

Hacker, P. (2017). Personal data, exploitative contracts, and algorithmic fairness: Autonomous vehicles meet the internet of things. International Data Privacy Law, 7(4), 266–286. Retrieved from https://doi.org/10.1093/idpl/ipx014
Hacker, P. (2018). Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law. Common Market Law Review, 55(5), 1143–1186. Retrieved from https://doi.org/10.54648/cola2018095
Haenlein, M. & Kaplan, A. (2019). A brief history of artificial intelligence. California Management Review, 61(4), 5–14. Retrieved from https://doi.org/10.1177/0008125619864925
Hakkarainen, J. (2021). Naming something collective does not make it so: algorithmic discrimination and access to justice. Internet Policy Review, 10(4). Retrieved from https://doi.org/10.14763/2021.4.1600
Helberger, N. & Diakopoulos, N. (2023). ChatGPT and the AI Act. Internet Policy Review, 12(1). Retrieved from https://doi.org/10.14763/2023.1.1682
High-Level Expert Group (HLEG) on AI. (2019a). A Definition of AI: Main Capabilities and Scientific Disciplines. Retrieved from https://digital-strategy.ec.europa.eu/en/library/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines
High-Level Expert Group (HLEG) on AI. (2019b). Ethics Guidelines for Trustworthy AI, 8 April 2019. Retrieved from https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
High-Level Expert Group (HLEG) on AI. (2019c). Policy and Investment Recommendations for Trustworthy AI, 26 June 2019. Retrieved from https://digital-strategy.ec.europa.eu/en/library/policy-and-investment-recommendations-trustworthy-artificial-intelligence
Hildebrandt, M. (2016). Smart Technologies and the End(s) of Law. Cheltenham: Edward Elgar.
Hildebrandt, M. (2019). Privacy as protection of the incomputable self: From agnostic to agonistic machine learning. Theoretical Inquiries in Law, 21(1), 83–121. Retrieved from https://www.degruyter.com/document/doi/10.1515/til-2019-0004/html?lang=en
Jasanoff, S. (1997). Science at the Bar: Law, Science and Technology in America. Cambridge, MA: Harvard University Press.
Koulu, R. (2020). Human control over automation: EU policy and AI ethics. European Journal of Legal Studies, 12(1), 9–46. Retrieved from https://doi.org/10.2924/EJLS.2019.019
Koulu, R. (2021). Crafting digital transparency: Implementing legal values into algorithmic design. Critical Analysis of Law, 8(1), 81–100.
Laukyte, M. (2023). Artificial intelligence and hate speech. In Minorities, Free Speech and the Internet. London & New York: Routledge.
Lee, Z.Y., Karim, M.E. & Ngui, K. (2021). Deep learning artificial intelligence and the law of causation: Application, challenges and solutions. Information & Communications Technology Law, 30(3), 255–282. Retrieved from https://doi.org/10.1080/13600834.2021.1890678
Lessig, L. (1999). Code and Other Laws of Cyberspace. New York: Basic Books.
Lessig, L. (2006). Code and Other Laws of Cyberspace. Version 2.0. New York: Basic Books.
Leuenberger, S. & Schafer, B. (2017). The whole truth about the law: Reasoning about exceptions in legal AI. In 20th International Legal Informatics Symposium (IRIS 2017), Salzburg, Austria, 23–25 February 2017 (pp. 131–138). Austrian Computer Association.
Liu, H., Lin, C. & Chen, Y. (2019). Beyond State v Loomis: Artificial intelligence, government algorithmization and accountability. International Journal of Law and Information Technology, 27(2), 122–141. Retrieved from https://doi.org/10.1093/ijlit/eaz001
Loevinger, L. (1949). Jurimetrics – The Next Step Forward. Minnesota Law Review, 33(5), 455–493.
Lui, A. & Lamb, G.W. (2018). Artificial intelligence and augmented intelligence collaboration: regaining trust and confidence in the financial sector. Information & Communications Technology Law, 27(3), 267–283. Retrieved from https://doi.org/10.1080/13600834.2018.1488659
Martin-Bariteau, F. & Scassa, T. (Eds.). (2021). Artificial Intelligence and the Law in Canada. Toronto: LexisNexis.
Matus, K.J.M. & Veale, M. (2021). Certification systems for machine learning: Lessons from sustainability. Regulation & Governance, 16(1), 177–196. Retrieved from https://doi.org/10.1111/rego.12417
McCarthy, J., Minsky, M.L., Rochester, N. & Shannon, C.E. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955.

McCorduck, P. (2004). Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. Natick, MA: A. K. Peters.
McFadden, M., Jones, K., Taylor, E. & Osborn, G. (2021). Harmonising Artificial Intelligence: The Role of Standards in the EU AI Regulation. Oxford: Oxford Internet Institute.
Mehl, L. (1958). Automation in the legal world: From the machine processing of legal information to the ‘law machine’. In Mechanisation of Thought Processes: Proceedings of a Symposium Held at the National Physical Laboratory on 24th, 25th, 26th and 27th November 1958 (Vol. II, pp. 755–787). London: Her Majesty's Stationery Office.
Mumford, L. (1964). Authoritarian and democratic technics. Technology and Culture, 5(1), 1–8.
Nachbar, T. (2020). Algorithmic fairness, algorithmic discrimination. Virginia Public Law and Legal Theory Research Paper, 11.
Oswald, M., Grace, J., Urwin, S. & Barnes, G.C. (2018). Algorithmic risk assessment policing models: lessons from the Durham HART model and ‘Experimental’ proportionality. Information & Communications Technology Law, 27(2), 223–250. Retrieved from https://doi.org/10.1080/13600834.2018.1458455
Pohle, J. (2021). “Eine juristische Disziplin der Zukunft” – An der Schnittstelle von Recht und Informatik. In J. Pohle & K. Lenk (Eds.). Der Weg in die “Digitalisierung” der Gesellschaft. Was können wir aus der Geschichte der Informatik lernen? (pp. 263–294). Marburg: Metropolis.
Poole, D.L., Mackworth, A.K. & Goebel, R. (1998). Computational Intelligence: A Logical Approach. New York, NY: Oxford University Press.
Popple, J. (1990). Legal Expert Systems: The Inadequacy of a Rule-Based Approach. Paper presented at the 13th Australian Computer Science Conference (ACSC-13), Monash University, Melbourne, 7–9 February 1990. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2542072
Reed, C. (2012). Making Laws for Cyberspace. Oxford: Oxford University Press.
Russell, S.J. & Norvig, P. (2016). Artificial Intelligence: A Modern Approach (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
Samoili, S., Lopez Cobo, M., Delipetrev, B., Martinez-Plumed, F., Gomez Gutierrez, E. & De Prato, G. (2021). AI Watch. Defining Artificial Intelligence 2.0. Luxembourg: Publications Office of the European Union. Retrieved from https://doi.org/10.2760/019901
Smuha, N.A. (2021). Beyond the individual: governing AI's societal harm. Internet Policy Review, 10(3), 1–31. Retrieved from https://doi.org/10.14763/2021.3.1574
Smuha, N.A., Ahmed-Rengers, E., Harkens, A., Li, W., MacLaren, J., Piselli, R. & Yeung, K. (2021). How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission's Proposal for an Artificial Intelligence Act. SSRN. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3899991
Steinmüller, W. (1970). EDV und Recht – Einführung in die Rechtsinformatik. Berlin: J. Schweitzer Verlag.
Susskind, R. (1987). Expert Systems in Law: A Jurisprudential Inquiry. Oxford: Clarendon Press.
Svantesson, D. (2022). The European Union Artificial Intelligence Act: Potential implications for Australia. Alternative Law Journal, 47(1), 4–9. Retrieved from https://doi.org/10.1177/1037969X211052339
Tamo-Larrieux, A. (2021). Decision-making by machines: Is the ‘Law of Everything’ enough? Computer Law & Security Review, 41, 1–20. Retrieved from https://doi.org/10.1016/j.clsr.2021.105541
Toohey, L., Moore, M., Dart, K. & Toohey, D. (2019). Meeting the access to civil justice challenge: digital inclusion, algorithmic justice, and human-centered design. Macquarie Law Journal, 2019(19), 133–156.
Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. Retrieved from https://doi.org/10.1093/mind/LIX.236.433
Ulnicane, I. (2022). Artificial intelligence in the European Union: Policy, ethics and regulation. In T. Hoerber, G. Weber & I. Cabras (Eds.). The Routledge Handbook of European Integrations (pp. 254–269). London & New York: Routledge.
Van Gestel, R. & Micklitz, H.-W. (2013). European integration through standardization: How judicial review is breaking down the club house of private standardization bodies. Common Market Law Review, 50(1), 145–181.
Veale, M. & Zuiderveen Borgesius, F.J. (2021). Demystifying the draft EU Artificial Intelligence Act – Analysing the good, the bad, and the unclear elements of the proposed approach. Computer Law Review International, 22(4), 97–112.

Veinot, T.C., Mitchell, H. & Ancker, J.S. (2018). Good intentions are not enough: How informatics interventions can worsen inequality. Journal of the American Medical Informatics Association, 25(8), 1080–1088.
Volpicelli, G. (2023). ‘ChatGPT Broke the EU Plan to Regulate AI’ Politico 3.3.2023. Retrieved from https://www.politico.eu/article/eu-plan-regulate-chatgpt-openai-artificial-intelligence-act/
Wagner, B. (2019). Liable, but not in control? Ensuring meaningful human agency in automated decision-making systems. Policy & Internet, 11(1), 104–122. Retrieved from https://doi.org/10.1002/poi3.198
Winner, L. (1985). Do artifacts have politics? In D. MacKenzie & J. Wajcman (Eds.). The Social Shaping of Technology (pp. 121–136). Buckingham: Open University Press.
Wojtczak, S. & Ksiezak, P. (2021). Causation in civil law and the problems of transparency in AI. European Review of Private Law, 29(4), 561–582. Retrieved from http://dx.doi.org/10.54648/ERPL2021030
Yeung, K. & Lodge, M. (2019). Algorithmic Regulation. Oxford: Oxford University Press.
Zuiderveen Borgesius, F.J. (2020). Strengthening legal protection against discrimination by algorithms and artificial intelligence. The International Journal of Human Rights, 24(10), 1572–1593. Retrieved from https://doi.org/10.1080/13642987.2020.1743976

27. Machine learning and law
Andrzej Porębski1

1. INTRODUCTION

In recent years, machine learning has achieved several triumphs. These achievements are by no means limited to sensational events, such as the 2011 victory of the widely used machine learning system Watson on the Jeopardy! television quiz show (Ferrucci, 2012), the 2016 defeat of the masters of the extremely difficult game of Go by the deep-learning-based AlphaGo programme (Silver et al., 2017) or the 2022 approval of a conditionally autonomous (level 3, see SAE International, 2021) Mercedes S-Class car by Germany (Harley, 2022). Above all, the achievement of machine learning is a general trend that clearly indicates the rapid development of this class of technology (Maslej et al., 2023; Zhang et al., 2022; Pugliese, Regondi & Marini, 2021; Deloitte, 2019; Anthony, 2021). At the same time, the machine learning approach can be used in a wide range of fields (e.g. Sarker, 2021). Therefore, unsurprisingly, machine-learning-based solutions are increasingly being developed in the legal domain2 (Montelongo & Becker, 2020), both by legal scholars for academic purposes and by external companies for use by legal practitioners and law application institutions. Machine learning is, in fact, a highly advantageous technology with enormous potential, as evidenced by both its sensational successes and its rapid deployment. Today, the products of machine learning algorithms are (often unknowingly) used by almost everyone accessing search engines (Nayak, 2022) or taking photos with a smartphone (Morikawa et al., 2021; Tsai & Pandey, 2020). However, the high potential and widespread use of machine learning may give the misleading impression that the technology has almost unlimited possibilities and can quickly automate or improve various activities, including in the legal domain. A considerable number of myths, sometimes very far-fetched, have emerged around machine learning (see, e.g. de Saint Laurent, 2018; Floridi, 2020; Natale & Ballatore, 2020).3 These affect the quality of the ongoing discussion about the possibilities and framework of its application, the regulation of machine learning and other related issues.

1  This research was funded by the National Science Centre, Poland, and is the result of research project no. 2022/45/N/HS5/00871. The publication has been supported by a grant from the Doctoral School in the Social Sciences under the Strategic Programme Excellence Initiative at Jagiellonian University.
2  When I use the term “legal domain” in this chapter, I have in mind both legal scholarship and the practice of law in its broadest sense.
3  In this text, I will radically avoid the term “artificial intelligence” (except for the name of the “eXplainable Artificial Intelligence” trend) because of its vagueness, the heterogeneity of its definitions and the problem related to its perception in non-technical circles: it bears more reference to science fiction and pop culture than to the actual state of technological development. However, it is important to be aware that technologies referred to as artificial intelligence will very often be based on machine learning. The denotation of the term “machine learning” will either be subordinate to the denotation of the term “artificial intelligence” or overlap with it. Therefore, in practice, many of the myths about artificial intelligence will project themselves onto machine learning.


One may be concerned that many legal scholars ignore the seriousness of the topic and overlook the need for an intensified discussion about how machine learning should be used in the legal domain and which uses would be detrimental to it. This is probably due to the frequent belief that humans are irreplaceable in the execution of the law and that law is that particular area of human activity to which algorithmisation has no access. These are misconceptions. Earlier, I cited examples of the penetration of machine learning into spheres of life that only a few decades ago seemed completely inaccessible to autonomous machines. These examples should serve as a warning to those lawyers who would like to believe that the deeper involvement of this technology in the legal domain will remain only in the realm of theory. They should also prompt discussion of this topic early enough that lawyers are not powerless when the technology in question starts to be implemented on a (too) large scale in the application of law. To balance the different perspectives on the application of machine learning in the legal domain and to avoid misconceptions about this technology, this chapter begins by providing a general overview of machine learning. Selected applications in the legal domain (and other reviews of these applications) are then highlighted. Finally, a detailed overview of widely discussed problems relating to machine learning and law – (in)comprehensibility and lack of transparency – is presented. It should be noted that the selection of the problems and technologies described here is non-exhaustive, as neither can be exhaustively discussed in a short review. However, I have tried to select the referenced themes and problems in such a way that they reflect the most relevant areas of discussion on this topic.

2. MACHINE LEARNING – AN INFORMAL INTRODUCTION

“Machine learning” is an ambiguous term that typically refers to a field of computer science focused on creating programmes directly based on empirical data, which enable the prediction of states of affairs in cases not covered by the data (cf. e.g. Janiesch, Zschech & Heinrich, 2021; Subasi, 2020, pp. 91–202; Lee, 2019). It can also refer to the group of technologies that the field is developing. In the machine learning paradigm, a computer programme, as it were, follows from the data. This means that its creation does not involve programmers simply defining a priori all computational functions, logic rules and relationships between variables (as is the case in traditional programming). Instead, the programme is created (or, at least, concretised) based on data and an automated algorithm for processing them (a machine learning algorithm), which allows optimal computational functions to be found and inference rules to be induced. The programme created by machine learning is, thus, based on data because its parameters are derived from data. Such a programme can (potentially) automatically update its computational functions, “tuning” itself to new (changed) data. It can even “learn” to behave effectively in a predefined environment of its operation by autonomously generating data from its interactions with the environment.4

4  This is evident in reinforcement learning, which will not be discussed in this chapter, as it is the most difficult and least used branch of machine learning algorithms in the legal domain; a practical presentation of reinforcement learning is provided by Winder (2021), and Lapan (2020) can help teach the basics of such methods.
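To make the contrast between traditional programming and the machine learning paradigm concrete, consider a minimal sketch in Python (an illustration only, assuming the scikit-learn library; the dataset and all names are invented for this example):

```python
# A minimal sketch: a hand-written rule vs a rule induced from data.
# The toy data are entirely invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [monthly income in thousands, number of existing debts];
# label: 1 = loan repaid, 0 = loan defaulted.
X = [[55, 1], [20, 4], [75, 0], [30, 3], [60, 2], [15, 5]]
y = [1, 0, 1, 0, 1, 0]

# Traditional programming: the rule is defined a priori by the programmer.
def handcrafted_rule(income, debts):
    return 1 if income > 40 and debts < 3 else 0

# Machine learning: the rule's parameters are derived from the data.
model = LogisticRegression().fit(X, y)
print(model.coef_, model.intercept_)   # parameters induced from the data
print(handcrafted_rule(45, 2))         # rule fixed a priori
print(model.predict([[45, 2]])[0])     # generalising to a case not in the data
```

Here the learned coefficients play the role of the parameters “derived from data” described above; beyond the choice of model class, no decision rule was typed in by hand. As discussed below, however, the human choices of variables, data and model class still constrain what can be learned.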

Therefore, what features of machine learning make it a popular choice? Why is it worth using this technology? In the most general terms, machine learning allows one to discover the structure of the data, particularly the relationships between the variables considered, and to generalise the resulting conclusions. Thus, machine learning enables one to achieve objectives that would be difficult to achieve using classical methods, which often require starting from extensive knowledge of the phenomenon, e.g. a ready-made model of it. For instance, if we wanted to use classical programming methods to create a programme that recognises images, for a satisfactory result we would have to define top-down a model that includes many formally captured graphical features of images representing a given object. It would be necessary to define the features of the sets of pixels that the programme would treat as indicative of the presence of, for instance, an eggplant in the image and then do the same for all objects to be recognised. Such creation of an exhaustive set of if-then rules (if a certain set of pixels is present, then treat it as an eggplant) is impossible, as there are no established generalised rules for inferring that an object can be seen (the appearance of eggplants may differ based on lighting, maturity, size, etc.). When machine learning is applied, such a task can be performed without a top-down model, by “learning” to recognise images based on a categorised (described) and appropriately large set of them. Thus, machine learning makes it possible, based on empirical data, to arrive at rules that one would not be able to come up with and give to the programme a priori. This is why image recognition technologies have only recently become widespread: they began to be based on machine learning. The objectives (or tasks) that machine learning can serve are very different. However, usually, they will be a more or less sophisticated form of one of the following four:

•  Classification: determining whether an object belongs to a given group (“is this object X or Y?”); predicting a label. Example: determining whether a person will be able to repay a loan.
•  Regression: estimating the value of a numerical characteristic of some object (“how much of something is there?”); predicting a quantity. Example: predicting the inflation rate in the following month.
•  Clustering: grouping a data set, dividing it into mutually dissimilar subsets that are internally as homogeneous as possible (“how are the objects under study organised?”). Example: dividing a store's customers into groups with similar shopping preferences.
•  Other data processing oriented towards the discovery of data specificity: for instance, (1) simplifying the data while preserving maximum information value (dimensionality reduction) or (2) detecting outliers to eliminate them or highlight their presence (anomaly detection).5 For instance, (1) finding, in a large database of products including hundreds of their features, a few product quality indicators that best summarise the information (dimensionality reduction example) and (2) finding the strangest, most unusual transactions in a banking system for further scrutiny for potential fraud (anomaly detection example).

A code sketch of these four task types follows below.

5  Some might say that this stage is not a separate goal. However, sometimes, the discovery of a simpler data structure or outlier observations can be a goal in itself, such as in fraud detection (Vanhoeyveld, Martens & Peeters, 2020).
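The sketch is illustrative only: it assumes Python's scikit-learn library and synthetic data, and the commented examples echo those given above:

```python
# A schematic sketch of the four task types on synthetic data.
from sklearn.datasets import make_blobs, make_classification, make_regression
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

# 1. Classification: predicting a label ("will this person repay the loan?").
Xc, yc = make_classification(n_samples=200, n_features=5, random_state=0)
classifier = LogisticRegression().fit(Xc, yc)

# 2. Regression: predicting a quantity ("next month's inflation rate").
Xr, yr = make_regression(n_samples=200, n_features=5, random_state=0)
regressor = LinearRegression().fit(Xr, yr)

# 3. Clustering: grouping similar objects ("customer segments"); no labels given.
Xb, _ = make_blobs(n_samples=200, centers=3, random_state=0)
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xb)

# 4. Discovering data specificity: dimensionality reduction ("a few summary
#    indicators") and anomaly detection ("the most unusual transactions").
indicators = PCA(n_components=2).fit_transform(Xc)
flags = IsolationForest(random_state=0).fit_predict(Xc)  # -1 marks an outlier
```

Note that the first two tasks require labelled data (yc, yr), whereas the last two operate on the features alone, which anticipates the distinction between supervised and unsupervised learning drawn below.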

How are these tasks carried out? They are performed through automated operations on a collected dataset (empirical data), aimed at creating a model (i.e. a formal record of the relationships between variables) that fits their structure (i.e. is confirmed by the existing data and is matched to them) and can be generalised (i.e. will fit and function well not only for the existing data but also for other, new data concerning the issue at hand). In practice, the general form of the model (i.e. the fact that some dependencies are present between certain variables) is predefined, and empirical data are used to determine its detailed form (to concretise the strength and direction of the proposed dependencies) and possibly test the correctness of the proposed general form (e.g. by recognising that the dependency between certain variables is so weak as to be negligible and, thus, does not deserve to be included in the assumed form of the model). Having written that, I would like to make it explicit that any intuitions about the general lack of human influence in the process of creating machine learning models (“the computer learns by itself”) are wrong. Machine learning is a way of identifying (albeit often in a way that is not transparent to the user, see Section 4.1) what relationships between variables arise from the data held and making practical use of this information. It cannot abstract from certain assumptions. Even highly automated approaches to model development that will automatically select variables, consider different classes of models, compare them with each other and select the best one (automated machine learning, AutoML) are based on a number of assumptions (Hutter, Kotthoff & Vanschoren, 2019). These assumptions concern, e.g. which variables are included, how the data were prepared, which classes of models are considered and by which criteria the models are compared. Even if, from a certain point in the process, the subsequent model-building steps are automated, key decisions are made by humans because they both create and constrain the space of possibilities in which the algorithm operates. The “intelligence” of machine learning is, therefore, a myth if it is taken to mean going beyond a given set of possibilities. It is true, however, that some classes of machine learning (particularly deep learning) can analyse and search a given set of possibilities extremely thoroughly. Now, consider, hypothetically, legal examples of how machine learning might be applied. Imagine Mr Jones, a barrister who once worked in data science, supplementing his current legal practice with his hobby, i.e. the application of machine learning. In the course of his work, he has painstakingly collected and stored data on criminal cases handled by himself and his partners in a spreadsheet. In doing so, he created a database of many thousands of cases characterised in terms of several dozen variables. This database is his empirical data. Mr Jones has included numerous variables in the data, of which it is possible to identify the key ones for him and his clients (those whose values they would most like to know in the future): type of sentence (qualitative variable, acquittal vs conviction), type of punishment (qualitative variable, imprisonment vs restriction vs fine) and amount of sentence (quantitative variable, the amount of fine imposed or the number of months of restriction or imprisonment). The other variables convey information about the various circumstances of the case, e.g.
who was the judge, what (potential) crime the case concerns and the characteristics of the defendant. Mr Jones is a person with an eminently pragmatic approach to the law, recognising that, as various features of litigation processes are observable, data may be employed to predict them. Mr Jones concludes that he can use his database to create machine learning models aimed at solving different problems (performing different tasks). The first possible task is predicting whether a conviction or acquittal will occur under the circumstances and, if conviction, the type of punishment to be imposed (an example of a

454  Research handbook on law and technology classification task).6 Another task is to predict how severe the punishment of a given type will be for an object characterised by certain characteristics: what will be the amount of the fine or how many months of imprisonment will be imposed (an example of a regression task). Importantly, in these tasks, at the model-building stage, a set of pairs is the core concern. Such pairs are composed as follows: a set of data describing a certain object (features or predictors; in this case, it is the characteristics of a certain trial, the variables describing it; statistically speaking: explanatory variables or independent variables) and the desired value of the predicted feature of this object (label; in this case, e.g. the fact of acquittal/conviction; statistically speaking: response variable or dependent variable). The label for the previously observed data is known, but in model applications, it will not be known; it will have to be predicted from the remaining data. Teaching of such models can, therefore, be considered as providing the algorithm with data and instructing it, such as for the first trial under consideration, characterised by the values of the variables x1,1, x1,2, ... you should return the value y1: acquittal; for the second trial under consideration, characterised by the values of the variables x2,1, x2,2, ... you should return the value y2: conviction and so on for all n trials. Such learning is called supervised learning, and supervision is precisely about “telling” the algorithm what label (what output) is expected when certain features (inputs) occur. By including many (n) such pairs of inputs (features, i.e. descriptions of objects) and outputs (labels, characterising the values of the variables to be predicted) in the learning process, the model created becomes adapted to the varied data and capable of determining the expected output values (labels) in different cases. The algorithm seeks to maximise the level of correspondence between the created model and the data. It does so by optimising the model parameters to achieve the most favourable values of the established fit measure (e.g. maximising the goodness of fit measure or minimising the error measure), varying according to the model and the methods used to build it. Supervised machine learning models can be of different quality, depending on, e.g. what class they are (what type of algorithm they are using), what data they are based on and how long they have been trained. Various measures of the quality of such models exist, but their general assumption is very simple: in a classification task, the better the model is, the more accurately it identifies objects as belonging to the correct (real-world) class; in a regression task, the better the model is, the closer its estimates of variables are to the correct (real-world) values. Mr Jones can, therefore, focus on creating machine learning models aimed at predicting specific, distinguished variables, which will be based on a set of known variable values (a supervised learning model). However, what is important to him is not only the predictions that can be obtained but also the categorisation of the data he already has and an understanding 6  More precisely, in a classification task, usually the first step is to estimate the probability values of individual events (e.g. belonging to a particular category). 
Then, the so-called cut-off thresholds are determined, and based on them, the system concludes that the object belongs to a certain category. A probability of belonging to a certain category higher than 50% does not necessarily result in an object being classified in that category (and conversely, a probability lower than 50% does not necessarily exclude its classification in the corresponding category). Sometimes, the risks associated with decisions (e.g. declaring someone guilty or declaring a person fit to leave the hospital when they were ill with a very serious disease) can require very careful decision-making about assignments to such sensitive categories (“guilty”; “to be discharged from the hospital”). In such use cases, the system can be tuned so that only when the probability of belonging to such groups is very high will it make such decisions.
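To make these mechanics concrete, here is a minimal sketch of a supervised classification model with probability estimates and a cautious cut-off threshold, written with the scikit-learn library. The data, variable names and thresholds are invented for illustration; this is not Mr Jones’s actual dataset or any production system.

```python
# A minimal, purely illustrative sketch of supervised classification with
# probability estimates and a cautious cut-off threshold (see footnote 6).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Synthetic stand-in for a case database: five encoded case features per
# trial; label 1 = conviction, 0 = acquittal.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Held-out data let us check that the model generalises beyond the cases
# it was trained on.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("accuracy on new cases:", accuracy_score(y_test, model.predict(X_test)))

# Step 1: estimate class probabilities rather than hard labels.
p_conviction = model.predict_proba(X_test)[:, 1]

# Step 2: apply a cut-off. For a sensitive category, the threshold can be
# set far above 50% so the label is assigned only under high confidence.
print("convictions at 0.5 cut-off:", int((p_conviction >= 0.5).sum()))
print("convictions at 0.9 cut-off:", int((p_conviction >= 0.9).sum()))
```

The accuracy figure printed on the held-out test set is an instance of the quality measures for classification discussed in the main text.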

Human curiosity also makes Mr Jones want to distinguish, out of the thousands of cases he has collected, the most unusual ones that deviate from the average (anomaly/outlier detection task). Finally, the number of process-descriptive variables included in the data sometimes seems overwhelming to Mr Jones. He is, thus, keen to know if it can be reduced to a smaller number of variables summarising the pre-existing ones without losing too much information (dimensionality reduction task). When Mr Jones builds models to solve the problems indicated, he will not use supervised learning, because in these cases the model is not expected to determine an adequate output value (i.e. an adequate label) from the input values. Appropriate processing of the input data is sufficient to gain a better understanding of their structure, organisation, or complexity. For such tasks, distinguishing labels and comparing them pairwise with features is not necessary, which is why such learning is termed unsupervised and the data used unlabelled. Unsupervised machine learning detects patterns in data by computing appropriate metrics on the data directly (e.g. distances in clustering) or subjecting the data to algebraic transformations (e.g. singular value decomposition in dimensionality reduction). The dataset is analysed without reference to variables external to it; thus, no “supervision” is performed on the algorithm.

Supervised learning is undoubtedly the most widespread group of machine learning methods (Loukides, 2021, pp. 10–11), and it currently seems to be the most relevant for most legal applications (see Section 3). Thus, it is worth mentioning the main classes of supervised machine learning models. For the purpose of this text, they can be divided into “simpler” and “more difficult” ones, according to whether the result of the algorithm can be understood without further transformations, whether the algorithm itself can be understood, whether the calculations made can be traced and whether a human could hypothetically perform the entire calculation. This division will be used in Section 4 of the text. “Simpler” model classes include regression models, such as linear (Schroeder, Sjoquist & Stephan, 2017), logit (Menard, 2010), probit (Garson, 2012) and negative binomial models (Hilbe, 2011); decision trees (Rokach & Maimon, 2014); Bayesian models, such as the naïve Bayes classifier (Rish, 2001) and Bayesian networks (Koski & Noble, 2009); and generalised additive models (Wood, 2017). “More difficult” classes of models include primarily the different types of artificial neural networks (Gurney, 1997; Aggarwal, 2018); multiple classifier systems, such as random forests (Genuer & Poggi, 2020); and support vector machines (Cristianini & Shawe-Taylor, 2000). Gareth, Witten, Hastie and Tibshirani (2017) provided a brief and relatively informal description of the “simpler” methods, while a more technical description, which includes some of the “more difficult” methods and which remains concise, can be found in Hastie, Tibshirani and Friedman (2009).

Unsupervised learning algorithms are more difficult to use in the most crucial legal applications but still hold promise. They are also less common than supervised learning, but most professional data science practitioners encounter them anyway (Loukides, 2021, pp. 10–11). It is, therefore, worth citing some of the most basic ones, those related to clustering and dimensionality reduction. Among the algorithms for clustering, the hierarchical and combinatorial algorithms are the most common. In machine learning, due to the large size of the data, combinatorial methods are much more typically used. Among the most popular hierarchical methods is Ward’s agglomerative method (Porębski, 2021), and among the combinatorial ones, the most commonly used is the k-means method (Wu, 2012). A very sound, albeit heavily formal and technical, study of the topic is presented by Wierzchoń and Kłopotek (2018). The key methods for dimensionality reduction are principal component analysis and exploratory factor analysis. To understand dimensionality reduction, the work of Jollifee (2002) on principal component analysis should be consulted, as the methods are similar (although they should not be equated, see Jollifee, 2002, pp. 150–166). Additionally, it is worth referring to a group of unsupervised learning methods not included in the earlier description, but often very useful, namely association rule learning (Kumbhare & Chobe, 2014).
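As a concrete illustration of these unsupervised methods, the sketch below applies k-means clustering and principal component analysis with scikit-learn. All data and parameter choices (numbers of cases, variables, clusters and components) are synthetic assumptions made purely for illustration.

```python
# A minimal, purely illustrative sketch of two unsupervised tasks:
# k-means clustering and dimensionality reduction via PCA. No labels are
# used at any point -- the algorithms work on the feature values alone.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(seed=0)
X = rng.normal(size=(5000, 30))  # e.g. 5,000 cases, 30 descriptive variables

# Clustering: partition the cases into, say, four groups of similar cases,
# based only on distances between their feature values.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
cluster_labels = kmeans.fit_predict(X)
print("cases per cluster:", np.bincount(cluster_labels))

# Dimensionality reduction: summarise the 30 variables with 5 components
# while keeping track of how much information (variance) is retained.
pca = PCA(n_components=5).fit(X)
X_reduced = pca.transform(X)
print("variance retained:", round(float(pca.explained_variance_ratio_.sum()), 2))
```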

3. SELECTED APPLICATIONS OF MACHINE LEARNING IN THE LEGAL DOMAIN

Following a discussion of the characteristics of machine learning technology, an overview of its applications in the legal domain will be presented. This section is not an exhaustive overview. Rather, it outlines the field of applications with concrete examples instead of providing a detailed exploration of all commercially and non-commercially available legal applications of the technology. For comprehensive literature or market reviews of machine learning applications in the legal domain, the following works can be consulted: Surden’s general reviews (2014, 2019, 2021); a review of programs used in criminal justice in European countries (Fair Trials, 2021), which is unique for its comprehensiveness; a text by Sil, Roy, Bhushan and Majumdar (2019), which reviews more than a dozen programs; Reiling’s (2020) court application-oriented review; a text by Bansal, Sharma and Singh (2019) on deep learning applications; a review by Rosili et al. (2021), which covers applications focused on judgement prediction; a bibliometric review by Montelongo and Becker (2020), providing a rich bibliography of articles on deep learning in the legal domain; and a detailed description by Stern, Liebman, Roberts and Wang (2021) of a Chinese experiment with big data and machine learning, which, although not intended as a literature review, can serve as a rich source of knowledge about various ideas for the application of machine learning and related technologies in the Chinese judiciary.

3.1 COMPAS and Other Risk Assessment Systems

Probably the best-known example in the Western world of the use of machine learning models in the legal domain is Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) (Equivant, 2019), which is used by the courts to determine how likely an offender is to re-offend. Hence, the output of the COMPAS system can significantly influence sentencing. This programme has received extensive consideration in the literature (among countless others, e.g. Dressel & Farid, 2018; Angwin, Larson, Mattu & Kirchner, 2016, with polemics in Dieterich, Mendoza and Brennan (2016)), some of it strictly critical (e.g. Schwerzmann, 2021). Some works have directly referred to the problems of algorithmic bias (e.g. Angwin et al., 2016) or variously understood transparency. For instance, Rudin, Wang and Coker (2020) explicitly stated that the biggest problem with COMPAS is its lack of transparency (the “black box” problem) and that it is, therefore, irrelevant to focus on a discussion about its fairness (this work has been heavily criticised by Jackson and Mendoza (2020)). In turn, Angelino, Larus-Stone, Alabi, Seltzer and Rudin (2018) proposed an algorithm called Certifiably Optimal RulE ListS (CORELS), which had similar accuracy to COMPAS but, according to the authors, full interpretability. This shows that the main issues discussed when it comes to the practical application of machine learning in the legal domain are transparency and bias. It should be emphasised that COMPAS is in some ways the most influential such system in legal scholarship and is, therefore, described here, but it is not the only realistically applicable system of its kind. For instance, the Offender Assessment System (OASys) used in the United Kingdom by prison and probation services (Moore, 2015) is based on a number of – sometimes surprisingly simple (and seemingly overfitted to the learning data and thus poorly generalisable) – supervised learning models. Another example is the system used by the Dutch police for risk profiling of children, the ProKid 12-SI, which has faced criticism (La Fors, 2020). However, the literature on these programs is much sparser and has much less impact, presumably because they operate at a non-judicial level and can be perceived as “for internal use”. It is not necessarily a good thing that such applications of machine learning escape global scholarly attention, as their analyses can provide just as important lessons and cautions about the framework for using machine learning as the most high-profile applications (such as COMPAS) can. Finally, although COMPAS and related systems have been presented here as a separate phenomenon, such software needs to be looked at in the broader context of the application of machine learning known as predictive justice. Let us now take a broader look at this stream.

3.2 Predictive Justice

Predictive justice could be described as the use of models (usually machine learning models) to predict and possibly automate or assist in judicial decisions (Khanna, 2021; Scherer, 2019; Queudot & Meurs, 2018). For obvious reasons, the automation of certain legal decisions seems extremely attractive: the enormous efficiency and resource savings it would entail bring the hope that decision-making in the law application context would become efficient and that decision-makers would finally have adequate time to reflect on more difficult cases. These obvious hopes are matched by equally obvious concerns: fairness, the de-individualised approach, etc. The topic of predictive justice is generating discussion on constitutional and human rights aspects (Brenner et al., 2020; Završnik, 2020). There are even extreme approaches to the use of machine learning models designed to predict judgements around the world: on the one hand, the very enthusiastic approach in China, which is potentially dangerous to human rights, primarily due to the weakening of the judicial authority (Stern et al., 2021; Papagianneas, 2022), and, on the other, the ban on the creation of such models by private entities passed in 2019 in France (Artificial Lawyer, 2019). The ban led to a vigorous discussion, including strongly critical comments, for instance, suggesting that this ban undermines judicial accountability (Schonander, 2019). What is worth bearing in mind when thinking about predictive justice is an inherent limitation of machine learning: the process of learning is retrospective, not prospective. As these types of programs learn from data, they need to have new data delivered to them all the time to stay up-to-date with newly decided cases and new lines of jurisprudence. This means that it is currently difficult to imagine that the work of judges can be fully abandoned in favour of “the work of algorithms” (a limitation the sketch below illustrates).
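The retrospective character of machine learning can be made concrete with a hedged sketch: a model is trained on cases decided under one (invented) decision-making practice and then applied after that practice changes. All data, features and numbers below are synthetic assumptions, not a model of any real court.

```python
# A minimal, purely illustrative sketch of why retrospective learning
# matters: a model fitted to past decisions keeps reproducing them after
# the decision-making practice has changed.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(seed=0)

# Old practice: outcomes are (by construction) driven by feature 0.
X_old = rng.normal(size=(2000, 3))
y_old = (X_old[:, 0] > 0).astype(int)
model = LogisticRegression().fit(X_old, y_old)

# New practice: outcomes are now driven by feature 1 instead.
X_new = rng.normal(size=(2000, 3))
y_new = (X_new[:, 1] > 0).astype(int)

print("accuracy on old-practice cases:",
      accuracy_score(y_old, model.predict(X_old)))  # high
print("accuracy on new-practice cases:",
      accuracy_score(y_new, model.predict(X_new)))  # close to chance
```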
In simplistic terms, abandoning the work of judges entirely would lead to the indefinite reproduction of adjudications replicating those on which the algorithms were trained, even if circumstances were to change. I am afraid that some discussions of hypothetical judicial automation completely ignore this essential feature of machine learning, a feature that creates the need for new data, a benchmark of sorts, thereby checking or preventing automation from going too far (even if, hypothetically, all other factors would make it possible). Additionally, related to the limitation under discussion is the inability of machine learning to satisfactorily solve hard cases (cases that are atypical in specific legal contexts, see, e.g. Zygmunt (2020)). Extensive past data on hard cases do not exist; this is the essence of why such cases are termed “hard”. For hard cases, machine learning models would, therefore, have no sensible basis from which to infer.

There are also less obvious concepts of using predictive justice to assist decision-makers, not at the level of doing part of the analysis for them (by predicting the expected or typical behaviour of any of the actors) but at the level of increasing the reliability and fairness of decision-makers and preventing inconsistent decisions. Quintessential examples have been provided by Chen (2019) and Bell, Hong, McKeown and Voss (2021). Essentially, in both papers, the idea is to note that machine learning models can detect a particularly strong role for some non-legal factor in the decision-making process (i.e. a strong influence on the response variable, high predictive ability). Such a situation will indicate a systematic error or a potential bias in the decision-maker and, thus, provide an opportunity to signal the need for either reflection or reconsideration. In other words, if it emerges that the mere gender or nationality of an applicant allows the accurate prediction of the outcome of cases in which, according to the law, these variables should not count, then something is wrong. The possibility of such a completely different use of machine learning shows the vast potential this technology has.

Finally, some of the research can be attributed to predictive justice methodology. For instance, due to the characteristics of the European Court of Human Rights’ judgements, which are very attractive for natural language processing purposes (a clear and established structure of documents, high availability of data and a limited and known set of legal bases), many models predicting the Court’s judgements have been developed (e.g. Aletras, Tsarapatsanis, Preoţiuc-Pietro & Lampos, 2016; Medvedeva, Vols & Wieling, 2020). Interestingly, the average accuracy (i.e. percentage of correct classifications) of some models can reach 75–80%, and in some cases (e.g. for a specific article), it can exceed 80%. Such research also provides insights deeper than solely the performance of models; for instance, Medvedeva et al.’s (2020) model, constructed using only one feature, the composition of the court, reaches 66% accuracy. This accuracy rate can be linked to the strength of the influence of the judges on the ruling (if the accuracy of such a model were 100%, this would indicate extreme ideological dissent or bias on the part of the judges). By appropriately using machine learning (specifically, interpreting the reasons for the quality of different models), it is, thus, possible to investigate things far beyond the question “Can x be predicted?”

3.3 Legal Practice Assistance

Machine learning is also at the heart of numerous programmes that serve as an aid to the practice of law. Most commonly, these programmes, such as Lawgeex (2023) or Luminance (2023), are used to assist in working on documents, particularly contract analysis. Other applications, e.g. Westlaw Edge (Thomson Reuters, 2023), are also available.
These applications process written natural language queries, find the requested information and produce a legal analysis report, which may also include the predicted outcomes. Blue J Tax (2023), on the other hand, is used to indicate the most likely outcome and show related cases in tax law.
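Under the hood, both the judgment-prediction studies cited in Section 3.2 and many document-analysis tools rest on the same basic pattern: text is converted into numerical features and passed to a supervised model. The sketch below is a deliberately simplified, hypothetical illustration of that pattern; the documents, labels and pipeline choices are invented, and commercial systems are far more elaborate.

```python
# A minimal, purely illustrative text-classification pipeline: documents
# are turned into TF-IDF features and fed to a classifier that predicts
# an outcome label. Real systems train on thousands of labelled documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

documents = [
    "The applicant alleges a violation of the right to a fair trial.",
    "The application concerning the length of proceedings is inadmissible.",
    "The court finds a violation of the prohibition of inhuman treatment.",
    "The complaint is manifestly ill-founded and is rejected.",
]
outcomes = [1, 0, 1, 0]  # invented labels: 1 = violation found, 0 = none

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(documents, outcomes)

# Once fitted, the pipeline can score a new, unseen document.
print(pipeline.predict(["The applicant complains of an unfair hearing."]))
```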

The latest development in machine learning that may find application in legal practice is generative pre-trained transformer (GPT) models. The most prominent example of the use of these models is ChatGPT (OpenAI, 2022). ChatGPT has the potential to be widely used by legal professionals due to its ability to execute natural language input instructions, including answering questions relating to professional issues and creating document content based on specifications provided by the user. Unfortunately, due to its characteristics, it can present content that is perfectly convincing at the syntactic level yet complete nonsense at the semantic level, as Kubacka (2022) demonstrated. However, this nonsense is written in such cogent language that sometimes only professionals will be able to verify it and not be convinced that it is true (Kubacka, 2022). This causes ChatGPT to create content that will be perceived by many as smarter and more correct than it actually is (Bogost, 2022). Such content, which has a significantly lower semantic quality relative to its high syntactic quality, is a considerable threat in legal practice. Therefore, it is important not to become overly enthusiastic about the use of such technologies by lawyers. However, the potential of the technologies described and their suitability for professional use may change over the next few years or even the next few months.7

7  Please note that this chapter was written in 2022 and slightly updated in the first quarter of 2023. ChatGPT was presented by OpenAI on 30 November 2022, so at the time of writing the final version of the chapter, only preprints and popular science articles can be found on the relationship between ChatGPT and the legal context, not academic journal articles. At the time of writing this chapter, the most comprehensive study of the relationship between generative models and the law is an issue of The Practice (Center on the Legal Profession, 2023). However, this situation will have changed very rapidly by the time this chapter is published. Note also that the most recent OpenAI work is a published implementation of the GPT-4 model (OpenAI, 14 March 2023), which also supports user input in the form of images and very long text. (The ChatGPT described here is based on the GPT-3.5 model at the time of writing.) At the time of completion of this text (March 2023), the capabilities and limitations of GPT-4 are not yet well described, but its ability to, for instance, summarise long documents seems at least promising for legal practice. The fact that this text needed updating twice during the final stage of its preparation (due to the introduction of ChatGPT and GPT-4) is a great illustration of the progress being made in the field of machine learning.
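The basic generative mechanism behind such models can be illustrated with a hedged sketch. The snippet below uses the Hugging Face transformers library and the small open GPT-2 model, chosen here purely as an accessible stand-in; it is emphatically not the model behind ChatGPT, and the prompt is invented.

```python
# A minimal, purely illustrative sketch of text generation: the model
# produces a fluent continuation of a prompt, with no guarantee whatsoever
# of legal accuracy -- the output must be verified by a professional.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Under the Convention, the right to a fair trial requires"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])  # fluent, but possibly nonsense
```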

4. “BLACK BOXNESS”, UNDERSTANDABILITY PROBLEMS AND TRENDS ADDRESSING THESE ISSUES

One of the key problems with using certain classes of machine learning models is referred to as a “black box”. It is the inability to trace the reasons why the system returned a particular result, even if the ways of constructing it are known. Models belonging to these classes will be considered “non-transparent”.8 In opposition to them stand “transparent” models, i.e. models whose way of arriving at a conclusion can be traced and which are themselves understandable. Importantly, even if a model is not transparent, it may be possible to ascertain that it is highly effective. Thus, there may exist models that are known to accurately predict reality, but it is not clear how they arrive at the correct conclusions, and their operation is not understandable.

8  Note that the term “non-transparent” is sometimes also used for technologies whose exact way of working could in principle be traced, but information about which is kept secret by their creators. For instance, the COMPAS system mentioned earlier is probably based on an algorithmically transparent class of models, but precise information about its operation has not been made available by the company creating the technology (i.e. Equivant). This problem makes COMPAS a “black box” from an external point of view, i.e. if one is not a member of the organisation that produced it and is not acquainted with its documentation, they will not know how the programme works.

Conversely, in both social and legal contexts, there is often an expectation that the decision-maker will be able to provide reasons for the decision. This expectation will be particularly clear when the decisions being made have a significant social impact (this is how the need for explanatory machine learning solutions is argued for by Watson and Floridi (2021)), as is the case in the legal application context. When a judge makes a judgement or an official makes a decision, they will often be required to justify it, even if only orally. A tension, therefore, arises between what is expected of those applying the law and the characteristics of the computer models that could support or replace their work. The legal scholarship literature explicitly points to the need for explainability (Deeks, 2019) and the problems raised by the “black box” nature of some systems (Bathaee, 2018, pp. 919–921). To some extent, this need can be addressed by the “eXplainable Artificial Intelligence” (XAI) movement. The movement aims to research and develop techniques that allow for the construction of models that, even if not transparent, can be explained, making them comprehensible (Doran, Schulz & Besold, 2018; Gunning et al., 2019). In other words, this subfield of research focuses on exploring how to make the resulting logical and statistical models as understandable as possible for humans. However, what does it mean for a system to be understandable, transparent or explainable? Again, the best way to address this question is to refer to the XAI movement, as it addresses not only technical issues about how to construct new, more transparent or explainable and, thus, more understandable models but also theoretical and methodological issues related to understanding and measuring particular key XAI concepts. A comprehensive overview of XAI research can be found in several reviews (Guidotti, Monreale, Turini, Pedreschi & Giannotti, 2019; Longo, Goebel, Lecue, Kieseberg & Holzinger, 2020; Arrieta et al., 2020; Hanif, Zhang & Wood, 2021; Adadi & Berrada, 2018; Islam, Ahmed, Barua & Begum, 2022). A more technical description of some XAI methods is provided by Biecek and Burzykowski (2021). A model is understandable when a human can understand how it works. A model is transparent when, without using additional techniques, it can be considered understandable. A model is explainable when, using certain techniques, its operation can be clarified, making it more understandable (in a specific context, in particular for a specific audience (Arrieta et al., 2020), etc.). Each of these definitions assumes a conceptual spectrum rather than a binary nature. Each definition is also modifiable: the different possibilities of definition and the interdependence of the concepts discussed are well demonstrated by van den Berg and Kuiper (2020) and by Bellucci, Delestre, Malandain and Zanni-Merk (2021). XAI typically divides understandable machine learning solutions into groups of ante-hoc explainable (transparent) and post-hoc explainable models. The latter can be explained using model-agnostic or model-specific techniques (see, e.g. a taxonomy based on dozens of papers: Arrieta et al. (2020, p. 93)).
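The contrast between a transparent model and a post-hoc explained one can be illustrated with a short sketch using scikit-learn on synthetic data. Everything here is an illustrative assumption: a shallow decision tree can be printed and read in full, whereas a random forest aggregates hundreds of trees and must instead be probed with a model-agnostic technique such as permutation importance.

```python
# A minimal, purely illustrative sketch contrasting a transparent model
# with a "black box" one, and applying a model-agnostic post-hoc
# explanation technique to the latter.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# Ante-hoc explainable (transparent): the tree's complete decision logic
# can be printed and traced by a human.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree))

# "Black box": a forest of 300 trees has no comparably readable form.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Post-hoc, model-agnostic explanation: shuffle each feature and measure
# how much the model's performance drops -- a rough signal of how much
# the model relies on that feature.
result = permutation_importance(forest, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")
```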
Let us now return to the division that was introduced in Section 2. Overall, “simple” supervised learning models (linear regression, decision trees, etc.) usually can be considered transparent, as their methods of operation are understandable enough to be fully traced. At the same time, “more complicated” classes of models (neural networks, random forests, etc.) are typically too complex to understand, and thus, additional operations (explanations) are needed to make them intelligible. As can be seen, XAI can involve not only explaining but also proposing a specific, “simple” class of models, and as Rudin (2019) suggests, for “high stakes decisions” (such as in criminal law cases), focusing on simpler and transparent (inherently interpretable) models is the strongly preferred direction of XAI. There have been attempts to measure XAI concepts (e.g. Zhou, Gandomi, Chen & Holzinger, 2021). The measurement process can also consider the ability of XAI techniques to influence user responses (Kenny, Ford, Quinn & Keane, 2021). However, these attempts are still at an early stage, especially in reference to the legal applications of XAI. One of the first papers assessing the explainability of a natural language processing model designed for legal text dates from 2021 and is described as a “pilot study” by its authors (Górski & Ramakrishna, 2021). There are already more extensive references to XAI in the legal scholarship, e.g. proposing methods and models in line with the tenets of this stream (Vlek, Prakken, Renooij & Verheij, 2016; Branting et al., 2021) and using such models in empirical (e.g. criminological) research (Loyola-González, 2019). At the same time, some choose to focus on the XAI-associated legal aspects of machine learning models (especially legal requirements) (Bibal, Lognoul, de Streel & Frénay, 2021).

I think that for the application of machine learning in much of the legal domain, the discussion about the understandability of the models used is fundamental. It is entangled with concerns regarding rights (e.g. the right to a fair trial) and the practical functioning of the “machine learning–improved” justice system (e.g. identifying and fixing problems with the operation of programs that assist judges). Importantly for this text, I also believe that the various types of problems relating to the use of machine learning in the application of the law in some part intersect with the thread of model understandability, making the XAI issues focal. For instance, an important aspect of algorithmic bias is closely related to aspects of XAI (Fleming & Bruce, 2021). If discrimination is to be avoided, it is important to know how the programme works and on what basis it draws conclusions. Therefore, although the contexts of algorithmic bias and XAI can be discussed separately, in practice, XAI and the pursuit of fairness are difficult to separate. I recognise that for many of the problems faced by decision-making algorithms based on machine learning, at some stage, the following questions are raised: “How does it actually work?” and “Why does it do this?” Their appearance signals that it is necessary to refer to knowledge of the technical characteristics of machine learning or the achievements of the XAI trend. It is for this reason that both of these topics have earned a prominent place in this chapter.

*

Machine learning has great potential but carries great risks as well. If lawyers want to leverage the former and avoid the latter, they must engage in an interdisciplinary debate that considers the real characteristics of the technology and its immanent limitations and avoids a mythologised view. This will not happen on its own. Finally, to paraphrase the conclusion of the book Responsible Data Science (Fleming & Bruce, 2021, p.
272), we, as professional lawyers and legal academics, must not push the entire responsibility for creating law-assisting, machine-learning-based solutions onto data science professionals; we must take real part in the debate about their application, its limitations and requirements, before the technical reality creates them on its own.


BIBLIOGRAPHY Adadi, A. & Berrada, M. (2018). Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138–52160. Retrieved from https://doi​.org​/10​.1109​/ACCESS​ .2018​.2870052 Adensamer, A. & Klausner, L.D. (2021). Part Man, Part Machine, All Cop: Automation in Policing. Frontiers in Artificial Intelligence, 4(655486). Retrieved from https://doi​.org​/10​.3389​/frai​.2021​ .655486 Aggarwal, Ch. (2018). Neural Networks and Deep Learning. Berlin: Springer. Retrieved from https:// doi​.org​/10​.1007​/978​-3​-319​-94463-0 Aletras, N., Tsarapatsanis, D., Preoţiuc-Pietro, D. & Lampos, V. (2016). Predicting Judicial Decisions of the European Court of Human Rights: A Natural Language Processing Perspective. PeerJ Computer Science, 2(93). Retrieved from https://doi​.org​/10​.7717​/peerj​-cs​.93 Angelino, E., Larus-Stone, N., Alabi, D., Seltzer, M. & Rudin, C. (2018). Learning Certifiably Optimal Rule Lists for Categorical Data. Journal of Machine Learning Research, 18(234). Retrieved from https://www​.jmlr​.org​/papers​/volume18​/17​-716​/17​-716​.pdf Angwin, J., Larson, J., Mattu, S. & Kirchner, L. (2016, May 23). Machine Bias. ProPublica. Retrieved from https://www​.propublica​.org​/article​/machine​-bias​-risk​-assessments​-in​-criminal​-sentencing Anthony, J. (2021). 60 Notable Machine Learning Statistics: 2021/2022 Market Share & Data Analysis. FinancesOnline. Retrieved from https://financesonline​.com​/machine​-learning​-statistics Arrieta, A.B., Rodríguez, N.D., Ser, J.D., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R. & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Information Fusion, 58, 82–115. Retrieved from https://doi​.org​/10​.1016​/j​.inffus​.2019​.12​.012 Artificial Lawyer. (2019, June 4). France Bans Judge Analytics, 5 Years In Prison For Rule Breakers. Artificial Lawyer. Retrieved from https://www​.artificiallawyer​.com​/2019​/06​/04​/france​-bans​-judge​ -analytics​-5​-years​-in​-prison​-for​-rule​-breakers Bansal, N., Sharma, A. & Singh, R.K. (2019). A Review on the Application of Deep Learning in Legal Domain. In J. MacIntyre, I. Maglogiannis, L. Iliadis & E. Pimenidis (Eds.). Artificial Intelligence Applications and Innovations. AIAI 2019. IFIP Advances in Information and Communication Technology (pp. 374–381). Cham: Springer. Retrieved from https://doi​.org​/10​.1007​/978​-3​-030​-19823​ -7​_31 Bathaee, Y. (2018). The Artificial Intelligence Black Box and the Failure of Intent and Causation. Harvard Journal of Law & Technology, 31(2), 889–938. Retrieved from https://jolt​.law​.harvard​ .edu​/assets​/articlePDFs​/v31​/ The​-Artificial​-Intelligence​-Black​-Box​-and​-the​-Failure​- of​-Intent​-and​ -Causation​-Yavar​-Bathaee​.pdf Van den Berg, M. & Kuiper, O. (2020). XAI in the Financial Sector. Utrecht: Hogeschool. Bibal, A., Lognoul, M., de Streel, A. & Frénay, B. (2021). Legal requirements on explainability in machine learning. Artificial Intelligence and Law, 29, 149–169. Retrieved from https://doi​.org​/10​ .1007​/s10506​- 020​- 09270-4 Biecek, P. & Burzykowski, T. (2021). Explanatory Model Analysis. Explore, Explain, and Examine Predictive Models. New York: Chapman and Hall/CRC. Bell, K., Hong, J., McKeown, N. & Voss, C. (2021). The Recon Approach: A New Direction for Machine Learning in Criminal Law. Berkeley Technology Law Journal, 36(2), 821–860. 
Retrieved from https://btlj​.org​/wp​-content​/uploads​/2022​/04​/0004​-36​-2​-Bell​.pdf Bellucci, M., Delestre, N., Malandain, N. & Zanni-Merk, C. (2021). Towards a Terminology for a Fully Contextualized XAI. Procedia Computer Science, 192, 241–250. Retrieved from https://doi​.org​/10​ .1016​/j​.procs​.2021​.08​.025 Blue, J. (2023). Blue J Tax. Retrieved from https://www​.bluej​.com​/us​/ bluej​-tax​-us Bogost, I. (2022, December 7). ChatGPT Is Dumber Than You Think. The Atlantic. Retrieved from https://www​.theatlantic​.com ​/technology​/archive​/2022 ​/12 ​/chatgpt​- openai​-artificial​-intelligence​ -writing​-ethics​/672386/ Branting, L.K., Pfeifer, C., Brown, B., Ferro, L., Aberdeen, J., Weiss, B., Pfaff, M. & Liao, B. (2021). Scalable and explainable legal prediction. Artificial Intelligence and Law, 29, 213–238. Retrieved from https://doi​.org​/10​.1007​/s10506​- 020​- 09273-1

Machine learning and law  463 Brenner, M., Suk Gersen, J., Haley, M., Lin, M., Merchant, A., Jagdishwar Millett, R., Sarkar, S.K. & Wegner, D. (2020). Constitutional Dimensions of Predictive Algorithms in Criminal Justice. Harvard Civil Rights-Civil Liberties Law Review, 55(1), 267–310. Retrieved from https://harvardcrcl​ .org​/wp​-content​/uploads​/sites​/10​/2020​/09​/ Brenner​-et​-al​.pdf Center on the Legal Profession, Harvard Law School. (2023, March). Generative AI in the Legal Profession [Special Issue]. The Practice. Retrieved from https://clp​.law​.harvard​.edu​/ knowledge​-hub​/ magazine​/issues​/generative​-ai​-in​-the​-legal​-profession/ Chen, D.L. (2019). Machine Learning and the Rule of Law. In M. Livermore & D. Rockmore (Eds.). Law as Data (pp. 433–441). Santa Fe: Santa Fe Institute Press. Retrieved from https://doi​.org​/10​ .37911​/9781947864085​.16 Cristianini, N. & Shawe-Taylor, J. (2000). An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge: Cambridge University Press. Retrieved from https:// doi​.org​/10​.1017​/CBO9780511801389 Deeks, A. (2019). The Judicial Demand for Explainable Artificial Intelligence. Columbia Law Review, 119(7), 1829–1850. Retrieved from https://columbialawreview​.org​/wp​-content​/uploads​/2019​/11​/ Deeks​-Judical​_Demand​_for​_ Explainable​_ AI​.pdf Deloitte. (2019). Global Artificial Intelligence Industry Whitepaper. Retrieved from https://thecxlab​.io​/ wp​-content​/uploads​/2020​/12​/deloitte​-cn​-tmt​-ai​-report​-en​-190927​.pdf Dieterich, W., Mendoza, C. & Brennan, T. (2016). COMPAS Risk Scales: Demonstrating Accuracy Equity and Predictive Parity. Traverse City: Northpointe. Retrieved from https://go​.volarisgroup​ .com ​/rs​/430​-MBX​-989​/images​/ ProPublica​_Commentary​_ Final​_070616​.pdf Doran, D., Schulz, S. & Besold, T.R. (2018). What Does Explainable AI Really Mean? A New Conceptualization of Perspectives. In T.R. Besold & O. Kutz (Eds.). Proceedings of the First International Workshop on Comprehensibility and Explanation in AI and ML 2017 co-located with 16th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2017). CEUR​-WS​.o​rg. Retrieved from https://ceur​-ws​.org​/ Vol​-2071​/CExAIIA​_2017​_paper​_2​.pdf Dressel, J. & Farid, H. (2018). The Accuracy, Fairness, and Limits of Predicting Recidivism. Science Advances, 4(1). Retrieved from https://doi​.org​/10​.1126​/sciadv​.aao5580 Equivant. (2019). Practitioner’s Guide to COMPAS Core. Retrieved from https://www​.equivant​.com​/ wp​-content​/uploads​/ Practitioners​-Guide​-to​-COMPAS​-Core​- 040419​.pdf Fair Trails. (2021). Automating Injustice: The Use of Artificial Intelligence & Automated DecisionMaking Systems in Criminal Justice in Europe. Retrieved from https://www​.fairtrials​.org​/app​/ uploads​/2021​/11​/Automating​_Injustice​.pdf Ferrucci, D.A. (2012). Introduction to “This is Watson”. IBM Journal of Research and Development, 56(3.4), 1:1–1:15. Retrieved from https://doi​.org​/10​.1147​/JRD​.2012​.2184356 Fleming, G. & Bruce, P. (2021). Responsible Data Science. Indianapolis: Wiley Floridi, L. (2020). AI and Its New Winter: from Myths to Realities. Philosophy & Technology, 33, 1–3. Retrieved from https://doi​.org​/10​.1007​/s13347​- 020​- 00396-6 Gareth, J., Witten, D., Hastie, T. & Tibshirani, R. (2017). An Introduction to Statistical Learning with Applications in R (Second Edition). New York: Springer. Retrieved from https://hastie​.su​.domains​/ ISLR2​/ ISLRv2​_website​.pdf Garson, G.D. 
(2012). Probit Regression and Response Models. Asheboro, NC: Statistical Associates Publishers Genuer, R. & Poggi, J.-M. (2020). Random Forests with R. Berlin: Springer. Retrieved from https://doi​ .org​/10​.1007​/978​-3​- 030​-56485-8 Górski, L. & Ramakrishna, S. (2021). Explainable Artificial Intelligence, Lawyer’s Perspective. In ICAIL ’21: Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law (pp. 60–68). New York: Association for Computing Machinery. Retrieved from https://doi​.org​ /10​.1145​/3462757​.3466145 Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F. & Pedreschi, D. (2019). A Survey of Methods for Explaining Black Box Models. ACM Computing Surveys (CSUR), 51(5). Retrieved from https://doi​.org​/10​.1145​/3236009 Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S. & Yang, G.-Z. (2019). XAI-Explainable Artificial Intelligence. Science Robotics, 4(37). Retrieved from https://doi​.org​/10​.1126​/scirobotics​.aay7120 Gurney, K. (1997). An introduction to neural networks. London: UCL Press.

464  Research handbook on law and technology Hanif, A., Zhang, X. & Wood, S. (2021) A Survey on Explainable Artificial Intelligence Techniques and Challenges. In 2021 IEEE 25th International Enterprise Distributed Object Computing Workshop (EDOCW) (pp. 81–89). Retrieved from https://doi​.org​/10​.1109​/ EDOCW52865​.2021​.00036 Harley, M. (2022, August 2). Testing (And Trusting) Mercedes-Benz Level 3 Drive Pilot in Germany. Forbes. Retrieved from https://www​.forbes​.com ​/sites​/michaelharley​/2022​/08​/02​/testing​-and​-trusting​ -mercedes​-benz​-level​-3​-drive​-pilot​-in​-germany Hastie, T., Tibshirani, R. & Friedman, J. (2009). The Elements of Statistical Learning. Data Mining, Inference and Prediction. New York: Springer. Retrieved from https://hastie​.su​.domains​/Papers​/ ESLII​.pdf Hilbe, J.M. (2011). Negative Binomial Regression (Second Edition). Cambridge: Cambridge University Press. Hutter, F., Kotthoff, L. & Vanschoren, J. (Eds.). (2019). Automated Machine Learning. Methods, Systems, Challenges. Cham: Springer. Retrieved from https://doi​.org​/10​.1007​/978​-3​- 030​- 05318-5 Islam, M.R., Ahmed, M.U., Barua, S. & Begum, S. (2022). A Systematic Review of Explainable Artificial Intelligence in Terms of Different Application Domains and Tasks. Applied Sciences, 12(3), 1353. Retrieved from https://doi​.org​/10​.3390​/app12031353 Jackson, E. & Mendoza, C. (2020). Setting the Record Straight: What the COMPAS Core Risk and Need Assessment Is and Is Not. Harvard Data Science Review, 2(1). Retrieved from https://doi​.org​ /10​.1162​/99608f92​.1b3dadaa Janiesch, Ch., Zschech, P. & Heinrich, K. (2021). Machine Learning and Deep Learning. Electronic Markets, 31, 685–695. Retrieved from https://doi​.org​/10​.1007​/s12525​- 021​- 00475-2 Jollifee, I.T. (2002). Principal Component Analysis (Second Edition). New York: Springer. Retrieved from https://doi​.org​/10​.1007​/ b98835 Kenny, E., Ford, C., Quinn, M. & Keane, M. (2021). Explaining Black-Box Classifiers Using PostHoc Explanations-By-Example: The Effect of Explanations and Error-Rates in XAI User Studies. Artificial Intelligence, 294. Retrieved from https://doi​.org​/10​.1016​/j​.artint​.2021​.103459 Khanna, B. (2021). Predictive Justice: Using AI for Justice. Kochi: Centre for Public Policy Research. Retrieved from https://www​.cppr​.in​/wp​-content​/uploads​/2021​/05​/ PREDICTIVE​-JUSTICE​-USING​ -AI​-FOR​-JUSTICE​-2​.pdf Koski, T. & Noble, J. (2009). Bayesian Networks: An Introduction. Chichester: John Wiley & Sons, Ltd. Kubacka, T. [@paniterka_ch]. (2022, December 5). Today I asked ChatGPT About the Topic I Wrote My Phd About. It Produced Reasonably Sounding Explanations and Reasonably Looking… [Tweet Thread]. Twitter. Retrieved from https://twitter​.com​/paniterka​_ch​/status​/1599893718214901760 Kumbhare, T.A. & Chobe, S.V. (2014). An Overview of Association Rule Mining Algorithms. International Journal of Computer Science and Information Technologies, 5(1), 927–930. Retrieved from https://www​.ijcsit​.com​/docs​/ Volume​%205​/vol5issue01​/ijcsit20140501201​.pdf Lapan, M. (2020). Deep Reinforcement Learning Hands-On (Second Edition). Birmingham: Packt Publishing. La Fors, K. (2020). Legal Remedies For a Forgiving Society: Children’s rights, data protection rights and the value of forgiveness in AI-mediated risk profiling of children by Dutch authorities. Computer Law & Security Review, 38(105430). Retrieved from https://doi​.org​/10​.1016​/j​.clsr​.2020​.105430 Lawgeex. (2023). Lawgeex. 
Retrieved from https://www​.lawgeex​.com ​/platform Lee, W.-M. (2019). Python Machine Learning. Indianapolis: Wiley. Longo, L., Goebel, R., Lecue, F., Kieseberg, P. & Holzinger, A. (2020). Explainable Artificial Intelligence: Concepts, Applications, Research Challenges and Visions. In A. Holzinger, P. Kieseberg, A. Tjoa & E. Weippl (Eds.). Machine Learning and Knowledge Extraction (pp. 1–16). CD-MAKE 2020 Proceedings. Cham: Springer. Retrieved from https://doi​.org​/10​.1007​/978​-3​- 030​-57321​-8_1 Loukides, M. (2021). AI Adoption in the Enterprise 2021. Sebastopol: O’Reilly. Loyola-González, O. (2019). Understanding the Criminal Behavior in Mexico City through an Explainable Artificial Intelligence Model. In L. Martínez-Villaseñor, I. Batyrshin & A. MarínHernández (Eds.). Advances in Soft Computing. MICAI 2019 Proceedings (pp. 136–149). Cham: Springer. Retrieved from https://doi​.org​/10​.1007​/978​-3​- 030​-33749​- 0​_12 Luminance. (2023). Luminance. Retrieved from https://www​.luminance​.com Maslej, N., Fattorini, L., Brynjolfsson, E., Etchemendy, J., Ligett, K., Lyons, T., Manyika, J., Ngo, H., Niebles, J.C., Parli, V., Shoham, Y., Wald, R., Clark, J. & Perrault, R. (2023). The AI Index

Machine learning and law  465 2023 Annual Report. Stanford, CA: AI Index Steering Committee, Institute for Human-Centered AI, Stanford University. Retrieved from https://aiindex​.stanford​.edu​/wp​-content​/uploads​/2023​/04​/ HAI​_ AI​-Index​-Report​_2023​.pdf Medvedeva, M., Vols, M. & Wieling, M. (2020). Using Machine Learning to Predict Decisions of the European Court of Human Rights. Artificial Intelligence and Law, 28, 237–266. Retrieved from https://doi​.org​/10​.1007​/s10506​- 019​- 09255-y Menard, S. (2010). Logistic Regression: From Introductory to Advanced Concepts and Applications. Thousand Oaks: SAGE Publications, Inc. Retrieved from https://doi​.org​/10​.4135​/9781483348964 Montelongo, A. & Becker, J.L. (2020). Tasks Performed in the Legal Domain Through Deep Learning: A Bibliometric Review (1987–2020). 2020 International Conference on Data Mining Workshops (ICDMW), 775–781. Retrieved from https://doi​.org​/10​.1109​/ ICDMW51313​.2020​.00113 Moore, R. (Ed.). (2015). A Compendium of Research and Analysis on the Offender Assessment System (OASys) (2009–2013). London: UK Ministry of Justice. Retrieved from https://assets​.publishing​ .service​.gov​.uk​/government​/uploads​/system​/uploads​/attachment​_data​/file​/449357​/research​-analysis​ -offender​-assessment​-system​.pdf Morikawa, C., Kobayashi, M., Satoh, M., Kuroda, Y., Inomata, T., Matsuo, H., Miura, T. & Hilaga, M. (2021). Image and Video Processing on Mobile Devices: A Survey. The Visual Computer, 37, 2931–2949. Retrieved from https://doi​.org​/10​.1007​/s00371​- 021​- 02200-8 Natale, S. & Ballatore, A. (2020). Imagining the Thinking Machine: Technological Myths and the Rise of Artificial Intelligence. Convergence, 26(1), 3–18. Retrieved from https://doi​.org​/10​.1177​ /1354856517715164 Nayak, P. (2022, February 3). How AI Powers Great Search Results. Google The Keyword. Retrieved from https://blog​.google​/products​/search ​/ how​-ai​-powers​-great​-search​-results OpenAI. (2022, November 30). Introducing ChatGPT. Retrieved from https://openai​.com​/ blog​/chatgpt OpenAI. (2023, March 14). GPT-4. Retrieved from https://openai​.com​/research​/gpt-4 Papagianneas, S. (2022). Towards Smarter and Fairer Justice? A Review of the Chinese Scholarship on Building Smart Courts and Automating Justice. Journal of Current Chinese Affairs, 51(2), 327–347. Retrieved from https://doi​.org​/10​.1177​/18681026211021412 Porębski, A. (2021). Application of Cluster Analysis in Research on the Spatial Dimension of Penalised Behaviour. Acta Universitatis Lodziensis. Folia Iuridica, 94, 97–120. Retrieved from https://doi​.org​ /10​.18778​/0208​-6069​.94​.06 Pugliese, R., Regondi, S. & Marini, R. (2021). Machine Learning-Based Approach: Global Trends, Research Directions, and Regulatory Standpoints. Data Science and Management, 4, 19–29. Retrieved from https://doi​.org​/10​.1016​/j​.dsm​.2021​.12​.002 Queudot, M. & Meurs, M. (2018). Artificial Intelligence and Predictive Justice: Limitations and Perspectives. In M. Mouhoub, S. Sadaoui, O.A. Mohamed & M. Ali (Eds.). Recent Trends and Future Technology in Applied Intelligence. IEA/AIE 2018 Proceedings (pp. 889–897). Cham: Springer. Retrieved from https://doi​.org​/10​.1007​/978​-3​-319​-92058​- 0​_85 Reiling, A. (2020). Courts and Artificial Intelligence. International Journal for Court Administration, 11(2). Retrieved from https://doi​.org​/10​.36745​/ijca​.343 Rish, I. (2001). An empirical study of the naive Bayes classifier. 
In IJCAI 2001 workshop on Empirical Methods in Artificial Intelligence (pp. 41–46). Retrieved from https://www​.cc​.gatech​.edu​/ home​/ isbell ​/classes​/reading​/papers​/ Rish​.pdf Rokach, L. & Maimon, O. (2014). Data Mining with Decision Trees: Theory and Applications (Second Edition). Singapore: World Scientific Publishing Co. Pte. Ltd. Retrieved from https://doi​.org​/10​.1142​ /9097 Rudin, C. (2019). Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence, 1, 206–215. Retrieved from https:// doi​.org​/10​.1038​/s42256​- 019​- 0048-x Rudin, C., Wang, C. & Coker, B. (2020). The Age of Secrecy and Unfairness in Recidivism Prediction. Harvard Data Science Review, 2(1). Retrieved from https://doi​.org​/10​.1162​/99608f92​.6ed64b30 Rosili, N.A.K., Zakaria, N.H., Hassan, R., Kasim, S., Rose, F.Z.C. & Sutikno, T. (2021). A systematic literature review of machine learning methods in predicting court decisions. International Journal of Artificial Intelligence, 10(4), 1091–1102. Retrieved from https://doi​.org​/10​.11591​/ijai​.v10​.i4​ .pp1091–1102

466  Research handbook on law and technology SAE International/ISO. (2021, April). J3016 standard. Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles. Retrieved from https://www​.sae​.org​/ standards​/content​/j3016​_202104/ de Saint Laurent, C. (2018). Defence of Machine Learning: Debunking the Myths of Artificial Intelligence. Europe’s Journal of Psychology, 14(4), 734–747. Retrieved from https://doi​.org​/10​.5964​ /ejop​.v14i4​.1823 Sarker, I. (2021). Machine Learning: Algorithms, Real-World Applications and Research Directions. SN Computer Science, 2(160). Retrieved from https://doi​.org​/10​.1007​/s42979​- 021​- 00592-x Sil, R., Roy, A., Bhushan, B. & Majumdar, A. (2019). Artificial Intelligence and Machine Learning based Legal Application: The State-of-the-Art and Future Research Trends. In 2019 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS) (pp. 57–62). IEEE. Retrieved from https://doi​.org​/10​.1109​/ ICCCIS48478​.2019​.8974479 Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., Chen, Y., Lillicrap, T., Hui, F., Sifre, L., van den Driessche, G., Graepel, T. & Hassabis, D. (2017). Mastering the Game of Go Without Human Knowledge. Nature, 550, 354–359. Retrieved from https://doi​.org​/10​.1038​/nature24270 Scherer, M. (2019). Artificial Intelligence and Legal Decision-Making: The Wide Open? Journal of International Arbitration, 36(5), 539–574. Retrieved from https://doi​.org​/10​.54648​/joia2019028 Schonander, C. (2019, July 3). French Judicial Analytics Ban Undermines Rule of Law. CIO. Retrieved from  https://www​.cio​.com​/article​/220345​/french​-judicial​-analytics​-ban​-undermines​-rule​- of​-law​ .html Schroeder, L., Sjoquist, D. & Stephan, P. (2017). Understanding Regression Analysis (Second Edition). London: SAGE Publications, Inc. Retrieved from https://doi​.org​/10​.4135​/9781506361628 Schwerzmann, K. (2021). Abolish! Against the Use of Risk Assessment Algorithms at Sentencing in the US Criminal Justice System. Philosophy & Technology, 34(4), 1883–1904. Retrieved from https://doi​ .org​/10​.1007​/s13347​- 021​- 00491-2 Stern, R.E., Liebman, B.L., Roberts, M. & Wang, A.Z. (2021). Automating Fairness? Artificial Intelligence in the Chinese Courts. Columbia Journal of Transnational Law, 59, 515–553. Retrieved from  https://www​.jtl​.columbia​.edu ​/volume​-59​/automating​-fairness​-artificial​-intelligence​-in​-the​ -chinese​-courts Subasi, A. (2020). Practical Machine Learning for Data Analysis Using Python. London: Academic Press. Retrieved from https://doi​.org​/10​.1016​/ B978​- 0​-12​-821379​-7​.00003-5 Surden, H. (2014). Machine Learning and Law. Washington Law Review, 89(1), 87–115. Retrieved from https://digitalcommons​.law​.uw​.edu​/wlr​/vol89​/iss1​/5/ Surden, H. (2019). Artificial Intelligence and Law: An Overview. Georgia State University Law Review, 35(4), 1305–1337. Retrieved from https://readingroom​.law​.gsu​.edu​/gsulr​/vol35​/iss4​/8/ Surden, H. (2021). Machine Learning and Law: An Overview. In R. Vogl (Ed.). Research Handbook on Big Data Law. Cheltenham: Edward Elgar Publishing. Retrieved from https://doi​.org​/10​.4337​ /9781788972826​.00014 Thomson Reuters. (2023). Westlaw Edge. Retrieved from https://legal​.thomsonreuters​.com​/en​/products​ /westlaw​-edge Tsai, Y. & Pandey, R. (2020, December 11). Portrait Light: Enhancing Portrait Lighting with Machine Learning. Google AI Blog. 
Retrieved from https://ai​.googleblog​.com​/2020​/12​/portrait​-light​ -senhancing​-portrait​.html Vanhoeyveld, J., Martens, D. & Peeters, B. (2020). Value-Added Tax Fraud Detection with Scalable Anomaly Detection Techniques. Applied Soft Computing, 86, 105895. Retrieved from https://doi​.org​ /10​.1016​/j​.asoc​.2019​.105895 Vlek, C.S., Prakken, H., Renooij, S. & Verheij, B. (2016). A Method for Explaining Bayesian Networks for Legal Evidence with Scenarios. Artificial Intelligence and Law, 24, 285–324. Retrieved from https://doi​.org​/10​.1007​/s10506​- 016​-9183-4 Watson, D.S. & Floridi, L. (2021). The Explanation Game: A Formal Framework for Interpretable Machine Learning. Synthese, 198, 9211–9242. Retrieved from https://doi​.org​/10​.1007​/s11229​- 020​ -02629-9 Wierzchoń, S. & Kłopotek, M. (2018). Modern Algorithms of Cluster Analysis. Cham: Springer. Retrieved from https://doi​.org​/10​.1007​/978​-3​-319​- 69308-8

Machine learning and law  467 Winder, P. (2021). Reinforcement Learning. Sebastopol: O’Reilly. Wood, S.N. (2017). Generalized Additive Models: An Introduction with R (Second Edition). Boca Raton, FL: CRC Press. Wu, J. (2012). Advances in K-means Clustering. Berlin, Heidelberg: Springer. Retrieved from https://doi​ .org​/10​.1007​/978​-3​-642​-29807-3 Završnik, A. (2020). Criminal Justice, Artificial Intelligence Systems, and Human Rights. ERA Forum, 20, 567–583. Retrieved from https://doi​.org​/10​.1007​/s12027​- 020​- 00602-0 Zhang, D., Maslej, N., Brynjolfsson, E., Etchemendy, J., Lyons, T., Manyika, J., Ngo, H., Niebles, J.C., Sellitto, M., Sakhaee, E., Shoham, Y., Clark, J. & Perrault, R. (2022). The AI Index 2022 Annual Report. Stanford, CA: AI Index Steering Committee, Stanford Institute for Human-Centered AI, Stanford University. Retrieved from https://aiindex​.stanford​.edu​/wp​-content​/uploads​/2022​/03​/2022​ -AI​-Index​-Report​_ Master​.pdf Zhou, J., Gandomi, A.H., Chen, F. & Holzinger, A. (2021). Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics. Electronics, 10(5), 593. Retrieved from https://doi​ .org​/10​.3390​/electronics10050593 Zygmunt, T.J.G. (2020). An Intuitive Approach to Hard Cases. Utrecht Law Review, 16(1), 21–38. Retrieved from https://doi​.org​/10​.36633​/ulr​.505

28. Why we need to rethink procedural fairness for the digital age and how we should do it

Jed Meers, Simon Halliday and Joe Tomlinson1

1. INTRODUCTION

The life of modern government—and to a significant extent modern society—revolves around a mass of public officials who make administrative decisions about how law and policy apply at the ‘street level’ (Lipsky, 1980). Each year, these officials take many millions of decisions that span the wide range of policy sectors where the state makes interventions in the name of the public interest, such as social security, immigration, asylum, education, community care, tax, and planning, to name only a few. These decisions perform a variety of functions, including determining the benefits citizens are entitled to, which interests prevail in conflicts between competing claims, and whether to take enforcement action. This ‘frontline’ of public decision-making is, by any measure, one of the primary domains of law and policy in action. But it is more than that: it is a critical fairness interface in contemporary society, akin in significance to the market, that has profound consequences. Individually, these decisions can have enormous effects on the lives of individuals, influencing and even determining where they live, the work they can undertake, and their wellbeing. Collectively, they can impact society and the economy, and the public’s trust in government. Though they routinely concern the most marginalised, frontline decisions are ‘essential expressions of state power’ for all in society (Jewell & Glaser, 2006).

The frontline of public decision-making is now evolving quickly. This is the result of multiple trends in how the government is seeking to manage public life, and how the frontline itself is responding. Not least among such trends is the rapidly growing use of digital technologies within government systems, ranging from the use of online application forms to digital ID systems and automated decision-making. This evolution of the frontline of administrative decision-making presents a seismic challenge for those—including administrative lawyers—concerned with the advancement of procedural fairness: that fundamental idea that, separate from the outcome of a decision-making process, the person subject to that process ought to be treated in a procedurally fair manner. We argue in this chapter that this challenge requires us to move beyond traditional ways of thinking about procedural fairness—which have been dominated by legal and normative principles and theories—and place new emphasis on how citizens themselves view and experience fairness within frontline public decision-making processes. Our view is that the public has sophisticated sensibilities about what is procedurally fair in different contexts but that their perspectives have been unfortunately neglected. Taking the views and experiences of citizens seriously is a critical component of reimagining what fairness looks like on the frontlines of

UK.

The survey research in this chapter was kindly supported by York Law School, University of York,

468

Why we need to rethink procedural fairness  469 the digital administrative state, and it must be embraced by administrative law scholars alongside more conventional approaches to understanding procedural fairness. We set out this argument in three stages. First, we explain some of the ways in which the frontline of administrative decision-making is changing. The rapidly expanding use of digital technologies is a critical part of this story but, as we show, it is important to recognise that this trend runs next to, and is interwoven with, multiple other significant developments in this domain. These changes will, overall, certainly have major implications for procedural fairness. Second, we explore how administrative lawyers are responding to this evolution and how this response, insofar as it is concerned with the demands of procedural fairness, tracks onto conventional pathways that have been developed by administrative lawyers for thinking about fairness. Those pathways are, broadly, concerned with legal and normative principles and theories. While the application of these established ways of thinking about fairness to novel developments within the frontline of decision-making—including technology—remains valuable, they inevitably have certain limitations. We set out why these limitations mean we must now move beyond established approaches. In the third section, we set out our case for placing more emphasis on how the public experience and perceive procedural fairness within frontline decision-making by drawing on a survey of the UK public to examine attitudes to the use of digital technology in an administrative decision. Drawing on this example, and the broader, multidisciplinary literature on procedural fairness, we seek to show that this is both a viable and valuable project.

2. THE EVOLVING FRONTLINE

The traditional image of the frontline administrative decision-maker is of a bureaucrat sitting in a government office, churning through paper files and making decisions on the applications before them (Zacka, 2017). In this picture, it is through such officials, who exist within a clear organisational hierarchy, that law and policy are administered. But this image is no longer an accurate reflection of reality, and it has not been for some time. In fact, the reality has always been somewhat more complex. Frontline decision-makers, like all humans, are complex creatures who undertake their work as part of equally complex organisations, which inevitably have their own cultural dynamics. Their work, as Lipsky's landmark study of street-level bureaucrats famously demonstrated, generally involves a significant exercise of discretion: 'policy implementation in the end comes down to the people who actually implement it' (Lipsky, 1980). Rather than automatons executing law and policy, officials have always existed at the psychologically complex places where the 'state meets the street' (Zacka, 2017; Maynard-Moody & Musheno, 2003) and are susceptible to being influenced by their organisational (Jewell & Glaser, 2006; Maynard-Moody & Portillo, 2010; Satzewich, 2014; Halliday, 2000; Raso, 2017) and even political environments (Satzewich, 2014) in the same way we all are.

Beyond the basic reality of the frontline being a complex personal and organisational space, government is almost continuously seeking to change the shape of frontline decision-making systems. In recent years, the rapid expansion of the use of technology has been perhaps the most notable development in this domain. Take the UK Home Office—the department charged with administering immigration law and policy, which makes many thousands of decisions about migration status each year (Thomas, 2022). Over the same period, it has been experimenting with automated decision-making—and other technologies—in a variety of ways, in an effort to improve efficiency and performance (Maxwell & Tomlinson, 2022). For instance, EU citizens applying to stay in the United Kingdom after Brexit applied via an online Home Office application form which was then checked automatically against tax and social security data held by two other government departments (Tomlinson, 2020). Clear cases received minimal—if any—involvement from human officials, while more complex cases received greater attention from them. Even the cases presented to human officials were triaged into 'Red', 'Amber', and 'Green' categories—representing different levels of decision-making complexity—by a sorting algorithm (a simplified illustration of such triage logic is sketched at the end of this section). As of 31 March 2022, 6.5 million such applications—each one of them critical to the life circumstances of the person making it—had been processed this way. This system—which is becoming an increasingly typical template across government—is quite some distance from the image of the dusty bureaucrat behind a pile of papers: bureaucracy is shifting from the 'street level' to the 'screen level', and now beyond (Bovens & Zouridis, 2002).

While technology is reshaping—in some places, radically—the frontline of administrative decision-making, it is important to recognise that it is not the only trend driving change in this domain. The growth of the use of digital technology can sometimes seem—and is often promoted as—a change so revolutionary that it invites focus to the exclusion of all else, but that would be to neglect many other powerful forces at play, which are both separate from and interwoven with the progression of technology within public administration. For instance, since the 1980s there has been a growing trend for government to outsource to private companies functions that were traditionally public. The typical logic here is that the free market, driven by competition, is more agile and efficient than the flabby and inherently monopolistic public sector, so if the government outsources to the private sector—or otherwise adopts its mindset or practices—it can increase both its efficiency and performance. Such outsourcing can take many forms within the frontline of decision-making, and can even overtake it (Thomas, 2022). For instance, in modern social security administration, large private entities are an integral part of the assessment process that determines whether people get welfare benefits—occupying a role that is as central to the decision-making process as can be. By pointing to trends such as outsourcing to the private sector we do not seek to underplay the impact of technology on frontline decision-making but, instead, to make explicit that the emergence of what we may conveniently but somewhat glibly call the 'digital administrative state' is being shaped by a variety of complex forces.

The overall result, put simply, is that the frontline of administrative decision-making is not what it used to be. It is more fragmented and complex than it has ever been before, with no sign of the pace of change abating. Whether all the change we are witnessing represents progress in the pursuit of good government is also heavily contested. Take the implementation of automated decision-making as an example (Maxwell & Tomlinson, 2022). Some see it as the path to more consistent and efficient decision-making.
However, there are credible concerns that it is often not capable of fulfilling either of those goals and that, even where it might be, it can have negative side effects, such as reproducing discrimination or reducing the transparency of official reasons. Given how vast, diverse, and contested frontline decision-making activity is, it can seem an overwhelming task to get a grip on the implications of these developments and, in particular, on what all this change means for procedural fairness. But the changes we are witnessing must be grappled with. A critical part of our task as administrative law scholars at this point in history is to meet this challenge, even though it is a daunting one.
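To make the Red/Amber/Green triage described above more concrete, here is a deliberately simplified sketch, in Python, of how such a sorting rule can be structured. Everything in it (the field names, the thresholds, the matching logic) is our own illustrative assumption: the actual rules used by the Home Office are not public, and real systems are far more elaborate.

```python
# Illustrative sketch only: fields, thresholds and rules are invented
# assumptions for exposition, not the Home Office's actual criteria.
from dataclasses import dataclass


@dataclass
class Application:
    tax_data_match: bool       # automated check against tax records
    benefits_data_match: bool  # automated check against social security records
    unresolved_issues: int     # evidential gaps flagged in the file


def triage(app: Application) -> str:
    """Sort an application into a queue by apparent decision complexity."""
    if app.tax_data_match and app.benefits_data_match:
        return "Green"  # clear case: minimal, if any, human involvement
    if app.unresolved_issues <= 2:
        return "Amber"  # partial match: routine human review
    return "Red"        # complex case: detailed human scrutiny


print(triage(Application(True, True, 0)))    # -> Green
print(triage(Application(False, False, 5)))  # -> Red
```

Even a toy rule like this makes the procedural stakes visible: the thresholds quietly encode judgements about which cases deserve human attention, which is one reason the fairness of such systems cannot be assessed from the institutional side alone.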


3. PATHWAYS OF PROCEDURAL FAIRNESS

Administrative lawyers, and others, have long been concerned with procedural fairness in frontline decision-making. The ways in which they have sought to understand and elaborate upon what fairness requires in this context have typically followed certain intellectual pathways. Unsurprisingly, the major focus has been on fairness as understood through legal principles. It is with some concern that we observe that, for the majority of administrative lawyers, the dominant vantage point on frontline decision-making remains what the courts say about it. For those interested in understanding the relationship between law and administration, this is, of course, the equivalent of looking only at the referees to ensure a sport is fair, or looking through a keyhole to check whether the structure of a building is robust. Nonetheless, there is a long history of common law jurisprudence on what constitutes 'natural justice' or 'procedural fairness' (Craig, 2022). Under this broad heading sit legal principles such as the right to be heard, the duty to give reasons, and the principle against apparent bias. In more recent times, judicial decisions about procedural fairness have come under the sway of new legal frameworks which—at least in places—incorporate principles concerned with procedural fairness, most notably human rights law and equality law. We also suspect data protection law will make further inroads in this respect in the coming years. These frameworks give those thinking about fairness on the frontline, through the vehicle of judicially pronounced legal doctrine, new principles to draw upon.

An associated and similarly prominent pathway for thinking about what fairness on the frontline looks like can be found in legal theory. This body of literature has sought to address the question of why procedural fairness is important, in order to explain and even give shape to how the courts enforce related legal standards. The literature is dominated by two schools of thought, which are not necessarily mutually exclusive. The first advances an instrumental theory (e.g. Galligan, 1996): a fair process—one where a range of affected people are consulted and get the opportunity to put their views—is much more likely to produce higher-quality information, and this information, achieved via a commitment to procedural fairness, creates the conditions in which a better-informed decision-maker is likely to make a better overall decision. The second school of thought is not instrumental but rather sees procedural fairness as an end in itself (e.g. Allan, 1998). As Laurence Tribe expresses the notion, this is 'the elementary idea that to be a person, rather than a thing, is at least to be consulted about what is done with one' (Tribe, 1998, p. 666).

Separately, scholars working in the field of 'administrative justice' have sought to build conceptions of what is 'just' or 'fair' in processes by moving much closer to the street level and, moving away from courts, envisaging an overall system of public dispute resolution (Hertogh et al., 2022). The seminal work in this body of scholarship is Jerry Mashaw's 1983 book Bureaucratic Justice, which combined empirical data about the operation of the American disability insurance programme with notions of 'internal administrative law'. By taking this approach, Mashaw identified three different models of administrative justice—bureaucratic rationality, professional treatment, and moral judgement. Each competes for priority and is underpinned by its own set of legitimating values, primary goals, organisational structures, and cognitive techniques. In subsequent scholarship, these models have been developed further (Mashaw, 2022), ultimately taking us more directly into the grubbier but inevitable

institutional trade-offs involved in frontline decision-making, and their connection with differing notions of legitimacy and acceptability.

Happily, administrative law scholars are increasingly paying attention to the growing role of technology in government, including in frontline decision-making (Daly, Raso, & Tomlinson, 2022). Procedural fairness has been a key issue, but this developing body of research has generally followed the familiar pathways to analyse fairness. For instance, administrative lawyers are thinking through the application of judicial review principles—including procedural fairness principles, such as the duty to give reasons—to automated administrative decisions (Cobbe, 2019). Similarly, administrative justice scholars are investigating how frontline officials' decision-making behaviour is changing as they interact with the new technologies that surround them (Raso, 2017).

The developing body of work at the intersection of administrative law and technology is highly valuable, and it is making important contributions to reimagining procedural fairness in the emerging digital administrative state. However, our view is that simply applying our established ways of thinking about procedural fairness means we fall short of meeting the real task before us. Specifically, it means we are almost exclusively rethinking evolving frontline decision-making from the position of abstract ideas, legal principles, and normative theories, and from the perspective of institutions, as opposed to the perspectives and experiences of the people who are subject to such decision-making. Even those invested in studying administrative justice as a whole system, with their trademark willingness to dive into the routine life of public administration, tend to omit this perspective. Moreover, recognising that there are trade-offs in how we design decision-making procedures on the frontline in turn leaves us with questions about which the public will have legitimate views. Is it fairer to make a quick decision but take more risks with its accuracy in certain contexts? Are people willing to wait longer for a decision if they have more opportunities to express their views before it is made? Do people experience a decision taken with the use of digital technology as more or less fair (a question we return to below)? These are the questions at the heart of procedural fairness on the frontline, yet we rarely ask them, and the voice of the citizen remains marginalised, if not missing, when it really ought to be a central part of the conversation. In the next part of this chapter, we sketch out why trying to understand the perspectives of citizens in this context is both a valuable and a viable project.

4. THE IMPORTANCE OF PUBLIC PERCEPTIONS IN ADMINISTRATIVE DECISION-MAKING

Having argued that traditional approaches to evaluating procedural fairness in administrative decision-making provide valuable insight, we now turn to making the case for developing a more comprehensive understanding that incorporates the perspectives of those affected by these decisions. We argue that public perceptions are important in at least three respects.

First, empirical evidence on public perspectives can and should inform the development of the kind of normative frameworks outlined above. The development of theory about decision-making processes should be grounded in the voices of those affected by those same processes. Drawing on the example of asylum decision-making, Cowan and Halliday (2022) have argued that doing so is not only 'theoretically essential'—failing to do so is an 'epistemic injustice' because it ignores the perspectives of those affected—but that it also leads to stronger

theoretical frameworks by providing data on alternative perspectives. They use the example of the decision-making value of 'participation' in the asylum process. Empirical evidence on the experiences of asylum-seeking women demonstrates that some asylum claimants may 'prefer to stay silent rather than expose themselves to further traumatisation', complicating orthodox understandings among procedural justice scholars of what meaningful 'participation' looks like in the asylum decision-making context (Cowan & Halliday, 2022).

Second, public perceptions of procedural fairness may have behavioural consequences. Nowhere is the evidence for this more developed than in policing. Tyler's insight into the 'process-based' model of police legitimacy has been hugely influential in catalysing a focus on perceptions of procedural justice in policing research. His argument—endorsed subsequently by a large body of evidence (Walters & Bolger, 2019)—is that people who believe that they have been treated fairly by the police are more likely to view them as legitimate, which in turn affects their compliance with the law (Tyler, 2006). Subsequent research has identified a wide range of other behaviours that are also affected by perceptions of procedural fairness, from the willingness to cooperate with the police (such as by reporting crime or joining a neighbourhood watch scheme) to support for policies that empower the police (such as stop and search powers) (Sunshine & Tyler, 2003). These arguments, rooted in the policing literature, have begun to be applied in contexts more familiar to administrative law scholars. For instance, Murphy's work on tax compliance has demonstrated that people are more likely to accept their tax obligations if they are treated in a procedurally just way (Murphy, 2003, 2005). Indeed, longitudinal work suggests that the perception that a tax authority has acted in a procedurally fair way has a positive impact on compliance with tax obligations over a significant time period (Murphy, 2016). More broadly, the criminological literature illustrates how interactions with the police are not hermetically sealed: they can 'contaminate' an individual's response towards other legal authorities and vice versa (Crawford, 2003). Interactions with the state do not take place in a silo—a small number of prior experiences with public decision-making can affect an individual 'further down the justice chain'.

Third, evidence on public perceptions can have policy implications and help to translate procedural justice scholarship into practical recommendations for policy-makers. Insights on procedural justice can, for instance, form a cost-effective (though far from exhaustive) means of eliciting a more cooperative approach to tax compliance than a more traditional emphasis on deterrence and enforcement (Murphy, 2004, p. 203). Likewise, the police are likely to have more direct control over how they interact with people than they do over larger-scale socio-economic issues, such as the rates of crime or the issues that drive perceptions of community safety (Crawford, 2003). For policy-makers tasked with designing administrative processes, data on perceptions of these processes—and on the impact of those perceptions—can help them to achieve their policy goals. Likewise, examining the experiences of those affected by administrative systems can demonstrate how particular sub-groups may have different attitudes to procedural processes and may therefore require a different policy response.

5. PUBLIC ATTITUDES TO THE USE OF DIGITAL TECHNOLOGY IN ADMINISTRATIVE PROCESSES

Given this Research Handbook's focus on law and technology, the analysis that follows focuses on the public's attitudes to administrative decision processes involving the use of digital technology. It addresses two key questions:

• how does the public assess the effects of the use of digital technology on the fairness of an administrative process?
• do socio-demographic characteristics influence perceptions of the fairness of the use of digital technology?

We explore these two questions first by outlining our data set and then through regression analysis of our survey results. In doing so, we seek to illustrate the benefit of using empirical evidence of public perceptions to inform the development of theory.

5.1 The Survey

Our analysis draws on a survey of 3,454 adults in the United Kingdom conducted by YouGov, a professional panel provider. The fieldwork took place between 24 October and 4 November 2022, with YouGov delivering the survey instrument via their online panel to a random selection of their sample base of over 185,000 adults. This sample was then weighted to be representative of the UK adult population as a whole.

The survey instrument included a question aimed at determining whether respondents had recent experience of an administrative process in the public sector. This was a broad-ranging question—detailed in Table 28.1 below—which prompted respondents to think of any application to the government or a public body, and provided non-exhaustive examples, from applications for benefits and credits through to planning permission and passport renewal. The 37% of the sample who had made a recent application to a public body progressed to a further question about the use of digital technology in the administrative process they had experienced. As we can see from Table 28.2, for most of those who had made an application to a public body for something, the use of digital technology was part of the administrative process.

Table 28.1  Question on applying to a public body

Over the last three years, have you applied to the government or another public body in the United Kingdom for something? This could be applying for something like:
• a social security benefit
• a tax credit
• housing or housing support
• social care or community care support
• a tax assessment
• planning permission
• a council tax rebate
• a passport or passport renewal

Responses                        Percentage of sample
Yes                              37%
No                               57%
Don't know/prefer not to say     6%

Table 28.2  Question on use of digital technology

Sometimes the government and public bodies use digital technologies, such as online application forms, to help them make a decision. Was a digital technology used by the public body in your case for your most recent experience/situation?

Responses       Percentage of respondents
Yes             62%
No              19%
Don't know      19%

Table 28.3  Questions on the fairness of the use of digital technology

In your opinion, do you think the use of this digital technology made the handling of your case fairer, less fair, or did it make no difference at all?

Responses                               Percentage of respondents
A lot fairer                            7%
Slightly fairer                         8%
It made no difference to the fairness   71%
Slightly less fair                      4%
A lot less fair                         3%
Don't know                              7%

5.2 The Fairness Effects of Digital Technology

The group of respondents who had made an application to a public body for something, and for whom the use of digital technology was part of the administrative process, was asked a further question about how, in their opinion, the use of digital technology had affected the fairness of the administrative process. The results are presented in Table 28.3 above.

Our findings suggest that, for respondents whose recent experience of a public body's decision involved the use of digital technology, a large majority (71% of respondents) thought it made no difference to the fairness of the decision. This is a striking finding, and one that should be of comfort to policy-makers advancing the use of digital technology in the provision of public services. However, it is important to note that our survey posed a general question, both in relation to the type of administrative decision being made (covering all areas of governmental administration) and the role of digital technology (covering all applications, from online portals to the use of artificial intelligence in decision-making). The research agenda here is still in its infancy, and further enquiry is merited into the extent to which the type of decision and the role of digital technology affect fairness perceptions. Moreover, we must also recognise that 22% of our respondents felt that the use of digital technology affected the fairness of the administrative process of which they were the subjects: 15% thought it made the decision fairer; 7% thought it made it less fair. This sizeable minority

is also worthy of further exploration, and it is in this vein that we conducted further analysis in our study.

5.3 The Influence of Socio-Demographics on Fairness Perceptions

For the purposes of this chapter, we examined the potential for socio-demographics to shape people's perceptions of the fairness effects of the use of digital technologies in administrative processes. This inquiry is in the spirit of Cowan and Halliday's arguments outlined above: if evidence suggests that certain groups have different attitudes to decision-making processes, this may challenge orthodox understandings and theoretical approaches.

There is some limited attitudinal data to suggest that socio-demographic factors do influence the extent to which digital technologies are perceived as fair, particularly on the grounds of ethnicity. For instance, in a large-scale study of attitudes to the fairness of algorithm-based decision-making in the United States, 25% of white respondents thought the use of an algorithm to determine access to consumer finance was fair to consumers, compared to 45% of black and 47% of Hispanic respondents (Pew Research Centre, 2018). Accordingly, we tested a hypothesis that non-white respondents in our sample would be more likely than white respondents to perceive that the use of digital technology enhanced the fairness of the administrative process.

We tested our hypothesis using a binary logistic regression. This widely employed statistical technique examines the likelihood that respondents fall into one of two categories, while controlling for potentially significant predictor variables (a schematic sketch of how such a model is specified follows the variable lists below). Respondents were categorised into those who answered that the handling of their case was fairer (108 respondents in total) and those who answered that it was not fairer (577 respondents in total), the latter including those who considered it made no difference to overall fairness, but excluding those who responded 'don't know'. A number of socio-demographic variables were included in this regression:

• Gender: This is a binary variable detailing 'male' and 'female'.
• Age: This is a continuous variable, included as a raw number rather than bracketed into bands.
• Social grade: This is captured in YouGov panel data as Approximated Social Grade, with its six categories A, B, C1, C2, D and E. As is common practice, this variable was recoded to a binary, grouping A, B and C1 together, and C2, D and E together.
• Ethnicity: This is a binary variable detailing 'white' and 'ethnic minority' (i.e. non-white) respondents.
• Prior voting in the 2019 UK General Election: This variable was recoded to a binary, detailing Conservative voters and non-Conservative voters.

Two further variables were included in the regression on the basis that they might have influenced respondents' perceptions of whether digital technology enhanced the fairness of the decision process they had experienced:

• Complexity of decision: We asked respondents, in their opinion, how complicated they thought the decision was (or would be) for the government or public body to make. This was a four-point scale ranging from 'very complicated' to 'not at all complicated', recoded into a binary variable of respondents who thought the decision was complex or not complex.
• Outcome of the decision: We asked respondents whether they got the thing they applied to the public body for. This was coded as a binary: the respondent got it, or they did not.
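For readers unfamiliar with the technique, the sketch below illustrates how a binary logistic regression of this general shape can be specified in Python using the statsmodels library. It is a minimal sketch under our own assumptions: the variable names are our shorthand for the predictors listed above, and the data are randomly simulated stand-ins rather than the survey responses, so its output carries no substantive meaning.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 685  # 108 "fairer" + 577 "not fairer" respondents, as in the chapter

# Simulated stand-in data: the real survey responses are not reproduced here.
df = pd.DataFrame({
    "ethnicity":     rng.integers(0, 2, n),    # 1 = ethnic minority respondent
    "age":           rng.integers(18, 90, n),  # raw years, not banded
    "social_grade":  rng.integers(0, 2, n),    # 1 = grades A/B/C1, 0 = C2/D/E
    "outcome":       rng.integers(0, 2, n),    # 1 = got what they applied for
    "complexity":    rng.integers(0, 2, n),    # 1 = decision seen as complex
    "voting_record": rng.integers(0, 2, n),    # 1 = Conservative voter in 2019
    "gender":        rng.integers(0, 2, n),    # binary, per the survey coding
})
# Dependent variable: 1 = digital technology made the case fairer, 0 = not
df["case_fairer"] = rng.integers(0, 2, n)

X = sm.add_constant(df.drop(columns="case_fairer"))  # adds the intercept
fit = sm.Logit(df["case_fairer"], X).fit(disp=0)

# Exponentiated coefficients are odds ratios, the quantity reported in
# Table 28.4 alongside significance levels and standard errors.
print(pd.DataFrame({
    "odds_ratio": np.exp(fit.params),
    "p_value": fit.pvalues,
    "std_error": fit.bse,
}).round(3))
```

Reading the output is straightforward: exponentiating a fitted coefficient moves it from the log-odds scale to an odds ratio, so the ethnicity odds ratio of 3.270 reported in Table 28.4 below means that the odds of a minority ethnic respondent perceiving the handling of their case as fairer were roughly three and a quarter times those of a white respondent, holding the other predictors constant.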

The results of this regression analysis are detailed in Table 28.4. Associations that have statistical significance are denoted with one or more asterisks; those without asterisks do not have statistical significance but are nonetheless reported for the purpose of transparency. The number of asterisks attached to a finding of statistical significance denotes different 'p-values' or 'probability values': a single asterisk denotes a p-value of 0.05, a double asterisk a p-value of 0.01, and a triple asterisk a p-value of 0.001. These values tell us how likely such a finding would be if the null hypothesis were true. In other words, if a finding has a p-value of 0.05 (a single asterisk), it means that, if there were truly no association between the variables in the real world, and if we repeated the survey 100 times with different samples, we would falsely declare a positive association on only five occasions. If a finding has a p-value of 0.01 (a double asterisk), it means that, under the same conditions, we would falsely declare a positive association on only one occasion in 100. The triple asterisk denotes that a false positive would be reported only once in 1,000 tests with separate samples.

The analysis reveals that, while controlling for the other socio-demographics, decision complexity and decision outcome, ethnicity was statistically significantly associated with perceptions of digital technology's relationship to procedural fairness: non-white respondents were more likely to believe that it enhanced the fairness of the administrative process. Indeed, among the socio-demographics included in the model, it was the only one found to have a statistically significant association with those perceptions. Moreover, the size of this effect was considerable: the odds of a non-white respondent thinking that digital technology made the handling of their case fairer were over three times higher than for white respondents.

Why might this be so? Why might those from minority ethnic backgrounds be more likely to see the use of digital technology as having a fairness premium? To gain some further insight

Table 28.4  Perception that digital technology made the handling of their case fairer

                         Significance    Odds ratios    Standard error
Ethnicity                .002**          3.270          .379
Age                      .057            .984           .008
Social Grade             .411            .806           .263
Outcome                  .142            2.699          .675
Complexity               .859            .951           .286
Voting record            .856            .953           .268
Gender                   .105            .665           .251
Constant                 .032            .100           1.073
Cox & Snell R Square     .038
Nagelkerke R Square      .064

Notes: *p = 0.05; **p = 0.01; ***p = 0.001.