Contracting and Contract Law in the Age of Artificial Intelligence

English | 325 pages | 2022

Table of contents:
Preface
Acknowledgements
Contents
Notes on Contributors
PART I: FORMATION OF CONTRACT, AUTONOMY AND CONSENT
1. Mapping Artificial Intelligence: Perspectives from Computer Science
I. Introduction
II. The AI Seasons
III. Big Data and Deep Learning
IV. Artificial Intelligence and Law
V. Conclusions
2. Artificial Intelligence, Contracting and Contract Law: An Introduction
I. Introduction
II. Contracting in the Age of AI
III. Freedom of Contract and Party Autonomy
IV. Pre-contractual Duties
V. Formation of Contract
VI. Defects in Consent
VII. Incorporation of Standard Terms
VIII. Interpretation
IX. Fairness of Contracts
X. Contractual Liability
XI. Outlook
3. When AI Meets Smart Contracts: The Regulation of Hyper-Autonomous Contracting Systems?
I. Introduction
II. Hyper-Autonomous Contracting: Super-Smart Contracts?
III. The 'Rule of Code'? Insights from Relational Contract Scholarship
IV. Conclusion
4. A Philosophy of Contract Law for Artificial Intelligence: Shared Intentionality
I. Introduction
II. Artificial Intelligence in Contracting: Past, Present, and Future
III. Objective Intent and the Attribution of Intentionality
IV. Shared Intentionality as the Core of Contractual Obligation
V. Conclusion
5. From Document to Data: Revolution of Contract Through Legal Technologies
I. A New Season of the Contract
II. Legal as a Product and Lawyers as Legal Tech Integrators
III. The Legal Tech Software as a New Intermediary
IV. Compliance and Regtech Through Contract Software
V. Asymmetries and Clauses Evaluation
VI. Conclusion
PART II: DRAFTING, AI TOOLS FOR CONTRACTING AND CONTRACT ANALYSIS, MANAGEMENT
6. Legal Tech Solutions for the Management of the Contract Lifecycle
I. Introduction and Definitions
II. Drafting Phase
III. Review and Negotiation
IV. e-Signing
V. Storage and Post-signing Management
VI. Analysis Through AI & Machine Learning
VII. Choosing the Right Solution: Outsourcing, Custom Build Applications, No Code Solutions and Enterprise Marketplaces
VIII. Conclusions
IX. Appendix: Phases of the Contract Lifecycle and Providers Quoted in this Chapter
7. Building a Chatbot: Challenges under Copyright and Data Protection Law
I. Introduction
II. Technical Background of Chatbots
III. Legal Framework for Building Chatbots
IV. Conclusion
8. Legal Tech Solutions as Digital Services under the Digital Content Directive and E-Commerce Directive
I. Introduction
II. Do Legal Tech Services Fall within the Scope of the Digital Content Directive?
III. Objective Conformity Criteria for Legal Tech Services
IV. Legal Profession Rules as Objective Conformity Criteria under DCD?
V. Possibility to Exclude or Limit Liability for Legal Tech Services in Standard Terms
VI. Legal Tech Services and the Country-of-origin Principle of the E-Commerce Directive
VII. Conclusion
9. Contracting in Code
I. Introduction
II. A Primer on Translation
III. Logical Ancestors and the Formalistic Return
IV. A Study of Code
V. Observations and Implications
VI. Emerging Frontiers
10. Summarising Multilingual Documents: The Unexpressed Potential of Deep Natural Language Processing
I. Preliminaries
II. The State of the Art of Automated Text Summarisation
III. The Summarisation Pipeline
IV. Legal Document Summarisation
V. Conclusions and Future Research Directions
PART III: (NON-)PERFORMANCE, REMEDIES AND DISPUTE RESOLUTION
11. Remedies for Artificial Intelligence
I. Introduction
II. Overview of the Problems Raised by the Use of AI
III. Defects in Consent
IV. Force Majeure and Frustration
V. Specific Performance
VI. Damages
VII. The Quest for Explainability
VIII. Coding Law and Ethics into AI Algorithms
IX. Conclusion
12. Artificial Intelligence and Platform Services: EU Consumer (Contract) Law and New Regulatory Developments
I. Setting the Scene
II. Online Platforms as a Driving Force for AI in Consumer Markets
III. Instrumental Rationality of the EU acquis
IV. A High Level of Consumer Protection in the Age of Platforms and AI
V. Regulatory Developments: Proposed Artificial Intelligence and Digital Services Acts
VI. Insights for EU Consumer (Contract) Law
13. Artificial Intelligence and Anticompetitive Collusion: From the 'Meeting of Minds' towards the 'Meeting of Algorithms'?
I. Introduction
14. Artificial Intelligence and Contracts: Reflection about Dispute Resolution
I. Introduction
II. Smart Contract and Self-Driving Contract
III. The Use of AI in Dispute Resolution
IV. Conclusion
Index


CONTRACTING AND CONTRACT LAW IN THE AGE OF ARTIFICIAL INTELLIGENCE

This book provides original, diverse, and timely insights into the nature, scope, and implications of Artificial Intelligence (AI), especially machine learning and natural language processing, in relation to contracting practices and contract law. The chapters feature unique, critical, and in-depth analysis of a range of topical issues, including how the use of AI in contracting affects key principles of contract law (from formation to remedies), the implications for autonomy, consent, and information asymmetries in contracting, and how AI is shaping contracting practices and the laws relating to specific types of contracts and sectors. The contributors represent an interdisciplinary team of lawyers, computer scientists, economists, political scientists, and linguists from academia, legal practice, policy, and the technology sector. The chapters not only engage with salient theories from different disciplines, but also examine current and potential real-world applications and implications of AI in contracting and explore feasible legal, policy, and technological responses to address the challenges presented by AI in this field. The book covers major common and civil law jurisdictions, including the EU, Italy, Germany, UK and the US. It should be read by anyone interested in the complex and fast-evolving relationship between AI, contract law, and related areas of law such as business, commercial, consumer, competition, and data protection laws.


Contracting and Contract Law in the Age of Artificial Intelligence

Edited by
Martin Ebers, Cristina Poncibò and Mimi Zou

HART PUBLISHING
Bloomsbury Publishing Plc
Kemp House, Chawley Park, Cumnor Hill, Oxford, OX2 9PH, UK
1385 Broadway, New York, NY 10018, USA
29 Earlsfort Terrace, Dublin 2, Ireland

HART PUBLISHING, the Hart/Stag logo, BLOOMSBURY and the Diana logo are trademarks of Bloomsbury Publishing Plc

First published in Great Britain 2022
Copyright © The editors and contributors severally 2022

The editors and contributors have asserted their right under the Copyright, Designs and Patents Act 1988 to be identified as Authors of this work.

All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage or retrieval system, without prior permission in writing from the publishers.

While every care has been taken to ensure the accuracy of this work, no responsibility for loss or damage occasioned to any person acting or refraining from action as a result of any statement in it can be accepted by the authors, editors or publishers.

All UK Government legislation and other public sector information used in the work is Crown Copyright ©. All House of Lords and House of Commons information used in the work is Parliamentary Copyright ©. This information is reused under the terms of the Open Government Licence v3.0 (http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3) except where otherwise stated. All Eur-lex material used in the work is © European Union, http://eur-lex.europa.eu/, 1998–2022.

A catalogue record for this book is available from the British Library.
A catalogue record for this book is available from the Library of Congress.
Library of Congress Control Number: 2022933882

ISBN: HB: 978-1-50995-068-3; ePDF: 978-1-50995-070-6; ePub: 978-1-50995-069-0

Typeset by Compuscript Ltd, Shannon

To find out more about our authors and books visit www.hartpublishing.co.uk. Here you will find extracts, author information, details of forthcoming events and the option to sign up for our newsletters.

PREFACE

Today we are witnessing the rapid development of artificial intelligence (AI), which is enabling software agents to carry out a growing variety of transactions independent of human intervention. Some of these transactions include comparative pricing, negotiating contractual terms, and buying and selling goods and services. A broad question underlying numerous contributions to this book is whether the use of AI in contracting can support or, perhaps in the near future, even replace humans when it comes to entering into, drafting, and executing contracts.

AI and other new technologies such as blockchain-based smart contracts can remove various forms of human intervention from contractual transactions. Accordingly, the law must find a way to enforce transactions between AI and a human contracting party where, strictly speaking, there is no 'consensus' or 'meeting of the minds' in the traditional definition of these concepts. This is crucial for machine-generated transactions where human intervention is absent from at least one side of the transaction. Indeed, AI systems and tools in use today can enter into transactions whose details are unknown to humans. In some cases, contracts are concluded with content that was never explicitly intended, foreseen, or authorised by their programmers or users. In these cases, such AI systems are not mere instruments for contractual transactions. They are not static vending machines that merely serve as conduits through which contractual transactions take place.

While extra-contractual liability for AI has been widely analysed by scholars in various jurisdictions, the specific impact of AI on our understanding of contracting and contract law in this new age remains largely unexplored. This book thus provides original, diverse, and timely insights into the nature, scope, and impact of AI – particularly machine learning and natural language processing – in relation to contracting practices and contract law. The chapters of this book provide in-depth analyses of a range of topical issues, including how the use of AI in contracting affects key principles of contract law (from formation to remedies); the impact of AI on autonomy, consent, and information asymmetries in contracting; and how AI is shaping contracting practices and the laws relating to specific types of contracts and sectors. The chapters not only examine these issues by drawing on key theories from different disciplines, but also analyse real-world applications of AI in contracting. The chapters further explore possible legal, policy, and technological responses to the challenges that AI poses in this field.

The contributors to this book represent an interdisciplinary team of lawyers, computer scientists, economists, political scientists, and linguists from academia,

legal practice, policy, and the technology sector. Furthermore, the book covers major common law and civil law jurisdictions that are at the forefront of AI development and application globally, including the European Union (EU), the United Kingdom (UK), and the United States (US).

AI in Contractual Decision-making and Drafting

Numerous chapters of the book explore, inter alia, how AI may help (and in some cases, replace) humans in contractual decision-making and drafting. AI may be used in the pre-contractual phase to assess whether or not a party should enter into a contract at all. This involves several aspects. AI systems can encourage people to enter into a contract by identifying the potential needs to be met by the contract. To this end, some companies and banks are using AI to analyse prospective corporate mergers or takeovers. In the same vein, a company may use AI for location-based intelligence to predict specific product purchases by customers in a particular locality.

AI can help to enhance decision-making processes by analysing large datasets that reveal a range of contractual risks. AI systems make it possible to undertake due diligence of hundreds of thousands of documents in a very short time, and to identify issues that need to be flagged to a human reviewer. AI-driven risk analysis tools can also be deployed to assess individual risk profiles. Several AI systems are already in use by financial and insurance institutions to evaluate their customers in relation to whether or not a loan may be granted or an insurance policy taken out, and on what terms.

In addition to risk analysis, AI can also improve the analysis of transaction costs. Economists have examined how people decide between different contractual alternatives on the basis of the related transaction costs. AI-driven pricing software can facilitate this decision-making process through intelligent data analytical tools.

Moreover, AI can help with the analysis and drafting of contract clauses. AI-driven contract review software, which analyses contract clauses, makes it possible to identify certain risks. Applications of this type are already widespread in several jurisdictions such as the US and UK. Nevertheless, many chapters of this book point out that automated drafting of contracts is still in its infancy. In fact, most software solutions currently available are based on standard templates that have been adapted to the circumstances of an individual case with the help of questionnaires created by virtual robot assistants, so-called 'chatbots'. Another method involves writing a contract directly in code to integrate with a computer system for managing and executing contracts. The contract simultaneously becomes part of the database that will later be used to train the AI system for drafting contracts.

Finally, AI can support decision-making during the lifetime of a contract. Although the use of AI is not yet widespread in this area, it is foreseeable that

relevant use cases will become more common. Such applications may be able to evaluate the advantages and disadvantages of certain options available to parties during the performance phase of a contract. AI may even help parties decide whether or not a breach of contract is more advantageous than performing the contract by predicting the respective costs of performance and non-performance.

AI in Contract Formation and Execution

The chapters of this book also examine use cases of AI in contract formation and execution. We mention two scenarios here: first, the role of AI as a contractual agent, ie, acting on behalf of humans and/or other computers; second, the potential role of AI (in the future) as an autonomous contractual party.

First, AI can be considered an agent in contract law in the context of computer systems that can be programmed by humans to make certain contractual decisions. Thus, a machine can decide whether or not to conclude a contract if specific predetermined conditions are met. This is the case with algorithmic trading, which accounts for a significant proportion of financial transactions globally. Based on hundreds of parameters, a computer system can decide to buy or sell a particular financial product at a certain price, in a certain volume, at a certain time, in a certain market, and under certain conditions. Moreover, AI systems may also decide when and how to perform a contract. The mechanical execution of contracts is well known, for example, in beverage vending machines. In such a case, no intelligence is required. The same applies to smart contracts involving software code that automates the execution of a contract.

Second, in the future, AI systems may become autonomous contractual parties. This is where AI no longer acts as an agent for others, but for itself. It is autonomous in the literal sense of auto-nomos: governed by its own laws. To be party to a contract, one must generally have the capacity to contract, ie, possess legal personality. In a resolution adopted in 2017, the European Parliament supported the idea of 'creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently'.

At the dawn of a new era, AI has the potential to radically impact decision-making in contracting, with significant implications for contract law. This raises serious research questions that numerous contributors to this book have analysed from different perspectives. For example, with what and/or whom is the human contracting party transacting? From a civil law perspective, a question arises as to whether the contract is concluded by the interposition of a 'thing' or a 'person'. Intuitively, we may see the interposition of a 'thing' in this case: the contract is concluded by AI as an instrument in the same way that a phone or computer may

be used to enter into a contract. However, the use of AI here involves a certain level of autonomy in contracting. So, should we conclude that, more than the interposition of a 'thing', it is AI that actually realises the interposition of a form of 'person'? In both civil and common law jurisdictions, the law already grants legal personality to non-human contracting parties such as corporations. The possibility of granting legal personality to AI systems is, however, disputed among legal scholars.

If we classify the AI system involved in the conclusion of a contract as an 'agent' acting on behalf of the system's user (the 'principal'), this could give rise to some practical problems. Notably, the expression of the user's will may be limited due to the autonomy of the AI. Depending on the circumstances, the distribution of risks between the 'principal' and 'agent' may differ. For instance, if the AI system itself does not function properly or is poorly programmed and makes bad decisions, the risks must be assumed by the developer. However, if the bad decisions made by the AI system reflect defects in a poorly designed contract, risk sharing between the parties could be considered. In this latter case, it may be necessary to provide certain legal protection to some contracting parties such as consumers. For example, consumers may be given the right to an extended period of time to cancel the contract, which would enable them to regain their decision-making power to some extent.

Third, and importantly, Alan Turing, the father of modern computing, studied the following question: can machines think? In terms of contract law theory, the relevant questions are: can AI systems make contracts, or more broadly, make decisions instead of a contracting party? Does such a machine have a place in modern contract law? The use of AI in contracting has the potential to shake up core aspects of contract law theory, such as agreement, consensus, and the intention of the parties. For instance, even if we were to decide on policy grounds to impose contractual liability on individuals who use or program AI systems, it is difficult to regard these individuals as 'offerors' in the conventional understanding of this concept. Such individuals have no idea if or when the system will make an offer; they have no idea what the terms of the offer will be; and they have no easy way to influence negotiations once the transaction is underway. Moreover, if we consider relational contract theories, such as the work of Ian MacNeil, contracts should be studied and understood as (human) relations rather than as discrete transactions. AI systems have the potential to replace the relational aspects of contracts, such as trust, promise, consent, and enforcement.

Conclusion

The chapters in this book highlight the multifaceted and complex nature and implications of AI in relation to contracting and contract law. There is a dominant discourse surrounding the use of AI to drive efficiency in contracting, such as cost reduction, process optimisation, and disintermediation. This economic logic can

directly clash with the law's own logic and values such as the importance of good faith, fairness, and protection of the weaker party to a contract.

We conclude by considering the risk of viewing AI as displacing the human elements of contracts. Contracts are created by and for humans to undertake a range of activities, even if contracts entail (non-human) legal persons such as corporations. Furthermore, contract law pays attention to the human elements of contracting, such as defects in consent and the intent of the parties. It remains to be seen whether AI will marginalise these human elements of contracting and contract law. As AI develops, it may remove humans from important processes of decision-making in relation to contracting. This is already reflected in algorithmic trading, whereby the assumption is that humans cannot make decisions in microseconds (that is, in millionths of a second) when faced with vast amounts of data. There is a real danger in heralding the superiority of AI over humans in achieving optimal economic outcomes in contracting. We believe that the human elements are what essentially make contracts and contract law 'living institutions' that serve a range of important market and non-market purposes.

Martin Ebers
Cristina Poncibò
Mimi Zou


ACKNOWLEDGEMENTS

The contributions to this book are primarily based on the presentations given at the virtual conference 'Contracting and Contract Law in the Age of Artificial Intelligence', held at the University of Turin on 11 and 12 February 2021. The conference received funding from the Journal of Artificial Intelligence (AIJ) under the AIJ's 22nd call for Sponsorship. We are grateful to the people who made this conference possible and would especially like to thank Prof. Luca Cagliero (Department of Control and Computer Engineering of Turin Polytechnic), Prof. Massimo Durante (Department of Law of the University of Turin) and Dr. Willem Wiggers (Weagree) for chairing the sessions of our conference. In addition, we would like to express our gratitude to the Interdepartmental Research Center for Artificial Intelligence of the University of Eastern Piedmont, the Department of Control and Computer Engineering of Turin Polytechnic, the Turin Observatory on Economic Law & Innovation (TOELI) and the Robotics & AI Law Society (RAILS) for supporting the conference and this book. Last but not least, we would like to thank the wonderful team at Hart, especially Rosemarie and Roberta, for their continued support and patience in helping us at every step of the way in publishing this book.


CONTENTS

Preface v
Acknowledgements xi
Notes on Contributors xv

PART I: FORMATION OF CONTRACT, AUTONOMY AND CONSENT

1. Mapping Artificial Intelligence: Perspectives from Computer Science (Luigi Portinale) 3
2. Artificial Intelligence, Contracting and Contract Law: An Introduction (Martin Ebers) 19
3. When AI Meets Smart Contracts: The Regulation of Hyper-Autonomous Contracting Systems? (Mimi Zou) 41
4. A Philosophy of Contract Law for Artificial Intelligence: Shared Intentionality (John Linarelli) 59
5. From Document to Data: Revolution of Contract Through Legal Technologies (Silvia Martinelli and Carlo Rossi Chauvenet) 81

PART II: DRAFTING, AI TOOLS FOR CONTRACTING AND CONTRACT ANALYSIS, MANAGEMENT

6. Legal Tech Solutions for the Management of the Contract Lifecycle (Giulio Messori) 99
7. Building a Chatbot: Challenges under Copyright and Data Protection Law (Aleksei Kelli, Arvi Tavast and Krister Lindén) 115
8. Legal Tech Solutions as Digital Services under the Digital Content Directive and E-Commerce Directive (Karin Sein) 135
9. Contracting in Code (Megan Ma) 155
10. Summarising Multilingual Documents: The Unexpressed Potential of Deep Natural Language Processing (Luca Cagliero) 181

PART III: (NON-)PERFORMANCE, REMEDIES AND DISPUTE RESOLUTION

11. Remedies for Artificial Intelligence (Cristina Poncibò) 201
12. Artificial Intelligence and Platform Services: EU Consumer (Contract) Law and New Regulatory Developments (Monika Namysłowska and Agnieszka Jabłonowska) 221
13. Artificial Intelligence and Anticompetitive Collusion: From the 'Meeting of Minds' towards the 'Meeting of Algorithms'? (Giuseppe Colangelo) 249
14. Artificial Intelligence and Contracts: Reflection about Dispute Resolution (Paola Aurucci and Piercarlo Rossi) 267

Index 283

NOTES ON CONTRIBUTORS

Paola Aurucci is a research fellow at the Department of Management of the University of Turin. For several years she has been a researcher at the University of Eastern Piedmont and since 2016 she has been collaborating with the Faculty of Philosophy of Law at the University of Turin (Department of Law). Since 2017, she has collaborated as a researcher and law consultant at the Center for Advanced Technology in Health & Wellbeing of the San Raffaele Hospital in Milan. Since 2021, she has collaborated as a Privacy Consultant with the Ethical Committee of the Hospital San Raffaele.

Luca Cagliero is Associate Professor at the Politecnico di Torino, where he teaches B.Sc., M.Sc., Ph.D., and Master-level courses on database and data warehouse design, data mining and Machine Learning techniques, and Deep Natural Language Processing. He is currently a member of the Centro Studi@PSQL board for data-driven planning of strategic, university-level actions. He is also the academic advisor of both incoming and outgoing students for the Data Science and Engineering exchange program. Since 2017, he has been a member of the SmartData@Polito interdepartmental research center. His research activities are mainly devoted to studying innovative data mining and machine learning solutions and algorithms, with a particular emphasis on text summarisation, classification of structured and semi-structured data, and multiple-level pattern mining. Luca is associate editor of the Expert Systems With Applications and Machine Learning With Applications journals, both published by Elsevier. He has co-authored 100+ scientific publications. He has been the recipient of the Telecom Italia Working Capital 2011 Research Grant. He has coordinated scientific collaborations with big companies (eg, F.C.A. SpA, Telecom SpA, Telepass SpA) and with various SMEs (eg, Tierra SpA, Reale Mutua Assicurazioni, Pattern Srl).

Giuseppe Colangelo is a Jean Monnet Professor of European Innovation Policy and an Associate Professor of Law and Economics at the University of Basilicata. Since 2017, he has been a Transatlantic Technology Law Forum (TTLF) Fellow at Stanford University Law School. He also serves as Adjunct Professor of Markets, Regulation and Law, and of Competition and Markets of Innovation at LUISS. His primary research interests are related to innovation policy, intellectual property, competition policy, market regulation, and economic analysis of law.

Martin Ebers is Associate Professor of IT Law at the University of Tartu (Estonia) and, as 'Privatdozent', permanent research fellow at the Humboldt University of Berlin (Germany). He is co-founder and president of the Robotics & AI Law

Society (RAILS). Martin is the author and editor of 16 books and over 120 articles published in national and international journals. In addition to research and teaching, he has been active in the field of legal consulting for many years. His main areas of expertise and research are IT law, liability and insurance law, and European and comparative law. In 2020, he published the books 'Algorithms and Law' (Cambridge University Press), 'Rechtshandbuch Künstliche Intelligenz und Robotik' (C.H. Beck) and 'Algorithmic Governance and Governance of Algorithms' (Springer Nature).

Agnieszka Jabłonowska is an assistant professor at the Institute of Law Studies of the Polish Academy of Sciences in Warsaw, where she works on the project 'Citizen empowerment through online terms of service review: an automated transparency assessment by explainable AI'. Prior to that she was a Max Weber Fellow at the European University Institute in Florence, and a research assistant and PhD researcher at the University of Lodz. Her academic interests lie at the intersection of law and technology, with a focus on consumer protection, online platforms and artificial intelligence. Agnieszka has been involved, among others, in the European Law Institute's project 'Model rules on online platforms', the European University Institute's project 'Artificial intelligence systems and consumer law & policy' (ARTSY) and the project 'Consumer protection and artificial intelligence. Between law and ethics' carried out at the University of Lodz.

Aleksei Kelli is Professor of Intellectual Property Law at the University of Tartu, Estonia. He is a member of the court of honour of the Estonian Bar Association and the CLARIN ERIC Legal and Ethical Issues Committee. Aleksei holds a doctorate (PhD in Law) from the University of Tartu (2009). Aleksei has acted as the Head of an Expert Group on the Codification of the Intellectual Property Law (2012–2014, the Ministry of Justice of Estonia). He was the principal investigator in the Programme for Addressing Socio-economic Challenges of Sectoral R&D in the fields of industrial property (2017–2018) and open science (2016–2017). Dr Kelli managed a project to improve industry-academia cooperation and knowledge transfer in Ukraine (2015–2016) and was the leading intellectual property expert in the research and innovation policy monitoring programme (2011–2015). Dr Kelli was also a Member of the Team of Specialists on Intellectual Property (2010–2013, the United Nations Economic Commission for Europe). He has taken part in several EU and Estonian R&D projects as a leading IP, innovation, and data protection expert. Dr Kelli has published numerous works on intellectual property, innovation, personal data protection, knowledge transfer, cultural heritage and related issues.

John Linarelli is Associate Dean for Academic Affairs and Professor of Law at Touro University Jacob D. Fuchsberg Law Center in Central Islip, New York. He was, until July 2020, Professor of Commercial Law at Durham University Law School in the UK. His work on artificial intelligence draws connections between moral psychology, philosophy of mind, and law. His many publications include

'Artificial Intelligence and Contract', published in the Uniform Law Review. He is the shared recipient of the European Society of International Law Book Award for 2019. He is an elected member of the American Law Institute and a fellow of the European Law Institute. He is series co-editor of Hart Studies in Commercial and Financial Law. He holds a PhD in Philosophy from the University of California, Riverside and a PhD in Law from King's College London.

Krister Lindén is Research Director of Language Technology at the Department of Digital Humanities at the University of Helsinki. He is National Coordinator of FIN-CLARIN, has served as Chair of the CLARIN Strategy and Management Board, is Vice Chair of the CLARIN National Coordinators' Forum, and participates in the CLARIN Legal and Ethical Issues Committee and the CLARIN Interoperability Committee. He holds a doctorate (PhD in Language Technology, 2005) from the University of Helsinki. He serves as National Anchor Point in ELRC and is Finland's representative in ELG. He has experience as CEO and CTO of the commercial company Lingsoft Inc., with the successful application and completion of several EU projects. He is very familiar with current methods and branches within language and speech technology, has directed a number of research projects funded by the Academy of Finland, and is currently Vice-Team Leader of the Center of Excellence in Ancient Near Eastern Empires. In addition to having developed software for processing resources for the national languages of Finland, he has published more than 140 peer-reviewed scientific publications.

Megan Ma is a Residential Fellow at the Stanford Center for Legal Informatics (CodeX). Her research considers the limits of legal expression, in particular how code could become the next legal language. Her work reflects on the frameworks of legal interpretation and their overlap with linguistics, logic, and aesthetic programming. Megan is also the Managing Editor of the MIT Computational Law Report and a Research Affiliate at Singapore Management University in their Centre for Computational Law. She is finishing her PhD in Law at Sciences Po and was a lecturer there, having taught courses in Artificial Intelligence and Legal Reasoning, Legal Semantics, and Public Health Law and Policy. She has previously been a Visiting PhD student at the University of Cambridge and Harvard Law School respectively.

Silvia Martinelli is a lawyer, Philosophiae Doctor (PhD), Research Fellow at the University of Turin, Co-founder and Fellow at the Turin Observatory on Economic Law and Innovation (TOELI), Fellow at the Information Society Law Center (ISLC) and Strategic Research Manager at Data Valley. Graduated in Law at the University of Milan and specialised in Legal Informatics, she obtained her doctorate with merit from the University of Turin, with a thesis on the platform economy and the responsibility of intermediary platforms, focusing on Uber, Airbnb, Amazon and eBay. At Data Valley she is Strategic Research Manager and deals with research related to data-driven business models and the development of high-tech solutions and services. She is also co-founder and member of the Editorial

Board of the Journal of Law, Market & Innovation (JLMI), TOELI Research Papers coordinator, member of the Board of the Journal of Strategic Contracting and Negotiation and of the Editorial Committee of the law reviews 'Ciberspazio e Diritto', 'Diritto, Mercato e Tecnologia' and 'Diritto di Internet', Fellow of the European Law Institute and of the Italian Academy of Internet Code, and Member of the European Law & Tech Network.

Giulio Messori is a co-founder of Sweet Legal Tech (SLT), an Italian legal tech start-up offering consulting, education and the integration of existing legal tech solutions with the aim of transforming legal and administrative processes in legal teams. Prior to SLT, he worked as a Legal Counsel at Chino.io, a company providing cloud and IT infrastructure solutions to the healthcare sector, and as a Data Protection and IT Law Senior Associate at CRCLEX, advising companies in the Digital-Out-Of-Home, Digital Signage, Healthcare and IoT sectors on Privacy, Data Governance and Data Security compliance. In 2019, with two fellow colleagues at Google Italy, he co-founded develawpers.com, an Open-Coding Library dedicated to the collection and illustration of coding cases in the world of law. Giulio holds an LL.M. (Master of Laws) in Law of Internet Technology, obtained cum laude at Bocconi University (Milan, Italy), and a Master's Degree in Law with full marks from the University of Bologna (Italy).

Monika Namysłowska is Professor of Law and Head of the Department of European Economic Law at the University of Lodz, Poland. Her main areas of research cover European, Polish and German private law, particularly IT law and consumer law. She was a visiting professor in Germany (Humboldt-University, Berlin; Georg-August-University, Göttingen; University of Regensburg; University of Münster), Italy (University of Naples Federico II), Spain (Universidad Publica de Navarra in Pamplona) and Hungary (University of Szeged). She is principal investigator in the project 'Consumer Protection and Artificial Intelligence. Between Law and Ethics' funded by the National Science Centre in Poland (DEC/2018/31/B/HS5/01169). She is local coordinator of TechLawClinics, an international project (University of Nijmegen (NL), University of Lodz (PL), University of Krakow (PL), University of Eastern Piedmont (IT)) on legal challenges and implications of digital technologies, supported by Erasmus+. She was a member of the Advisory Board of the President of the Office of Competition and Consumer Protection (UOKiK) in Poland (2014–2016). She is an expert in the Consumer Policy Advisory Group established by the European Commission.

Carlo Rossi Chauvenet is Managing Partner of CRCLEX, a law firm specialising in IT law and privacy law, and is appointed as Data Protection Officer (DPO) in listed companies. Carlo is the coordinator of the Legal Clinic of the start-up accelerator 'Bocconi4Innovation', and co-founder of legal tech companies such as 'Iubenda', an automatic privacy policy generator, and 'Sweet Legal Tech', a consultancy company in legal tech. Carlo is Adjunct Professor of 'Privacy Law and Data Strategy' at the LLM in Law of Internet Technology at Bocconi University, at the

Master for Corporate Counsels and at the Master in Open Innovation Management at the University of Padova. He is also Chair of the 'National Centre for IOT and Privacy' and manager of the 'Data Valley' initiative, a program dedicated to the development of partnerships between SMEs and digital multinational companies. Education: BA (hons.) in Law (Bocconi University), PhD (University of Padua), LLM in International Corporate Law (New York University School of Law), LLM in International Commerce (National University of Singapore).

Cristina Poncibò is Professor of Comparative Private Law at the Law Department of the University of Turin, Italy and Visiting Professor (2021–2022) at the Georgetown Law Center for Transnational Legal Studies in London. Cristina is Fellow of the Transatlantic Technology Law Forum (Stanford Law School and Vienna School of Law). She is a co-editor of the Cambridge Handbook of Smart Contracts, Blockchain Technology and Digital Platforms (Cambridge University Press, 2019, with L DiMatteo and M Cannarsa). Cristina is a member of the International Association of Comparative Law and Delegate of the Law Department (sponsor institution) to the American Association of Comparative Law. She is also the scientific director of the Master in International Trade Law, co-organised with ITC-ILO, in cooperation with UNCITRAL and UNIDROIT. Cristina is a graduate of the University of Turin (MA) and Florence (PhD). In her career, she has been a Marie Curie IEF Fellow (Université Panthéon-Assas) and a Max Weber Fellow (EUI).

Luigi Portinale is full Professor of Computer Science and Artificial Intelligence at Università del Piemonte Orientale (UPO) in Italy. He has been working on AI for more than 30 years, dealing with several topics: knowledge representation, uncertain and probabilistic reasoning, case-based reasoning, machine learning and deep learning. He is currently the director of the Research Center on Artificial Intelligence at UPO (AI@UPO). He has published one book and more than 150 papers in international journals and refereed conference proceedings, and has edited several volumes. He is a member of the Italian Association for Artificial Intelligence (AI*IA) and of the Association for the Advancement of Artificial Intelligence (AAAI, formerly American Association for AI).

Piercarlo Rossi is full Professor of Private Law, International Contract Law and Comparative Legal Systems at the Department of Management of the University of Turin. Since 2019 he has been President of the University Institute of European Studies (IUSE) in Turin. He is a member of the steering committee of several publications such as the Springer series 'Law, Governance and Technology'. Prof. Rossi is scientific director and coordinator of numerous national and European projects. His research interests are mainly focused on the comparison of legal reforms occurring in Europe and Asia, law and economics, and law and information technology.

Karin Sein is Professor of Civil Law and the Deputy Head of the Institute of Private Law in the Faculty of Law of the University of Tartu, Estonia. Her main

research interests cover domestic and European contract law, consumer law, private international law, international civil procedure, and law and digitalisation. Since 2018, she has been leading a four-year scientific project funded by the Estonian Research Council concentrating on consumer contract law in the Digital Single Market. In recent years, she has provided expertise for the Estonian Ministry of Justice on implementing European consumer protection directives into Estonian contract law. During the Estonian EU Presidency in July–December 2017, she acted as Chair of the Council Working Group for the Proposals of the Directive on Digital Content and of the Directive on Online Sale of Consumer Goods.

Arvi Tavast is Director of the Institute of the Estonian Language. Arvi holds a PhD from the University of Tartu (2008). Dr Tavast has been involved in the development of dictionaries and language models in the public and private sectors (eg, he acted as the developer for Ekilex). Arvi has numerous publications on language technology, copyright and personal data. He has also supervised several theses.

Mimi Zou is Associate Professor (Reader level) at the School of Law, University of Reading, UK. She was formerly the Principal Investigator of the Deep Tech Dispute Resolution Lab at the University of Oxford. Mimi's research interests are in comparative contract and commercial law and the intersection between law and technology, especially in the area of dispute resolution.

PART I: Formation of Contract, Autonomy and Consent


1
Mapping Artificial Intelligence: Perspectives from Computer Science
LUIGI PORTINALE

I. Introduction

Artificial Intelligence (AI) is a mature discipline stemming from computer science, now pervading every discipline (scientific and non-scientific) and several aspects of our everyday life. Even if understanding the design and implementation of intelligent systems requires specific skills in computer science, mathematics and statistics, gaining awareness about AI and the impact of related technologies is now becoming a must in the humanities as well, and law is no exception.

As shown in Figure 1 below, AI sits at the intersection of several subfields of mathematics (in particular, logic, probability theory, statistics, optimisation, calculus, linear algebra) and computer science (theory and practice of algorithms and data structures, complexity theory, programming languages, computational architectures).

[Figure 1: The AI landscape]

Moreover, a specific subfield of AI (exploiting substantial elements of both mathematics and computer science) is the Machine Learning (ML) field; the goal of ML is to provide artificial systems (let us call them agents) with the capability to learn from experience. Even if ML is very often confused with AI 'in toto', it should be clear that it is just a subfield with specific goals concerning automatic learning from data and situations, while AI has a wider objective concerning the construction of an intelligent agent that, once it has learned the needed knowledge (either from data or because it was externally provided in some other form), exploits that knowledge to perform a specific task that we usually consider to require some form of intelligence to be solved.

For example, old-fashioned expert systems1 were usually built without any learning component; the knowledge base needed to perform the target task (eg, the suggestion of a therapy, the discovery of a mineral deposit, the design of a computer configuration, just to cite some real-world applications that have been tackled by this approach) was usually constructed through a manual process called knowledge engineering, where the computer scientist had to work in close contact with a 'domain expert', with the goal of transferring some of the expert's knowledge into the system in a suitable formalism. This approach quickly showed its limits, highlighting more and more the need for the automatic learning promised by ML approaches (discussed further below).

On the other hand, while evaluating the impact on and the influence of other disciplines, we can notice that, in addition to computer science and mathematics, AI plays an important role in, and is somehow influenced by, several other disciplines: some scientific (as in the case of biology and neuroscience), some more related to humanities (as in the case of philosophy, sociology and law) and some at the border between the two (cognitive science and psychology). It is worth noting that this influence is bidirectional: AI is definitely bringing new opportunities and interesting applications to all such fields, while at the same time it can take insights and principles from them.

For instance, cognitive science has been the inspiration for the so-called Case-Based Reasoning (CBR) paradigm, one of the most popular reasoning frameworks in the application of AI to law. The idea is to solve new problems, or to interpret new situations, by resorting to the solution or the interpretation of similar problems already solved (or interpreted) in the past. This methodology for problem solving, which avoids reasoning from scratch every time a new problem is encountered, is a typical cognitive scheme of humans that can be successfully transferred to artificial agents.

Another source of inspiration comes from neuroscience: our brain, which is the 'tool' by which we, as humans, perform our reasoning patterns, is composed of several billions of interconnected cells called neurons; each connection is called a synapse, and the information needed to perform our reasoning and activities flows from neuron to neuron through the synapses in the form of electrical signals. Even if we do not really know how this process actually works in detail, an ultra-simplified version of the brain, the so-called neural network model, is one of the most successful methodologies in modern AI, and it forms the basis of the ML approach called Deep Learning.
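The retrieve-and-reuse scheme at the heart of CBR, as described above, can be made concrete in a few lines of code. The following is a minimal sketch under invented assumptions: the case base, the features, and the distance measure are illustrative only, and are not drawn from any system discussed in this chapter.

```python
# A minimal sketch of case-based retrieval: past cases are feature vectors,
# and the solution of the most similar one is reused (1-nearest neighbour).
# The case base and distance measure are invented for illustration.
import math

case_base = [
    # (features of a past problem, solution adopted at the time)
    ({"amount": 10_000, "duration_months": 12, "secured": 1}, "grant loan"),
    ({"amount": 50_000, "duration_months": 60, "secured": 0}, "reject loan"),
    ({"amount": 20_000, "duration_months": 24, "secured": 1}, "grant loan"),
]

def distance(a, b):
    """Euclidean distance over the shared numeric features."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def retrieve(new_problem):
    """Return the solution of the most similar past case."""
    _, solution = min(case_base, key=lambda case: distance(case[0], new_problem))
    return solution

print(retrieve({"amount": 15_000, "duration_months": 18, "secured": 1}))
# -> 'grant loan': the new problem is closest to a past case with that solution
```

Real CBR systems follow retrieval with an adaptation step that revises the retrieved solution for the new case, but even this toy version captures the core idea: compare with past experience instead of reasoning from scratch.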

1 P Jackson, Introduction to Expert Systems, 3rd edn (Addison Wesley, 1998).

What is really impressive is that, even if an artificial neuron has actually nothing to do with a natural neuron, the simplified reconstruction of the whole architecture of the brain, as done in an artificial neural network, seems to be able to mimic some important tasks like object classification, image recognition, interpretation of language sentences, and so on. And all of that is in essence only a matter of 'matrix manipulation', in other words a series of sums and multiplications of real numbers (ie, entities that a computer can handle easily, in a much more efficient way than a human being can).

The path to the current stage of AI started in 1956, when a group of researchers headed by John McCarthy organised the Dartmouth Summer Research Project on Artificial Intelligence, which is now considered the official birthplace of the discipline, and where the term 'Artificial Intelligence' was coined.2 The initial steps of AI were essentially based on the exciting possibility of building programs able to exhibit intelligent behaviour at the human level. However, the initial optimistic predictions about such a possibility immediately faced the hard truth: understanding and replicating human intelligence is an extremely difficult problem and involves the definition of sophisticated methodologies, as well as the availability of suitable computational resources. The history of AI has thus evolved between ups and downs in the so-called 'AI seasons', where spring days (with a lot of funding and active projects) were followed by winter days (where funding almost completely disappeared, and the disappointment with the results was very high).

Today, we are experiencing a very sunny summer, with a renewed interest in AI methods and applications never seen before. This is due essentially to the availability of three main 'assets': new methodologies for AI model building and refinement (from basic research in computer science and related disciplines), a huge amount of available data of very different types (text, images, sounds, structured data, etc …), and finally large-scale and high-performance computational resources. The rise of the so-called deep learning approach3 allows us to address very difficult problems requiring complex reasoning capabilities as in decision support, reactive behaviour as in autonomous devices, interactive behaviour as in personal assistant devices understanding natural language, and so forth. The impact of such methodologies pushes the current technologies towards several new applications, producing complex side-effects on the everyday environment, in particular at the socio-economic, juridical and ethical level.

In the present chapter, we will review the mainstream approaches that have been developed in AI over the years, up to the current rise of deep learning. We will then discuss the current trends, their strengths and limitations, and the impact on the socio-economic system of the adoption of more and more sophisticated intelligent devices and tools.
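To make the earlier 'matrix manipulation' remark concrete, here is a minimal sketch of the forward pass of a tiny two-layer neural network. The layer sizes and random weights are arbitrary choices for illustration, not taken from the chapter:

```python
# Forward pass of a tiny two-layer neural network, written as plain
# matrix arithmetic. Sizes and weight values are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=4)          # input: 4 real numbers (eg, pixel intensities)
W1 = rng.normal(size=(3, 4))    # weights of a hidden layer with 3 neurons
b1 = np.zeros(3)
W2 = rng.normal(size=(2, 3))    # weights of an output layer with 2 classes
b2 = np.zeros(2)

h = np.maximum(0, W1 @ x + b1)  # each 'neuron': a weighted sum, then a threshold (ReLU)
scores = W2 @ h + b2            # two output scores; the larger one 'wins'

print(scores)
```

Training such a network consists of nothing more than adjusting the numbers in W1, b1, W2 and b2 until the scores match the desired outputs on example data; at no point does anything other than arithmetic on real numbers take place.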

2 J Moor, 'The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years' (2006) 27 AI Magazine 4, 87–90.
3 I Goodfellow, Y Bengio and A Courville, Deep Learning (MIT Press, 2016).


II. The AI Seasons

Since the very beginning, AI has raised a very fundamental question: what do we expect from an artificial system for it to be defined as 'intelligent'? In fact, as noticed by many researchers, the problem with AI is in the name itself: while we are pretty confident about the precise meaning of the word 'artificial' (ie, something that has been built by a human being), we do not have any precise definition of the term 'intelligence'. And this is because we associate so many facets with human intelligence that it becomes really hard to condense all of them into a single definition.

Returning to the original question, which concerns the actual goal of AI, two main schools of thought have taken hold: strong AI and weak AI. According to strong AI, the computer is not merely a tool in the study of the mind; rather, as stated by John Searle: 'The appropriately programmed computer really is a mind in the sense that computers given the right programs can be literally said to understand and have other cognitive states.'4 To demonstrate the impossibility for computer programs to achieve this level of cognition (including having consciousness), Searle imagined a thought experiment called the Chinese Room. Consider a computer program that takes as input a sequence of Chinese characters and, through a set of specific, and perhaps very complex, rules is able to produce other Chinese characters as output. The program is so sophisticated that it can pass the so-called 'Turing test' (it may be confused with a human being, if involved in a conversation with another human being).5 The argument raised by Searle is that the computer does not really understand what it is doing. To support this, he imagines himself being provided with an English version of the program instructions; if he receives a sequence of Chinese characters, he will be able to produce another sequence of Chinese characters by following the instructions, thereby simulating a Chinese conversation. However, he 'does not speak a single word of Chinese', and so he does not understand Chinese; in the same way, the program is also not able to understand the Chinese language.

Strong AI has not been seriously considered and investigated inside computer science, since more emphasis has been given to practical aspects concerning the construction of systems able to exhibit behaviour that may be considered intelligent by common sense. This is exactly the issue raised by the weak AI hypothesis: the goal is to build programs (software) that may exhibit intelligent behaviour in restricted and well-specified tasks. A computer program able to diagnose specific diseases in a particular field of medicine can be considered intelligent (from the weak AI point of view), even if the same program is not able to understand a sentence in any natural language, recognise any kind of physical object, or play a simple game like tic-tac-toe.

4 JR Searle, 'Minds, Brains and Programs' (1980) 3(3) Behavioral and Brain Sciences 417–457.
5 AM Turing, 'Computing Machinery and Intelligence' (1950) 59 Mind 433–460.

Moreover, a clear feature of what we consider intelligent behaviour is the ability to learn from experience. This has given birth to an important subfield of AI, which is ML. However, the construction of (weak) intelligent systems and the investigation of learning capabilities have usually followed quite separate paths, with several traditional AI systems (we can call them knowledge-based systems) built without a learning component, or with a learning component that was not tightly integrated into the system itself but used as a quite independent module. The equivalence that we see today in several documents and papers between AI and ML is therefore not properly justified, both because AI is a more general term than ML, and because of the historical reasons mentioned above.

The first years of AI were characterised by what John McCarthy called the 'look Ma, no hands' period. The time featured great optimism and very high expectations about what AI could achieve. Some of the most prominent figures of the time, like Herbert Simon (Nobel prize for economics in 1978, and the Turing award in 1975) and Marvin Minsky (Turing award in 1969), were confident enough to produce the following predictions:

• 'Machines will be capable, within twenty years, of doing any work a man can do.' (H Simon, 1965).
• 'Within a generation … the problem of creating artificial intelligence will substantially be solved.' (M Minsky, 1967).
• 'In from three to eight years we will have a machine with the general intelligence of an average human being.' (M Minsky, 1970).

None of these expectations were actually met, since the problems related to making AI successful were much harder to tackle than first imagined. AI started a cycle of seasons, alternating disappointments and failures with enthusiasm and successes. The first AI winter started in the 1970s, when it became clear that building systems able to exploit common-sense knowledge was actually very difficult, and when the limits of logic-based approaches (the dominant approaches at the time) became evident (in particular in dealing with situations involving uncertain knowledge and reasoning under uncertainty). Moreover, a discovery (that went down in history as Moravec's paradox) especially impacted the field: contrary to some traditional assumptions, basic sensory processing and perception seem to require significantly more computational resources than modelling high-level reasoning processes. In other words, it is easier to build a system able to play an intelligent game like chess at the human-champion level than to build a system having the same sensory capabilities as a two-year-old baby.

The recognition of such limits had the consequence of reducing the scope of AI methodologies, and this was actually a good choice, since it focused more attention on the more practical (and somewhat easier) weak AI hypothesis. Indeed, the 1980s became what has been called the first AI spring. Research in AI focused on an intelligent architecture called the expert system: a system able to exhibit competence at the human expert level, but only in very restricted areas like the

diagnosis of restricted classes of diseases, suggestion of specific antibiotic therapies, discovery of mineral deposits, determination of the optimal customer-specific configuration of a computer, and so forth. In practice, this was the victory of weak AI over strong AI. Expert systems like MYCIN,6 CADUCEUS,7 PROSPECTOR,8 and R19 became the most important representatives of the so-called rule-based systems, that is, systems whose reasoning paradigm consists in formalising the problem-solving knowledge of a specific problem or task as a set of if-then rules; each rule allows the system to test the occurrence of a specific condition (the if part) and, if this condition is found to occur, to obtain a conclusion (the then part). The idea was to have a mechanistic and possibly efficient way of simulating a form of deductive reasoning (ie, from premises to intermediate and then final conclusions).

In the same period in Japan, the so-called Fifth Generation Computer project was started, with the aim of using PROLOG-based logic programming as the main tool for building AI systems. PROLOG is a programming language born in Europe10 aimed at implementing the type of rule-based reasoning we mentioned above, but with a more formal semantics based on a specific (restricted) type of logical reasoning: resolution with Horn clauses.11 At that time, the main programming language used to build knowledge-based systems was LISP,12 a language proposed by John McCarthy that quickly became the standard (especially in the US) in the expert system industry. Some computer manufacturing companies even started the construction and marketing of ad-hoc computer architectures called Lisp Machines. They were actually general-purpose computers, but designed to efficiently run Lisp as their main software and programming language, usually via specific hardware support. LISP was one of the first functional languages (influenced by Alonzo Church's lambda calculus13) and its list manipulation primitives (the name is an acronym of LISt Processor) made it well suited to the symbolic manipulation operations required by the AI systems of that time. The Fifth Generation Computer project was, among other things, the Japanese answer to such a business operation driven

6 BG Buchanan and EH Shortliffe, Rule Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project (Addison-Wesley, 1984).
7 G Banks, 'Artificial intelligence in medical diagnosis: the INTERNIST/CADUCEUS approach' (1986) 1(1) Critical Reviews in Medical Informatics 23–54.
8 AN Campbell, VF Hollister, RO Duda and PE Hart, 'Recognition of a Hidden Mineral Deposit by an Artificial Intelligence Program' (1982) 217 Science 927–929.
9 J McDermott, 'R1: an Expert in the Computer Systems Domain', Proc First National Conference on Artificial Intelligence (AAAI 80) (AAAI Press, 1980) 269–271.
10 A Colmerauer and P Roussel, 'The birth of Prolog' (1993) 28(3) ACM Special Interest Group on Programming Languages (SIGPLAN) Notices 37–52.
11 JA Robinson, 'A Machine-Oriented Logic Based on the Resolution Principle' (1965) 12(1) Journal of the Association for Computing Machinery 23–41.
12 J McCarthy, R Brayton, D Edwards, P Fox, L Hodes, D Luckham, K Maling, D Park and S Russell, LISP I Programmers Manual (MIT Press, 1962): Artificial Intelligence Group, M.I.T. Computation Center and Research Laboratory (http://history.siam.org/sup/Fox_1960_LISP.pdf).
13 The Lambda Calculus, introduced by American mathematician Alonzo Church in the 1930s, is a formal system in mathematical logic for expressing computation based on function abstraction.
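The if-then reasoning style of rule-based systems can be sketched in a few lines of code. The following is a minimal forward-chaining illustration (the rules and facts are invented; real systems such as MYCIN also attached certainty factors to their rules):

    # Minimal forward chaining: keep firing if-then rules
    # until no new conclusions can be derived.
    rules = [
        ({"fever", "cough"}, "flu_suspected"),                  # if ... then ...
        ({"flu_suspected", "high_risk"}, "antiviral_recommended"),
    ]
    facts = {"fever", "cough", "high_risk"}

    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # the 'then' part fires
                changed = True

    print(facts)  # now includes 'flu_suspected' and 'antiviral_recommended'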

by American companies. Despite the failure of the original Japanese project, PROLOG is still adopted in several AI systems, especially through the extensions provided in modern releases, which make it possible to efficiently address combinatorial optimisation tasks such as timetabling, resource allocation, planning and scheduling.

Finally, by the end of the decade, the re-discovery of the backpropagation algorithm14 for learning the parameters of a neural network was the key to the resurgence of the so-called connectionist or sub-symbolic approach to AI, ie, the use of models that process non-symbolic information and are based on a metaphor of the connections between neurons in the human brain.

However, the expert system approach quickly showed its limits, producing another AI winter that lasted until the mid-1990s. The main unresolved issue was the 'knowledge acquisition bottleneck'. In fact, the main resource that makes an expert system work is the knowledge base, a repository of specific domain-dependent knowledge used by the system for the fulfilment of its task. The content of the knowledge base had to be elicited manually, through a collaborative activity between the domain expert (the expert in the domain addressed by the expert system) and the knowledge engineer (the scientist expert in AI methods, but usually unfamiliar with the domain of interest). This phase proved to be the hardest one in the development of an expert system, and together with the lack of practical ML methods and tools for automatically learning the needed knowledge, the difficulties in the knowledge acquisition process marked the end of the expert system era. Even connectionist models, which could exploit raw data more directly in learning the system parameters, were rapidly challenged by the complexity of real-world problems, and their limited learning capabilities were not able to offer practical solutions. The market for LISP machines started to collapse, and several computer companies that had made substantial investments in the expert system business had to change direction quickly, returning to more traditional kinds of applications.

New interest in AI methodologies (and a consequent new AI spring) started in the 1990s, when probabilistic methods came to the rescue. An important milestone was the publication of Judea Pearl's book15 advocating the use of probability theory to deal with both modelling and inference in intelligent systems. Probability had been rejected by some of the fathers of AI (John McCarthy and Patrick Hayes) as 'epistemologically and computationally inadequate' for AI.16 Pearl disputed this view, by showing that a consistent interpretation of probability theory is possible

14 The backpropagation algorithm was originally proposed in the context of control theory in the 1960s; its adoption in the machine learning setting is generally ascribed to David Rumelhart, Geoffrey Hinton and Ronald Williams, who showed how to efficiently learn the parameters of an artificial neural network through its use (DE Rumelhart, GE Hinton and RJ Williams, 'Learning Representations by back-propagating Errors' (1986) 323(6088) Nature 533–536).
15 J Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference (Morgan Kaufmann Publ, 1988).
16 J McCarthy and P Hayes,
‘Some Philosophical Problems from the Standpoint of Artificial Intelligence’, in B Meltzer and D Michie (eds), Machine Intelligence 4 (Edinburgh University Press, 1967) 463–502.

by means of graph-based formalisms called Probabilistic Graphical Models,17 whose main representatives are Bayesian Networks.18 The use of such formalisms for compact and efficient modelling of uncertain knowledge, the use of specialised inference algorithms to answer any kind of probabilistic query, and the discovery and application of algorithms able to learn both the structure and the parameters of a graphical model made it possible to overcome several limitations of logic-based systems and opened the way to more real-world applications. Evidence of this is the fact that, at the beginning of the new millennium, a company like Microsoft hired some of the most prominent researchers in the field of Bayesian Networks (in particular, David Heckerman, Jack Breese and Eric Horvitz) to open a new research division on AI and ML. That team invented some of the most successful AI applications of the period, including the world's first machine-learning spam filter, the Answer Wizard (which became the backend for Clippy, the small avatar that helped users of the Microsoft Office suite), the Windows Printer Troubleshooters and Microsoft's first machine-learning platform, now represented by Azure. They also released one of the first graphical tools for building and reasoning with Bayesian Networks, the MSBN (MicroSoft Bayesian Network) tool. Probabilistic Graphical Model and Bayesian Network methodologies started to be incorporated into systems by several companies, even if the collapse of the expectations raised during the expert system era discouraged advertising such new services as AI-based. However, the rapid success and positive impact of probability-based techniques drove a resurgence of interest in intelligent systems and AI in general.

At the same time, ML became more and more able to tackle big problems with the introduction of several statistical methods, such as Support Vector Machines (SVMs)19 and Ensemble Learning approaches.20 Ensemble approaches implement the so-called 'wisdom of the crowd' idea: if a given 'process' can make a given prediction with a certain level of performance, then taking into account several processes making the same prediction can in principle increase the overall performance on the task. The processes can be differentiated either by using different algorithms or by using different data. For example, one can execute a set of different algorithms, producing different prediction models, on the same set of data, and then output the prediction provided by the majority of the models, possibly weighting each vote with the corresponding confidence in the prediction, so that models predicting a result with high confidence have a greater weight in producing the final answer.21 Conversely, a different

17 D Koller and N Friedman, Probabilistic Graphical Models (MIT Press, 2009).
18 E Charniak, 'Bayesian networks without tears: making Bayesian networks more accessible to the probabilistically unsophisticated' (1991) 12(4) AI Magazine 50–63.
19 C Cortes and V Vapnik, 'Support-vector Network' (1995) 20(3) Machine Learning 273–297.
20 L Rokach, 'Ensemble-based Classifiers' (2010) 33(1-2) Artificial Intelligence Review 1–39.
21 Unlike the basic rule of the democratic 'real world', in the 'machine world' a standard democratic approach, where the vote/opinion of each subject (ie model) counts the same as the vote/opinion of any other subject, is usually not a useful or practical idea. Subjects (ie models or algorithms) that perform better and have more competence are entitled to count for more.

ensemble approach is to use the same algorithm or model several times, but on different sets of data. This kind of approach is strictly related to computational statistics methodologies such as the well-known bootstrap method,22 which represent an effective way of improving the final performance of a predictive model. The performance is usually measured by means of the accuracy metric, that is, the percentage of correct predictions over the whole set of predictions provided; collecting different opinions on the same set of data, or the same opinion on different chunks of similar data, is the ensemble learning way to increase the final accuracy. This new spring season had the merit of attracting researchers and practitioners from several different fields into AI and ML, and about ten years ago it eventually evolved into a summer; this started a totally new era, dominated by the exploitation of big data, which has witnessed an increasing number of successful results in very different areas. The next section discusses this in more detail.
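The confidence-weighted majority vote just described can be sketched in a few lines (the three 'models' below stand in for trained classifiers; labels and confidence values are invented for illustration):

    # Confidence-weighted majority voting over three models.
    # Each model reports (predicted_label, confidence in [0, 1]).
    predictions = [("cat", 0.9), ("dog", 0.6), ("cat", 0.7)]

    votes = {}
    for label, confidence in predictions:
        # Weight each vote by the model's confidence in its prediction.
        votes[label] = votes.get(label, 0.0) + confidence

    final = max(votes, key=votes.get)
    print(final, votes)  # 'cat' wins: total weight 1.6 against 0.6

Training the same model on different bootstrap samples of the data, the second ensemble strategy mentioned above, is what the bagging family of methods does.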

III.  Big Data and Deep Learning

In 2010, The Economist appeared with a telling title on its cover: 'The data deluge'. The cover showed a man carrying an upturned umbrella under a deluge of data; the umbrella captured part of that data, and a plant was watered with the collected rain. The big data era was starting, and AI became a way to exploit the huge quantity of data made available by electronic devices and human activities alike. The availability of big data is, however, only one source of the current success of AI methods; having a huge amount of data available is of no use unless analytical methods are available to extract useful information (we may say knowledge) from it. And such methods should be able to provide answers in a reasonable time. Fortunately, researchers in computer science and engineering have been able to provide both new analytical methodologies and significant improvements in computational resources. This means that difficult problems requiring a lot of data can be solved by resorting to specific and dedicated formalisms, and analysed by exploiting computational facilities (like multi-core CPUs or GPUs) which are by now far more powerful than those available just a few years ago.

Concerning modelling issues, old-fashioned neural networks have been extended into models now called deep neural networks, giving birth to a novel set of approaches called Deep Learning. We may say that deep learning methodologies represent a specific subset of ML, in which models based on neural network architectures are extended in such a way that dozens or even hundreds of different layers of neurons can be dealt with. Before the deep learning era, any effort to learn the parameters of networks with more than a few hidden layers was prohibitive

22 B Efron, 'Bootstrap methods: Another look at the jackknife' (1979) 7(1) The Annals of Statistics 1–26.

and attempts to do so were doomed to fail. Nowadays, the introduction of specific methods to deal with certain learning problems (like the vanishing or exploding gradient problems, which cause the learning algorithm to stop learning from data), together with the availability of very powerful computational resources (in particular, graphics processing units or GPUs, originally designed for image processing and now used for general computation as well), allows for the construction and learning of very deep models.

Moreover, deep learning makes it possible to address another important issue related to any ML approach: feature extraction. Before an ML model can be used, the relevant features of the problem to be solved must be extracted from the available data. For example, if we want to predict the severity level of a specific disease, we must obtain information such as the age and gender of the patient, as well as the symptoms, the results from blood and lab tests and other related features. The set of attributes relevant to the problem is usually extracted manually from the data before building the ML model, meaning that a specific feature engineering task is placed in the hands of the analyst. By contrast, the hidden layers of a deep network can automatically extract the relevant features from raw data. An example is given by the CNN (Convolutional Neural Network) model used for image interpretation. Given a set of pixels representing the image, each layer is able to abstract specific features of the image, such as edges and more and more specific subparts of the figure, until the network can recognise what is in the input image. There is no need to partition the original image into subparts, since this is done by the network itself.

A related concept is that of feature embeddings; this means finding a suitable numeric representation of the objects (images, sentences or any other kind of signal) that need to be dealt with. Since every object is eventually represented as a vector of numbers, numerical operations, which computers are very comfortable with, can be performed to implement very complex tasks like object detection, image classification and segmentation, interpretation of sentences in natural language, complex forms of information retrieval and so forth. For instance, the success of modern chatbots (intelligent artificial agents capable of holding natural language conversations with humans) is essentially due to the fact that, thanks to the word embedding process,23 clusters of similar words and relationships between words are formed; these relationships are then exploited to assign a specific role to the words in a sentence and to determine the meaning of the sentence itself.

The great interest in the deep learning approach is indeed justified by the good results obtained in several areas; in particular, deep learning performs best in the following tasks: image recognition (classification, object detection), language interpretation (written or spoken), intelligent games (chess, go), autonomous agents (self-driving cars, space rovers, robots) and precision medicine (CT scans or MRI interpretation, system biology).

23 D Jurafsky and JH Martin, Speech and language processing: an introduction to natural language processing, computational linguistics, and speech recognition (Prentice Hall, 2000) ch 6.
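The idea of feature embeddings can be made concrete with a short sketch: once words are mapped to vectors, semantic relatedness reduces to vector arithmetic. The three-dimensional vectors below are invented for illustration; real word embeddings have hundreds of dimensions and are learned from large text corpora.

    import numpy as np

    # Toy word embeddings (invented values; real ones are learned from data).
    emb = {
        "king":  np.array([0.9, 0.8, 0.1]),
        "queen": np.array([0.9, 0.7, 0.2]),
        "apple": np.array([0.1, 0.2, 0.9]),
    }

    def cosine(u, v):
        # Cosine similarity: close to 1 for related words, near 0 for unrelated.
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    print(cosine(emb["king"], emb["queen"]))  # high: similar words cluster
    print(cosine(emb["king"], emb["apple"]))  # low: unrelated words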

Very often, what matters in the above tasks is the final output; for instance, given a CT scan of the lungs of a patient, the goal is to select the region where a potential tumour is identified. In such cases, the final output is the desired answer. However, there are situations where a detailed explanation of the final answer is also needed. This is the case in decision support where, for the system's suggestion to be reasonably accepted, an explanation of why this is the answer should also be provided. Explainable AI (or XAI) is a new buzzword for the attempt to address the issues related to the construction of a reasonable explanation for a system's answer. As in the case of deep learning (where the basic neural network models were already available several years before), the problems and potential solutions of XAI are not new to the AI community. Indeed, providing suitable explanations was one of the main tasks required of an expert system; since such systems were usually based on a set of rules to be applied to the available data, the chain of rules activated during the reasoning process was sometimes considered an explanation for the conclusion. However, extracting a set of rules from a deep network is not easy, and it is not completely clear how to address the problem of explanation in such a setting. This opens the way for a return to the use of symbolic formalisms based on some form of logic and ontologies (eg, description logics24) that directly model the relationships between specific entities in the domain of interest. The combination of symbolic and sub-symbolic approaches is thus becoming of great interest in AI.

Another potential pitfall of deep learning approaches is the possibility of being 'fooled' by particular techniques of adversarial machine learning.25 For instance, given an image classifier with a very good accuracy in determining the objects in the image, a very small perturbation in the pixels of the image can produce completely different predictions, even if to the human eye the image appears unchanged. A well-known experiment showed that a deep network able to recognise the image of a panda with good accuracy became almost certain it was a gibbon after the input image was 'corrupted' with some disruption to the pixels, even though the resulting image looked completely unchanged to any human observer. The 'corrupted' image is what is called an adversarial example. Adversarial examples exploit the way ML algorithms (especially those based on the deep learning paradigm) work in order to disrupt the behaviour of the same or other AI algorithms. In the past few years, adversarial ML has become an active area of research as the role of AI continues to grow in many of the applications we currently use. As one can easily suppose, this is a big issue to consider when working with and designing systems based on deep learning, since it is not hard to imagine a malicious (or even criminal) use of

24 F Baader, I Horrocks and U Sattler, 'Description Logics', in F van Harmelen, V Lifschitz and B Porter (eds), Handbook of Knowledge Representation (Elsevier, 2007) ch 3.
25 I Goodfellow, P McDaniel and N Papernot, 'Making machine learning robust against adversarial inputs' (2018) 61(7) Communications of the ACM 56–66.

such techniques in different contexts (eg, in a military or defence application, in a legal setting, or even in a now quite common environment such as a self-driving car mistaking a stop sign for something else).
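The panda/gibbon experiment used the fast gradient sign method (FGSM). The following is a minimal PyTorch-style sketch, assuming a trained classifier model and an input image tensor x with true label y (the names and the eps value are illustrative):

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, eps=0.007):
        # Fast Gradient Sign Method: nudge every pixel a tiny step in the
        # direction that most increases the classification loss.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + eps * x.grad.sign()          # imperceptible perturbation
        return x_adv.clamp(0.0, 1.0).detach()    # keep pixel values valid

    # With a suitably small eps, model(x_adv) can confidently output a
    # different class, although x_adv looks identical to x to a human.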

IV.  Artificial Intelligence and Law

Since, as we have seen, modern AI and ML are now having an impact on several fields and disciplines, it is not surprising that the law also has to deal with AI. According to Harry Surden,26 the connection between AI and law involves 'the application of computer and mathematical techniques to make law more understandable, manageable, useful, accessible, or predictable'. This is not completely different from what we expect from AI in other disciplines, and in fact AI and law have a quite significant shared history, since similar kinds of requirements were often proposed as inputs to AI systems. However, AI is definitely successful in domains and tasks where there are specific underlying patterns, rules and precise or well-defined answers. This is particularly true for data-driven AI like deep learning approaches; the models can dredge up and mine useful knowledge from the available data, if there are patterns in it, while also accommodating the fact that data unobserved during training can be input to the system when it is asked to provide answers. Data-driven AI is only partially successful in areas that are value-laden, judgment-oriented and abstract, and that involve persuasion and argumentation. In such cases, knowledge-based systems, where data are integrated with additional knowledge in the form of semantic networks, ontologies and knowledge graphs, can play a significant role. Looking at the main applications that we may devise for AI in law, it is quite clear that this integration is of paramount importance.

An example of such integration is provided by the Case-Based Reasoning (CBR) paradigm,27 where lazy learning methods are complemented with specific knowledge sources. A lazy learning methodology learns to solve specific cases by storing all the past cases already solved by the system, together with the corresponding solutions. When a new case must be solved, the system retrieves from its memory the set of cases most similar to the current one, and uses the retrieved solutions as the basis for the solution of the current target case.28 This kind of precedent-based reasoning is similar to the pattern followed in judicial systems that are committed to precedent in order to reach a decision.

26 H Surden, 'Artificial Intelligence and Law: An Overview' (2019) 35 Georgia State University Law Review, available at: ssrn.com/abstract=3411869.
27 MM Richter and RO Weber, Case Based Reasoning: a textbook (Springer, 2013).
28 A Aamodt and E Plaza, 'Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches' (1994) AI Communications (IOS Press) 39–59.
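The retrieve step of CBR amounts to a similarity search over stored cases, as in this minimal sketch (the case features, outcomes and the crude similarity measure are invented for illustration; real legal CBR systems use far richer case representations):

    # Case-Based Reasoning, retrieve step: find the most similar past
    # case and reuse its solution as a starting point.
    past_cases = [
        ({"damages": 1, "written_contract": 1, "consumer": 0}, "claim upheld"),
        ({"damages": 0, "written_contract": 1, "consumer": 1}, "claim dismissed"),
    ]

    def similarity(a, b):
        # Fraction of matching features (a deliberately crude measure).
        return sum(a[k] == b[k] for k in a) / len(a)

    target = {"damages": 1, "written_contract": 0, "consumer": 0}
    features, solution = max(past_cases, key=lambda c: similarity(c[0], target))
    print(solution)  # input to the revise/adaptation step discussed below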

Indeed, a lot of research concerning the application of the CBR methodology to law practice has been conducted in the US since the very beginning.29 However, a complete CBR system cannot rely only on past data and precedents; it also needs specific knowledge in order to build the solution to the target case from the old ones. Ontologies, as well as rules and knowledge graphs, can be usefully adopted to this end.30 Even so, the realisation of this important part of a CBR system (called the revise or adaptation step) may require the implementation of a complete knowledge-based system. In the legal domain, this has often implied that a flexible interleaving of case-based and rule-based reasoning was a potential solution.31 In fact, as reported by Quattrocolo,32 it is impossible to translate all norms and acts into mathematical, computational rules, so the case-based model seems a better option; but since the basic difference in the value of precedent between common law and civil law is very relevant, a suitable combination of cases with rules and other structured forms of knowledge is key to the successful introduction of AI methodologies in this setting.

A further aspect related to the use of AI in law concerns the problem of performing judgment under uncertainty. As previously mentioned, one of the reasons for the failure of purely logic-based systems in the first period of AI was the unsuitability of such formalisms for properly dealing with uncertainty. This led to the probabilistic revolution headed by the introduction of Bayesian networks, and to a resurgence of interest in subjectivist or Bayesian forms of reasoning under uncertainty.33 This means that, when reasoning under uncertain conditions, and with the goal of providing a suggestion or a decision, an intelligent system must exploit any relevant knowledge it has, in addition to the available data; prior knowledge must be incorporated into the reasoning process, exactly as is done in Bayesian statistics.

The well-known prosecutor's fallacy34 is an example where a correctly designed AI system can provide the right answer, contrary to the erroneous common-sense conclusions reached by several (even potentially expert) humans. Suppose that a positive DNA match has been found for a given suspect (let us call him Fred) at the crime scene. Scientific evidence suggests that the probability

29 KD Ashley, 'Case-Based Reasoning and its Implications for Legal Expert Systems' (1992) 1 Artificial Intelligence and Law 113–208.
30 A Wyner, 'An Ontology in OWL for Legal Case-Based Reasoning' (2008) 16 Artificial Intelligence and Law 361–387.
31 EL Rissland and DB Skalak, 'Combining case-based and rule-based reasoning: a heuristic approach' (1989), Proceedings of the 11th International Joint Conference on Artificial Intelligence (IJCAI 89), Vol 1, Detroit (MI), 524–530.
32 S Quattrocolo, Artificial Intelligence, Computational Modeling and Criminal Proceedings: a framework for a European legal discussion (Springer, 2020).
33 It is worth noting that, despite the word 'Bayesian' in their name, Bayesian networks can be interpreted as frequentist models as well, since their definition does not rely on any specific interpretation of the concept of probability.
34 WC Thompson and EL Schumann, 'Interpretation of Statistical Evidence in Criminal Trials: The Prosecutor's Fallacy and the Defense Attorney's Fallacy' (1987) 2(3) Law and Human Behavior 167–187.

of having that kind of DNA profile for a random subject is very low, say 1 in 1,000 people. Since Fred has a positive match, the prosecutor's argument is that Fred must be guilty, since there is a very small probability that the match is positive by chance. The argument is a kind of common-sense mistake that many people would make; in particular, the source of the problem is that the estimated probability of innocence of 0.1 per cent actually refers to the probability of getting a positive match, given that Fred is innocent. That is, given that Fred is innocent, there is a 0.1 per cent probability that Fred gets a positive DNA match (as stated before, this is the probability of getting a positive match by chance). The estimate that the prosecutor should instead consider is the probability of Fred being innocent, given that the DNA match is positive. This is easily computable using the well-known Bayes formula, by also providing the prior probability (estimated before collecting any evidence about the DNA) that Fred is innocent (alternatively, guilty). For instance, if we have no specific reason to suspect Fred and the crime has been committed in a community of 10,000 people, we can easily compute the probability of Fred being innocent, given the positive DNA match, as about 91 per cent. This would completely demolish a prosecutorial argument based on a clear fallacy in the reasoning process. An AI agent correctly implementing this sort of Bayesian reasoning under uncertainty would not be prone to similar fallacies.

Moreover, the impact of prior knowledge is something that is very often either neglected or overestimated in people's common-sense reasoning. If Fred is a good citizen with no criminal record, and this is the situation for almost all the people in the community of reference, then adopting a uniform prior distribution for the prior probability of Fred being guilty can be reasonable; however, if we know of a history of similar crimes committed by Fred, then this prior probability should adequately reflect that situation. The problem of eliciting the right priors for a specific problem is still a field of active research, but some methods can be successfully exploited in several practical situations, leading to the design of AI systems that can avoid mistakes such as the one discussed above. It is also worth mentioning that this is not an abstract or 'academic' problem, since there have been real cases where this kind of fallacy has unfortunately been reported, in particular the Sally Clark case35 in a UK court and the Lucia de Berk case36 in a Dutch court. In the first case, where Sally Clark was accused of the murder of her two infant sons, some simple consideration of the principles underlying probabilistic graphical models (and Bayesian networks in particular) could have avoided several wrong conclusions based on the wrong set of assumptions, such as the failure to consider a common cause in the sudden deaths of the infants.

35 CJ Bacon, 'The Case of Sally Clark' (2003) 96(3) Journal of the Royal Society of Medicine 105.
36 RD Gill, P Groeneboom and P de Jong, 'Elementary statistics on trial (the case of Lucia de Berk)' (2019), arXiv, arxiv.org/abs/1009.0802.
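The 91 per cent figure is a direct application of Bayes' formula, as the short computation below shows (assuming a uniform prior over a community of 10,000, a random match probability of 1/1,000, and that the true culprit always matches):

    # Prosecutor's fallacy: P(match | innocent) is not P(innocent | match).
    p_match_given_innocent = 1 / 1_000    # random DNA match probability
    community = 10_000
    p_guilty = 1 / community              # uniform prior over the community
    p_innocent = 1 - p_guilty

    # Bayes' formula; the true culprit is assumed to match with certainty.
    p_match = 1.0 * p_guilty + p_match_given_innocent * p_innocent
    p_innocent_given_match = p_match_given_innocent * p_innocent / p_match

    print(round(p_innocent_given_match, 3))  # ~0.909, ie about 91 per cent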

Finally, another important issue that comes with purely data-driven AI is the problem of fairness. A given algorithm is said to be fair, or to have fairness, if its results do not depend on certain given variables, particularly those considered sensitive, such as traits of individuals which should not correlate with the outcome (ie, gender, ethnicity, sexual orientation, disability, political preferences, etc). Machine learning methods that rely only on data can in principle be biased, that is, subject to any bias present in the data itself. This is especially significant in applications of AI to law. For instance, US courts quite commonly use AI risk assessment algorithms designed to consider the details of a defendant's profile and return a recidivism score estimating the likelihood that he or she will reoffend; if the algorithm has been trained on biased data, then such biases will reappear in the score that the algorithm computes (for instance, by inflating the likelihood of recidivism for specific ethnic communities, as reported by the news organisation ProPublica37).

Law, then, is an area that can be genuinely challenging for AI, and the use of intelligent algorithms in this field should be handled carefully, starting from a suitable classification of AI and law users. In particular, Surden proposes distinguishing the needs of different categories of users:

1. administrators of law (judges): they need support for sentencing and bail decisions;
2. practitioners of law (attorneys): they need to predict legal outcomes (eg, CBR), predictive coding (retrieval of documents relevant to a litigation), or document assembly support (for contract review and negotiation);
3. users governed by law (citizens, companies): they need law compliance checking, legal self-help systems, or computable contracts.

Keeping these differences in mind definitely helps in focusing on the right AI methodology, and knowing its strengths and weaknesses encourages a correct application of the methods and models in the proper context.

V. Conclusions

This chapter has presented a historical excursus of AI and ML, trying to point out the ideas, the expectations, and both the possibilities and the limits of the related methodologies. The viewpoint is that of computer science, the science of which AI and ML are a part; we have also briefly discussed the social impact of intelligent systems, focusing on applications of AI in law. Today we are experiencing a new industrial revolution guided by AI, and its impact on everyday life is beginning to be felt. New professional profiles with interdisciplinary competences will be needed both to devise new applications and to govern such an important revolution. Last but not least, ethics and values must be taken seriously into account in this

37 www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (last accessed 7 Apr 2021).

framework, in order to avoid problems concerning safety, transparency, explainability and the preservation of human values such as rights, cultural differences and judicial fairness. To this end, the Asilomar AI Principles of the Future of Life Institute38 promote specific principles that should be satisfied for AI to be beneficial. They are organised into three areas: research, ethics and values, and longer-term issues. Research issues relate to the goals of AI (to create beneficial, not undirected, AI), to the way funding and investment should be handled, to the correct link between AI science and policy makers, to the fostering of a new culture of cooperation and trust, and to the avoidance of corner-cutting on safety standards. But the emphasis of the manifesto is on the ethics and values principles that should be promoted and firmly upheld when designing and developing intelligent systems that impact our everyday life: safety (an AI system must not harm); failure transparency (we must be able to know the reason for a failure); judicial transparency (any automated decision-making process should be auditable by a competent human authority); responsibility of designers and implementers; alignment with human values such as dignity, freedom, rights and cultural diversity; personal privacy; liberty (AI applications must not curtail real or perceived personal liberty); the sharing of benefits and economic prosperity among people; human control of every AI activity or application; non-subversion (the power conferred by control of highly advanced AI systems should respect and improve, rather than subvert, the social and civic processes on which the health of society depends); and, finally, the avoidance of an arms race in lethal autonomous weapons (the so-called AI arms race). To complete the picture, longer-term issues concern risk assessment and mitigation, careful management and planning of the impact of AI applications (especially those promising a real revolution in human relationships, jobs and activities), the avoidance of strong assumptions regarding upper limits on future AI capabilities (the so-called capability caution principle), strict safety and control measures related to the possible self-improvement capabilities of AI systems, and, finally, the common good principle: AI applications and systems should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organisation.

Taking into account what AI has been, what AI currently is, and all such principles, we may hope to design and build a future for AI that can rid us of the fears usually associated with the science-fiction scenario of a society where humans have lost control of their activities, or even of their lives. In the future we hope for, intelligent agents (not necessarily robots, but also simple programs with no physical interaction with the external environment) will cooperate with humans, helping them to solve old and new problems in a more efficient and comfortable way.



38 futureoflife.org/ai-principles.

2
Artificial Intelligence, Contracting and Contract Law: An Introduction
MARTIN EBERS*

I. Introduction

We are witnessing a major revolution in the ways in which contracts are initiated, negotiated, concluded, performed and enforced. One of the most significant trends in the field of contracting and contract law is the use of Artificial Intelligence (AI) techniques, such as machine learning (ML) and natural language processing (NLP) – deployed by many companies during the whole lifecycle of a contract to make contracting more efficient.

This chapter gives an overview of the use of AI in contracting (II.) and the manifold challenges that arise from it under contract law – ranging from freedom of contract and party autonomy (III.), pre-contractual duties (IV.), formation of contract (V.), defects in consent (VI.), incorporation of standard terms (VII.), interpretation (VIII.), to fairness of contracts (IX.), contractual liability (X.) and the question of whether contract law itself might eventually become superfluous in the age of AI (XI.).

In order to narrow down the topic, some clarifications are necessary. First, the purpose of this chapter is not to provide a detailed exposition of the numerous legal issues that arise under contract law when AI systems are used. The literature on this topic merits further analysis, but such an endeavour would go far beyond the scope of this chapter. Rather, the focus is on providing an overview of the challenges that arise under contract law when AI systems are used. Second, it should be noted that the term 'artificial intelligence' is used here in a narrow sense1 to refer primarily to data-driven technologies such as ML, including NLP as a subfield of ML which refers to the ability of a computer to understand,

* This work was supported by Estonian Research Council grant no PRG124. All internet sources were last accessed on 8 March 2022.
1 In contrast, the proposal for an 'Artificial Intelligence Act' (hereinafter: AIA) published by the European Commission in April 2021 uses a much broader definition; European Commission, Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), COM(2021) 206 final. Art 3(1) AIA in conjunction with Annex I covers almost every computer program,

analyse, manipulate, and potentially generate human language. Data-driven technologies are fundamentally different from earlier forms of automation. In the past, many algorithmic systems, especially expert systems, relied on rule-based conditional logic operations using symbolic rules to represent and infer knowledge. By contrast, the current wave of successful AI applications is rooted in data-learned knowledge, which relies less on hand-coded human expertise than on knowledge learned from data. Instead of programming machines with specific instructions to accomplish particular tasks, ML algorithms enable computers to learn from 'training data' and experience.

Based on this notion, contracts using AI systems must be distinguished from smart contracts. The term 'smart contracts', which famously can be traced back to Nick Szabo,2 refers to a special protocol, the distributed ledger technology (DLT), especially blockchain, intended to contribute, verify or implement the negotiation or performance of the contract in a trackable and irreversible manner without the interference of third parties.3 Accordingly, smart contracts are based on a completely different technology. While it is true that there have been increasing efforts to merge blockchain and AI in recent years,4 the concept of smart contracts still very much refers to self-executing promises based on a blockchain rather than AI-driven contracts.

Finally, the use of AI systems by a company during the 'life cycle' of a contract must be delineated from the constellation in which AI systems themselves are the subject of the contract.5 The latter case concerns contracts where providers offer 'AI as a Service' (AIaaS)6 or AI-based (smart) products/services7 – and the related

since the definition of AI refers not only to machine learning, but also to logic and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems. However, such a definition is overly broad; cf M Ebers et al., 'The European Commission's Proposal for an Artificial Intelligence Act – A Critical Assessment by Members of the Robotics & AI Law Society (RAILS)' (2021) 4(4) Multidisciplinary Scientific Journal 589–603, doi.org/10.3390/j4040043.
2 According to Nick Szabo, a smart contract is a 'computerized transaction protocol that executes the terms of a contract. The general objectives of smart contract design are to satisfy common contractual conditions (such as: payment terms, liens, confidentiality, and enforcement etc.), minimize exceptions both malicious and accidental, and minimize the need for trusted intermediaries like banks or other kind of agents'; N Szabo, 'Smart Contracts', www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart.contracts.html.
3 For an overview of the different smart contract definitions, see M Finck, 'Grundlagen und Technologie von Smart Contracts', in M Fries and BP Paal (eds), Smart Contracts (Mohr Siebeck, 2019) 1–12.
4 On the convergence of blockchain and AI 'to make smart contracts smarter' see, in this book, ch 3, under II.C. Moreover, see The European Union Blockchain Observatory & Forum, Convergence of Blockchain, AI and IoT, v.1.1, 21 April 2020, www.eublockchainforum.eu/sites/default/files/report_convergence_v1.0.pdf.
5 S Grundmann and P Hacker, 'Digital Technology as a Challenge to European Contract Law – From the Existing to the Future Architecture' (2017) 13(3) European Review of Contract Law 255–293, 264.
6 Typically, AIaaS providers offer their customers access to pre-built AI models and services via APIs (application programming interfaces). However, usually, AIaaS is offered only to commercial organisations and public sector bodies, and not to consumers. Cf S Parsaeefard et al., 'Artificial Intelligence as a Services (AI-aaS) on Software-Defined Infrastructure', 2019 IEEE Conference on Standards for Communications and Networking (CSCN), 2019, 1–7, doi.org/10.1109/CSCN.2019.8931372; Javadi, Cloete, Cobbe, Lee and Singh, 'Monitoring Misuse for Accountable "Artificial Intelligence as

question as to what requirements should be placed on contractual conformity when a lack of conformity exists, and under what preconditions the trader is then liable to the customer.8 This topic will not be explored further. Instead, this chapter focusses on the internal use of AI systems to initiate, negotiate, conclude, perform or enforce contracts and its manifold implications for contract law.

II.  Contracting in the Age of AI

AI techniques, such as ML and NLP, are used by many companies during the entire 'lifecycle' of a contract to make contracting more efficient.

At the pre-contractual stage, AI-driven profiling techniques provide better insights into customers' behaviour, preferences, and vulnerabilities. Companies can not only tailor their advertising campaigns9 but also their products and prices10 specifically to suit the customer profile, credit institutions can use the profiles for credit ratings,11 and insurance companies can better assess the insured risk.12 In particular, AI-driven big-data profiling techniques give companies the opportunity to gain a more profound understanding of customers' personal circumstances, behavioural patterns, and personality, including future preferences. These insights enable companies to tailor not only their advertisements (so-called 'online behavioural advertising')13 but also their contracts14 in ways that maximise their expected utility.

a Service"' (AIES 2020: AAAI/ACM Conference on AI, Ethics and Society, New York, 7–8 February 2020); M Berberich and A Conrad, '§ 30 Plattformen und KI' in M Ebers et al. (eds), Künstliche Intelligenz und Robotik – Rechtshandbuch (CH Beck, 2020) 930ff, 938ff.
7 Eg self-driving cars, vacuum cleaners, surveillance equipment, health apps, voice assistants, and translation apps.
8 Cf thereto (for consumer contracts) M Ebers, 'Liability for Artificial Intelligence and EU Consumer Law' (2021) 12 Information Technology and Electronic Commerce Law (JIPITEC) 204–221, 62ff, www.jipitec.eu/issues/jipitec-12-2-2021/5289.
9 Cf R Calo, 'Digital Market Manipulation' (2014) 82 The George Washington Law Review 995, 1015ff, dx.doi.org/10.2139/ssrn.2309703; N Helberger, 'Profiling and Targeting Consumers in the Internet of Things – A New Challenge for Consumer Law' in R Schulze and D Staudenmayer (eds), Digital Revolution: Challenges for Contract Law in Practice (Nomos, 2016) 135–164, doi.org/10.5771/9783845273488.
10 FZ Borgesius and J Poort, 'Online Price Discrimination and EU Data Privacy Law' (2017) 40 Journal of Consumer Policy 347–366, doi.org/10.1007/s10603-017-9354-z.
11 Cf DK Citron and FA Pasquale, 'The Scored Society: Due Process for Automated Predictions' (2014) 89 Washington Law Review 1; T Zarsky, 'Understanding Discrimination in the Scored Society' (2014) 89 Washington Law Review 1375.
12 Cf R Swedloff, 'Risk Classification's Big Data (R)evolution' (2014) 21 Connecticut Insurance Law Journal 339; MN Helveston, 'Consumer Protection in the Age of Big Data' (2016) 93(4) Washington University Law Review 859.
13 Cf N Fourberg et al., 'Online advertising: the impact of targeted advertising on advertisers, market access and consumer choice', Study requested by the IMCO committee of the European Parliament, PE 662.913 – June 2021, www.europarl.europa.eu/RegData/etudes/STUD/2021/662913/IPOL_STU(2021)662913_EN.pdf.
14 O Bar-Gill, 'Algorithmic Price Discrimination When Demand Is a Function of Both Preferences and (Mis)perceptions' (2019) 86(2) University of Chicago Law Review 217. Some scholars suggest that AI-driven big data analytics can even allow for personalised legal rules that match individual needs and

Of particular concern are two aspects. First, AI techniques can be used by companies for first-degree price discrimination.15 Sellers are increasingly utilising big data and sophisticated algorithms to assess a customer's willingness-to-pay for their goods or services and to charge each customer a personalised price that is as close as possible to the maximum price that each customer is willing to pay. Personalised pricing can be both beneficial and detrimental. For instance, personalised pricing allows firms to set a lower price and profitably sell to customers who would not be willing to pay the uniform price that firms would otherwise set. However, personalised pricing can also be inequitable, because for some customers it will lead to higher prices than a uniform price. Moreover, price discrimination can help to monopolise a market and to make market entry unattractive for competitors.16

The second aspect concerns the use of AI systems to exploit the behavioural biases of customers: AI-driven big-data profiling techniques enable companies to manipulate the choices of customers in a predetermined direction and even exploit their biases,17 for example, by offering products or services exactly when customers (due to the time of day, a previous event or their personal situation) can only make suboptimal decisions, or by creating certain digital choice architectures and dark patterns. The Cambridge Analytica case is a telling example of how AI-driven big data profiling can nudge undecided voters in order to gain political power.

Apart from the pre-contractual phase, AI contracting tools and chatbots are also used to govern the contracting process itself, especially for negotiating and drafting contracts.18 Whereas the first generation of Negotiation Support Systems (NSSs) were mostly template-based and did not explicitly use AI techniques, the current systems based on ML can advise the parties about their respective 'Best Alternative to a Negotiated Agreement' (BATNA) and hence facilitate the negotiating process.19 In the field of Alternative Dispute Resolution,20 some providers

15 First-degree price discrimination occurs when individual customers receive different prices based on their individual preferences. In contrast, second-degree price discrimination refers to different prices charged to different buyers depending on the quantity or quality of the goods or services purchased, whereas third-degree price discrimination happens when different groups of consumers receive different prices, for example in the case of coupons. Cf AC Pigou, The economics of welfare, 4th edn (Macmillan & Co., 1932); Borgesius and Poort (n 10) 351f.
16 Borgesius and Poort (n 10) 354.
17 Behavioural economics has been able to show that humans have only limited rationality which can be exploited by choice architectures, for example, by presenting desired options as a preselected default (status quo bias), by adding unattractive options (decoy effect), by positioning an option earlier/later (primacy/recency effect), by positioning an option in the middle (middle option bias), by indicating a sum of money (anchoring effect) or by personalising the choice environment for individual users according to group profiles.
See RH Thaler and CR Sunstein, Nudge: Improving Decisions About Health, Wealth, And Happiness (Penguin Books, 2009).
18 For an overview of contract drafting solutions, cf ch 6 in this book.
19 J Zeleznikow, 'Using Artificial Intelligence to provide Intelligent Dispute Resolution Support' (2021) 30 Group Decision and Negotiation 789–812.
20 For a discussion of the use of AI systems in alternative dispute resolution schemes, cf M Ebers, 'Automating Due Process – The Promise and Challenges of AI-based techniques in Consumer Online

are also offering blind bidding processes in which an automated algorithm evaluates bids from the parties, assessing whether they are within a prescribed range to settle the case – a technique which can also be used for negotiating contracts (a sketch follows the footnotes below).21 Additionally, argumentation support tools as well as decision support systems might be helpful in negotiations. While an argumentation support tool helps the parties (often by means of a dialogue system) to improve the structure of the information exchanged between them, decision support systems include rule-based or case-based reasoning and ML, including neural networks, suggesting the best strategy for optimal outcomes.

AI contracting tools can also be used for algorithmic (automated) decision-making and the formation of contracts.22 Nowadays, such systems can be found not only in financial markets (eg for algorithmic trading), but also in other markets (eg for sales, where an algorithmic system – and sometimes even a self-learning AI system – is contracting on behalf of a company).23

During the performance phase, AI systems facilitate and automate the execution of transactions, assisting and simplifying real-time payments and managing supply chain risks. They also play a crucial role in contract management and due diligence.24 Companies can review and manage contracts faster and more accurately by identifying terms and clauses that are suboptimal, and by flagging individual contracts based on firm-specified criteria. Finally, at the post-contractual phase, AI systems can help to litigate legal disputes by handling customer complaints and resolving online disputes,25 or predicting the outcome of court proceedings.26

Dispute Resolution', in X Kramer et al. (eds), Frontiers in Civil Justice: Privatisation, Monetisation and Digitisation (Edward Elgar Publishing, 2022) (forthcoming).
21 AR Lodder and EM Thiessen, 'The Role of Artificial Intelligence in Online Dispute Resolution', Proceedings of the UNECE Forum on ODR 2003, www.mediate.com/Integrating/docs/lodder_thiessen.pdf.
22 From the technical perspective, cf (in chronological order) especially the following books: S Ossowski (ed), Agreement technologies (Springer, 2013); M Rovatsos et al. (eds), Multi-agent systems and agreement technologies – 13th European Conference, EUMAS 2015, and Third International Conference, AT 2015, Athens, Greece, December 17–18, 2015, Revised Selected Papers (Springer, 2016); NC Pacheco et al. (eds), Multi-agent systems and agreement technologies – 14th European Conference, EUMAS 2016, and 4th International Conference, AT 2016, Valencia, Spain, December 15–16, 2016, Revised Selected Papers (Springer, 2017); M Lujak (ed), Agreement technologies – 6th International Conference, AT 2018, Bergen, Norway, December 6–7, 2018, Revised Selected Papers (Springer, 2019).
23 For an explanation of how AI is involved in the process of contract formation today and how it may be involved in the future, cf ch 4 in this book.
24 Cf ch 5 in this book; see also R Schuhmann, 'Quo Vadis Contract Management? Conceptual Challenges Arising from Contract Automation' (2020) 16(4) European Review of Contract Law 489–510.
25 The most prominent example is eBay's ODR Resolution Center, which reportedly handles (automatically) over 60 million disputes annually; AJ Schmitz and C Rule, The New Handshake: Online Dispute Resolution and the Future of Consumer Protection (ABA Publishing, 2017) 53; C Rule and C Nagarajan, 'Leveraging the Wisdom of Crowds: The eBay Community Court and the Future of Online Dispute Resolution', ACResolution Magazine (Winter 2010).
26 KD Ashley, 'A Brief History of the Changing Roles of Case Prediction in AI and Law' (2019) 36(1) Law in Context 93–112; M van der Haegen, 'Quantitative Legal Prediction: The Future of Dispute Resolution?', in J De Bruyne and C Vanleenhove (eds), Artificial Intelligence and the Law (Intersentia, 2021), doi.org/10.1017/9781839701047.005, 73–99; DM Katz, 'Quantitative Legal Prediction – or – How I Learned to Stop Worrying and Start Preparing for the Data Driven Future of the Legal Services Industry' (2013) 62 Emory Law Journal 909–966; M Scherer, 'Artificial Intelligence and Legal Decision-Making: The Wide Open?' (2019) 36(5) Journal of International Arbitration 539–574, 547ff.
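The blind bidding mechanism mentioned above reduces to a simple settlement rule, sketched below (the 30 per cent spread, the settlement-at-midpoint rule and the figures are illustrative assumptions; commercial systems vary):

    # Blind bidding (a sketch): each party submits a confidential figure;
    # if the figures are close enough, the case settles automatically.
    def blind_bidding(offer: float, demand: float, spread: float = 0.30):
        if demand <= offer or (demand - offer) / demand <= spread:
            return round((offer + demand) / 2, 2)  # settle at the midpoint
        return None  # no settlement; the bids remain confidential

    print(blind_bidding(8_000, 10_000))  # 9000.0: within range, settles
    print(blind_bidding(4_000, 10_000))  # None: bids too far apart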


III.  Freedom of Contract and Party Autonomy

Historically, contract law is based on the principle of freedom of contract and party autonomy. As a rule, a natural or legal person should be free to decide whether or not to contract, with whom to contract, and to agree freely on the terms of their contract. The underlying idea is the assumption that freedom of contract – under the condition that the parties to a contract are fully informed and in an equal bargaining position – leads to justice: 'Qui dit contractuel, dit juste'.27 However, even classic contract law recognises exceptions to these principles, for example, if the contract was concluded as a result of mistake, fraud, duress or the exploitation of a party's circumstances to obtain an excessive advantage.28 In modern times, the growth of consumer protection as well as rent and employment legislation has restricted the parties' freedom even further in order to restore 'genuine' contractual freedom.29 To this end, specific areas of law recognise certain restrictions on the freedom of contract in order to remedy inequality in the bargaining power of the parties, information asymmetries or unacceptable forms of discrimination. Such interventions are most common in consumer law; however, some legal systems even intervene in contracts between businesses, particularly when one party is a small business lacking bargaining power.30

The rise of AI in contracting processes poses the question of whether contract law needs to provide additional or new corrective mechanisms to address the new power imbalances caused by AI systems. Obviously, the use of AI systems can exacerbate existing power asymmetries, as parties to a contract can leverage them to draft lopsided contracts and gain better bargaining power. From the standpoint of consumers and other customers, one of the most troubling developments is the growing information asymmetry between providers and customers. In many cases, customers remain oblivious to personalised advertisements, information, prices, or contract terms based on big data profiling. If, for example, a business refuses to conclude a contract or makes an offer with unfavourable conditions because of a certain customer score, the customers

27 A Fouillée, La science sociale contemporaine (Hachette et cie, 1885) 410.
28 J Cartwright and M Schmidt-Kessel, 'Defects in Consent: Mistake, Fraud, Threats, Unfair Exploitation' in G Dannemann and S Vogenauer (eds), The Common European Sales Law in Context: Interactions with English and German Law (OUP, 2013) 373–422.
29 PS Atiyah, The Rise and Fall of Freedom of Contract (Clarendon Press, 1979).
30 This applies in particular to the legislation on unfair contract terms, which in many countries also applies to contracts between businesses; for a comparative overview in the EU cf M Ebers, 'Unfair Contract Terms Directive (93/13)', in H Schulte-Nölke et al. (eds), EC Consumer Law Compendium (Sellier European Law Publishers, 2008) 197–261, 221ff.

This asymmetry arises not only because the algorithms used are well-guarded trade secrets, but also because the specific characteristics of many AI technologies31 – such as opacity (the ‘black box effect’), complexity, unpredictability and semi-autonomous behaviour – can make the effective enforcement of rights more difficult, as the decision cannot be traced and therefore cannot be checked for legal compliance.

On the other hand, consumers and other customers can also use algorithms to make and execute decisions by communicating directly with other systems through the internet. Such algorithmic systems (eg shopping bots) can significantly reduce search and transaction costs, avoid consumer biases, overcome manipulative marketing techniques, enable more rational and sophisticated choices, and create or strengthen buyer power.32 Moreover, Legal Tech companies using intelligent algorithmic systems can also help consumers to enforce their rights. For example, companies such as Do-Not-Pay,33 Flightright34 or RightNow35 help consumers on a large scale to enforce small claims which otherwise would not have been brought to court due to relatively high legal fees and the well-known problem of ‘rational apathy’. Additionally, consumer organisations and public watchdogs can use Legal Tech services to monitor and enforce existing consumer law,36 for instance by detecting unfair contract terms in online contracts.37 As a result, Legal Tech offers the possibility to strengthen the rule of law, to reduce existing cost barriers, to open up latent markets and to create new areas of competition.38

Consequently, (contract) law need not always proffer instruments for balancing power and information asymmetries in favour of customers. The more self-help is used by customers, the less corrective intervention in the market mechanism is required. However, such self-help is rather unlikely in the case of consumers who purchase goods or services for private purposes. In practice, only technologically literate people and companies can be expected to use software tools to enhance their bargaining power and improve their decision-making processes.39

31 European Commission, White Paper ‘On Artificial Intelligence – A European approach to excellence and trust’, COM(2020) 65 final, 14. 32 MS Gal and N Elkin-Koren, ‘Algorithmic Consumers’ (2017) 30(2) Harvard Journal of Law & Technology 309–352. 33 www.donotpay.com. 34 www.flightright.de. 35 www.rightnow.de. 36 G Contissa et al., ‘Towards Consumer-Empowering Artificial Intelligence’, Proceedings of 27th IJCAI Conference 2018, 5150–5157, doi.org/10.24963/ijcai.2018/714. 37 http://claudette.eui.eu/about/index.html. 38 Cf M Ebers, ‘Legal Tech and EU Consumer Law’ in LA DiMatteo et al. (eds), The Cambridge Handbook of Lawyering in the Digital Age (Cambridge University Press, 2021) 195–219, ssrn.com/abstract=3694346. 39 Likewise, D Schäfers, ‘Rechtsgeschäftliche Entscheidungsfreiheit im Zeitalter von Digitalisierung, Big Data und Künstlicher Intelligenz’ (2021) 221 Archiv für die civilistische Praxis (AcP) 32–67, 51.
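To make the idea of automated unfair-terms detection more concrete, here is a minimal Python sketch that flags suspicious clauses by simple pattern matching. It is a hypothetical illustration only: tools of the kind cited above (such as the CLAUDETTE project) rely on machine learning classifiers trained on annotated corpora, and the patterns below are invented for the example.

    import re

    # Illustrative patterns often associated with potentially unfair clauses
    # (invented for this example; real systems learn such signals from
    # annotated corpora of terms of service).
    UNFAIR_PATTERNS = [
        r"we may (modify|terminate) .* at any time",
        r"sole discretion",
        r"waive .* right to (sue|class action)",
    ]

    def flag_clauses(clauses):
        """Return the clauses that match at least one suspicious pattern."""
        return [c for c in clauses
                if any(re.search(p, c, re.IGNORECASE) for p in UNFAIR_PATTERNS)]

    print(flag_clauses([
        "We may terminate your account at any time.",
        "Payment is due within 30 days of invoice.",
    ]))  # -> ['We may terminate your account at any time.']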


IV. Pre-contractual Duties

A. Contract Law, Fair Trading Law and Data Protection Law

All over the world, legal systems have established pre-contractual duties that (potential) parties to a contract have to observe before entering into a contract.40 In the European Union, many consumer law directives establish pre-contractual duties for businesses – by prohibiting unfair commercial practices, such as misleading advertisements, or by establishing information duties – in order to allow the consumer to make an informed decision before concluding a contract.41 In addition, the General Data Protection Regulation (GDPR)42 protects data subjects against having their personal data misused by companies. Against this backdrop, the question arises to what extent these legal regimes can help to mitigate the two problems discussed above – price discrimination and the exploitation of customer biases.

When assessing this question, it is important to distinguish between contract law on the one hand and other bodies of law – such as fair trading law (UCPD)43 and data protection law (GDPR) – on the other. The UCPD primarily aims to protect market participants against unfair commercial practices; however, the Directive is not designed to provide contractual remedies to individual consumers in the event that an unfair commercial practice leads to the conclusion of a contract.44 In the same vein, the GDPR protects the fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data (Article 1(2) GDPR), but does not deal with the conditions for contracts to be legally binding.

40 For a comparative overview, cf R Sefton-Green (ed), Mistake, Fraud and Duties to Inform in European Contract Law (Cambridge University Press, 2005); H Fleischer, Informationsasymmetrie im Vertragsrecht (CH Beck, 2001). See also R Schulze et al. (eds), Informationspflichten und Vertragsschluss im Acquis communautaire – Information Requirements and Formation of Contract in the Acquis Communautaire (Mohr Siebeck, 2003). 41 For a detailed analysis of the pre-contractual duties in EU Private Law, cf M Ebers, Rechte, Rechtsbehelfe und Sanktionen im Unionsprivatrecht (Mohr Siebeck, 2016) 798ff; C Busch, Informationspflichten im Wettbewerbs- und Vertragsrecht (Mohr Siebeck, 2008); T Wilhelmsson and C Twigg-Flesner, ‘Pre-contractual information duties in the acquis communautaire’ (2006) 2(4) European Review of Contract Law 441–470. 42 Regulation 2016/679 of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L119/1. 43 Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC and 2002/65/EC of the European Parliament and of the Council and Regulation (EC) No 2006/2004 of the European Parliament and of the Council [2005] OJ L149/22 (Unfair Commercial Practices Directive) (UCPD). 44 According to Art 3(1) UCPD 2005/29, the Directive is ‘without prejudice to contract law and, in particular, to the rules on the validity, formation or effect of a contract’. Additionally, recital (9) UCPD clarifies that the Directive ‘is without prejudice to individual actions brought by those who have been harmed by an unfair commercial practice’. Although Art 3(5) Modernization Directive 2019/2161 introduced a new Art 11a into the UCPD making available a right to damages and, where relevant, a right to price reduction or unilateral termination of the contract, the conditions under which the consumer can make use of these remedies remain largely unclear; cf MBM Loos, ‘The Modernization of European Consumer Law (Continued): More Meat on the Bone After All’ (2020) 28(2) European Review of Private Law 407–423, 408f.

While both the UCPD and the GDPR,45 as well as other tools (eg opt-out rights46), can help mitigate the problems of price discrimination and exploitative contracts, neither legal regime addresses the contractual remedies that apply in these cases.

B. Pre-contractual Duties and Price Discrimination

From a contract law perspective, price discrimination primarily raises the question of whether companies must disclose the application of personalised prices. Such an obligation indeed seems appropriate.47 EU consumer law already recognises such an obligation, albeit with certain exceptions. Article 6(1)(ea) Consumer Rights Directive (CRD) 2011/83/EU,48 as amended by Modernization Directive 2019/2161/EU,49 requires the trader to inform the consumer ‘that the price has been personalised on the basis of an automated decision making process’. At the same time, however, recital (45) Modernization Directive 2019/2161 states that ‘[t]his information requirement should not apply to techniques such as “dynamic” or “real-time” pricing that involve changing the price in a highly flexible and quick manner in response to market demands when those techniques do not involve personalisation based on automated decision-making’.

45 For an analysis of the UCPD 2005/29 and the GDPR see M Ebers, ‘Beeinflussung und Manipulation von Kunden durch “Behavioral Microtargeting”’ (2018) MultiMedia und Recht 423; F Galli, ‘Online Behavioural Advertising and Unfair Manipulation Between the GDPR and the UCPD’ in M Ebers and M Cantero (eds), Algorithmic Governance and Governance of Algorithms (Springer, 2020) 109–135; Helberger (n 9) 135ff; E Mik, ‘The Erosion of Autonomy in Online Consumer Transactions’ (2016) 8(1) Law, Innovation and Technology 1, ink.library.smu.edu.sg/sol_research/1736. 46 In favour of a right of the consumer to opt out of personalised pricing by clicking on a stop button: G Sartor, ‘New aspects and challenges in consumer protection’, Study requested by the IMCO committee, PE 648.790 – April 2020, 35, www.europarl.europa.eu/RegData/etudes/STUD/2020/648790/IPOL_STU(2020)648790_EN.pdf; G Wagner and H Eidenmüller, ‘Down by Algorithms? Siphoning Rents, Exploiting Biases, and Shaping Preferences: Regulating the Dark Side of Personalized Transactions’ (2019) 86(2) University of Chicago Law Review 581–609. 47 Wagner and Eidenmüller (n 46). 48 European Parliament and Council Directive 2011/83/EU of 25 October 2011 on consumer rights, amending Council Directive 93/13/EEC and Directive 1999/44/EC of the European Parliament and of the Council and repealing Council Directive 85/577/EEC and Directive 97/7/EC of the European Parliament and of the Council [2011] OJ L304/64. 49 European Parliament and Council Directive 2019/2161 of 27 November 2019 amending Council Directive 93/13/EEC and Directives 98/6/EC, 2005/29/EC and 2011/83/EU of the European Parliament and of the Council as regards the better enforcement and modernization of Union consumer protection rules [2019] OJ L328/7.

Moreover, Article 6(1)(ea) CRD does not require the trader to reveal the algorithm, nor how the price has been adjusted for a particular consumer.50 Consequently, it is unlikely that the consumer will notice any price discrimination. If, on the other hand, such information obligations were recognised under (consumer) contract law, price discrimination could be sanctioned by damages: consumers would then have to be placed in the position they would have been in had they been made aware of the personalised pricing. If consumers can show that, upon being properly informed about the personalised prices, they would have refrained from concluding the contract and instead would have concluded another contract (with the same company or another provider) at a lower price, the trader would have to reimburse the consumer for the difference between the personalised and the lower price.
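The damages logic just described reduces to a simple calculation. The following Python sketch is purely illustrative – the function name and its parameters are invented here, and how such a claim would actually be quantified depends on the applicable national law:

    def personalised_price_damages(price_paid, evidenced_alternative_price,
                                   personalisation_disclosed):
        """Hypothetical damages claim for undisclosed personalised pricing:
        the consumer recovers the difference between the personalised price
        paid and a cheaper contract they can show they would have concluded."""
        if personalisation_disclosed:
            return 0.0  # information duty satisfied; no claim on this basis
        return max(price_paid - evidenced_alternative_price, 0.0)

    # Example: a consumer paid a personalised price of 120 and can show a
    # comparable offer at 100 that they would have accepted if informed.
    print(personalised_price_damages(120.0, 100.0, False))  # -> 20.0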

C. Pre-contractual Duties and Exploitative Contracts

In contrast to price discrimination, which pre-contractual duties under contract and/or consumer law can address, ‘exploitative contracts’ require other legal instruments to remedy the problem effectively. If a company exploits the biases of the other contracting party, or even actively induces that party into unfavourable terms, disclosure obligations simply appear futile: information requirements are largely ineffective if the recipient is unable to make a rational decision due to behavioural biases. The legal system must therefore find alternative solutions.

In the event that one of the parties to a contract is unable to oversee the consequences of its actions, contract law typically provides for various defences based on the doctrines of duress, mistake, undue influence or misrepresentation, which focus on the perspective of the aggrieved party. However, these legal institutions do not provide a remedy where widespread or highly individual biases are exploited by companies:51 The defence of duress is not applicable, since algorithmic manipulation does not involve a wrongful or illegitimate threat. ‘Mistake’ is also not an available defence for the customers, since the error must relate to specific contractual clauses. The equitable doctrine of ‘undue influence’ requires a relationship of trust or confidence between the parties, which is mostly missing in AI-based contracts. ‘Misrepresentation’ requires false statements, which are also usually not present in the case of AI-based exploitative contracts.

Owing to these deficiencies in the law’s capacity to remedy the problem of exploitative contracts, legal scholars thus argue in favour of recognising a ‘right to cancel the contract’52 if: (i) the customer has significant rationality deficits; and (ii) the trader exploits or even induces this suboptimal decision-making behaviour in its customers.53

50 A Reyna, ‘The Price Is (Not) Right: The Perils of Personalisation in the Digital Economy’, InformaConnect, 4 January 2019, informaconnect.com/the-price-is-not-right-the-perils-of-personalisation-in-the-digital-economy/. 51 Cf Mik (n 45). 52 Ebers (n 45).


V. Formation of Contract

The use of autonomous (AI-based) software agents for concluding contracts has caused a heated debate54 as to whether computer-generated declarations can be attributed (eg as an offer or acceptance) to a natural or legal person, even if the operator had no concrete idea what the system would do and/or the AI misrepresented the operator’s intentions. This debate does not need to be traced in detail here. Rather, I will limit myself to a cursory overview of the solutions discussed.

One possibility is to rely on the objective theory of contract, according to which contract law does not inquire how or why a statement came into being, but only whether the reasonable addressee of such a statement would think the other party intends to contract on the terms provided.55 Another way to deal with computer-generated declarations is to focus on the operator’s prior intention embodied in the programming of the system.56 Relying on either or both of these theories, courts all over the world have regarded computers as mere tools, attributing their actions to those who operate them. In this vein, the US Court of Appeals in State Farm Mutual Insurance Co v Bockhorst57 regarded computer errors as errors of the computer’s human controllers, stating:

Holding a company responsible for the actions of its computer does not exhibit a distaste for modern business practices (…). A computer operates only in accordance with the information and directions supplied by its human programmers. If the computer does not think like a man, it is man’s fault.

Similarly, in the UK, Lord Denning discussed in Thornton v Shoe Lane Parking Ltd58 the situation of a customer putting money into an automatic vending machine and being issued with a ticket, concluding that the offer was made when the proprietor of the machine held it out as being ready to receive the money, and that the customer accepted the offer by inserting money into the machine.

53 Such a right warrants, of course, further clarification. In particular, it is necessary to draw the line between permissible informing and/or nudging on the one hand and (illegal) exploitation or outright manipulation on the other. 54 Cf T Allen and R Widdison, ‘Can Computers Make Contracts?’ (1996) 9 Harvard Journal of Law & Technology 26; G Sartor, ‘Agents in Cyber Law’ in G Sartor and C Cevenini, ‘Proceedings of the Workshop on the Law of Electronic Agents (LEA02)’ (CIRSFID, 2002) 7; J Turner, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan, 2019) 106ff; for German law cf C Wendehorst and J Grinzinger, ‘§ 4 Vertragsrechtliche Fragestellungen beim Einsatz intelligenter Agenten’ in M Ebers et al. (eds), Künstliche Intelligenz und Robotik – Rechtshandbuch (CH Beck, 2020) 149ff. 55 See JM Perillo, ‘The Origins of the Objective Theory of Contract Formation and Interpretation’ (2000) 69 Fordham Law Review 427; TAO Endicott, ‘Objectivity, Subjectivity, and Incomplete Agreements’ in J Horder (ed), Oxford Essays in Jurisprudence, Fourth Series (Oxford University Press, 2000) 151. In the context of AI, cf ch 4 in this book; moreover E Mik, ‘From Automation to Autonomy: Some Non-existent Problems in Contract Law’ (2020) 36(3) Journal of Contract Law 205, papers.ssrn.com/sol3/papers.cfm?abstract_id=3635346. 56 Mik (n 55); see also Wendehorst and Grinzinger (n 54), discussing framework agreements and the platform model in order to attribute AI-generated declarations to humans. 57 State Farm Mutual Insurance Co v Bockhorst 453 F.2d 533 (10th Circuit 1972). 58 Thornton v Shoe Lane Parking Ltd [1971] 2 QB 163.

Likewise, in Germany, the Federal Supreme Court held in its online Flugbuchung decision:59

It is not the computer system, but the person (or the company) using it as a means of communication, who makes the declaration or who is the recipient of the submitted declaration. The content of the declaration is, therefore, not to be determined according to how the automated system is likely to interpret and process it, but according to how the human addressee may understand it in good faith and according to custom.

Similar observations can be found in Quoine Pte Ltd v B2C2 Ltd,60 a recent appellate case in Singapore on automated trading systems. The possibility of automating the contracting process has also been confirmed by a series of e-commerce regulations, such as the UN Convention on the Use of Electronic Communications in International Contracts 200561 and the Uniform Electronic Transactions Act.62

However, all of the above-mentioned court decisions and regulations concern so-called ‘deterministic algorithms’, which always produce precisely the same output given the same input. Whether traditional contract theories can be applied to cases where the software can learn from experience, adapting its own responses to varying conditions, is an open question – let alone constellations in which two machines (M2M) communicate with each other in machine language. Precisely because of these concerns, legal scholars have discussed a number of alternative legal concepts, such as understanding AI systems as legal agents; denying validity to transactions generated by autonomous systems; or granting legal personhood to software agents.63

59 BGHZ 195, 126, para 17. Translation by the author of this chapter. 60 Quoine Pte Ltd v B2C2 Ltd [2020] SGCA (I) 2. 61 Cf Art 12 UN Convention on the Use of Electronic Communications in International Contracts 2005: ‘A contract formed by the interaction of an automated message system and a natural person, or by the interaction of automated message systems, shall not be denied validity or enforceability on the sole ground that no natural person reviewed or intervened in each of the individual actions carried out by the automated message systems or the resulting contract.’ 62 Cf s 14 Uniform Electronic Transactions Act 1999: ‘A contract may be formed by the interaction of electronic agents of the parties, even if no individual was aware of or reviewed the electronic agents’ actions or the resulting terms and agreements.’ 63 For an in-depth discussion of the various concepts see Allen and Widdison (n 54); S Chopra and LF White, A Legal Theory for Autonomous Artificial Agents (University of Michigan Press, 2011); Bert-Jaap Koops et al., ‘Bridging the Accountability Gap: Rights for New Entities in the Information Society?’ (2010) 11(2) Minnesota Journal of Law, Science & Technology 497; Jean-Francois Lerouge, ‘The Use of Electronic Agents Questioned Under Contractual Law: Suggested Solutions on a European and American Level’ (2000) 18 The John Marshall Journal of Computer & Information Law 403, 410; A Matthias, Automaten als Träger von Rechten. Plädoyer für eine Gesetzesänderung, PhD Thesis (Humboldt-University, Berlin 2007); G Teubner, ‘Rights of Non-humans? Electronic Agents and Animals as New Actors in Politics and Law’ (2006) 33 Journal of Law & Society 497, 502; U Pagallo, The Laws of Robots: Crimes, Contracts and Torts (Springer, 2013); Sartor (n 54); S Wettig and E Zehendner, ‘A Legal Analysis of Human and Electronic Agents’ (2004) 12 Artificial Intelligence and Law 111, 112.
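The distinction between deterministic and learning systems that drives this debate can be illustrated with a short Python sketch. Both agents are invented for the example; the point is only that the deterministic agent’s output is fully traceable to a rule fixed at the time of programming, whereas the learning agent’s output also depends on the data it has encountered since:

    class DeterministicAgent:
        """Same input always yields the same offer (cf the algorithms in
        Quoine v B2C2, which functioned exactly as programmed)."""
        def offer(self, market_price):
            return market_price * 1.05  # fixed 5% markup set by the programmer

    class LearningAgent:
        """The offer depends on the history of observed prices, so the
        same input need not yield the same output over time."""
        def __init__(self):
            self.history = []
        def offer(self, market_price):
            self.history.append(market_price)
            average = sum(self.history) / len(self.history)
            # the markup now drifts with 'experience' rather than a fixed rule
            return market_price * (1.0 + 0.05 * (market_price / average))

    d, l = DeterministicAgent(), LearningAgent()
    print(d.offer(100.0), d.offer(100.0))  # 105.0 105.0 - fully predictable
    print(l.offer(100.0))                  # 105.0
    l.offer(140.0)                         # intervening 'experience'
    print(l.offer(100.0))                  # no longer 105.0 for the same input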

Other scholars, however, want to adhere to the principle in contract law that all transactions produced by a computer must be attributed to its operator, even if the computer system acts ‘autonomously’ and generates other systems or reprograms itself.64 According to this view, what matters is not the decision-making process itself (which might be opaque and unforeseeable), but the output of the decision-making process from the perspective of a reasonable addressee. Such a view, however, is a reductio ad absurdum of the fundamental principle of consensus ad idem in contract law. If the existence of a contract no longer depends on the intention to conclude a particular contract with specific terms, but only on turning on the machine, combined with the general idea that the machine will make some kind of declaration (offer or acceptance) based on some input data, then the contract as such becomes a mere fiction.

VI. Defects in Consent

A. Doctrine of Mistake

Similar problems arise with respect to the doctrine of mistake and other comparable doctrines.65 Ultimately, if the user has neither a specific intention nor an actual state of mind when the contract is concluded, there can be no (unilateral) divergence between his/her inner will and the outer expression of it. Courts confronted with the question of whether a mistake is attributable to the use of electronic agents face the difficult task of assessing whose knowledge is relevant and at what point in time, and whether certain expectations, such as the user’s expectation that the software will function without error, are worth protecting.

While addressing these issues in Quoine v B2C2,66 the Singapore Court of Appeal (CA) declined to apply the doctrines of unilateral and common mistake in a case concerning automated trading algorithms that resulted in trades being concluded at 250 times the market rate. In assessing the state of knowledge attributable to the parties, the CA stated that in the case of deterministic algorithms, ‘the relevant inquiry cannot be directed at the parties themselves, who had no knowledge of or direct personal involvement in the formation of contract’.67

64 Mik (n 55). 65 Whereas in many systems, the doctrine of mistake is available in a wide range of circumstances for mistakes in communication and mistakes regarding essential elements of the contract or the scope of the contract (eg in Austria and Germany), the English and Irish doctrines of mistake as to facts are very narrow. Instead, English and Irish law additionally recognises the doctrine of ‘innocent’ misrepresentation to deal with cases that in some other systems would fall under the doctrine of mistake. Cf Draft Common Frame of Reference (DCFR), Principles, Definitions and Model Rules of European Private Law, Full Edition, prepared by the Study Group on a European Civil Code and the Research Group on EC Private Law (Acquis Group), edited by C v Bar and E Clive (Sellier European Law Publishers, 2009), Vol 1, Notes to II.-7:201, p 464. 66 Quoine Pte Ltd v B2C2 Ltd [2020] SGCA (I) 2. 67 Quoine v B2C2, [98].

Instead, the relevant inquiry should be directed at the knowledge of the developer or deployer of the program, which must be assessed from the time of programming up until the formation of the contract. Since the trading price was the result of deterministic algorithms, and these algorithms had functioned exactly as they had been programmed, the CA concluded that no case of unilateral mistake in the use of automated trading algorithms was made out, whether at common law or in equity.

Quoine v B2C2 demonstrates that the existing principles of common law and equity are sufficiently flexible to provide sound solutions when it comes to deterministic algorithms. However, it remains to be seen whether this also holds true for ML algorithms. Since ML algorithms are programmed to continually ‘learn’ from data input and modify their decision-making process, they may no longer reflect the intent of their programmers. Hence, the reasoning in Quoine v B2C2 does not apply squarely to the characteristics of ML algorithms.68 Machines can also make very different mistakes from humans. Consequently, contract law jurisprudence will need to evolve – and soon – to accommodate ML algorithms.

B. Fraud

The limitations of traditional doctrines in the case of self-learning algorithms also become apparent in other cases, for example when it comes to the question of whether contracts concluded with the implicit (‘fraudulent’) knowledge of an AI system can be attributed to one of the parties as fraudulent behaviour.

Imagine, for example,69 that a company uses an AI system to decide which of its many properties it should sell. To this end, the company trains the algorithm on historical data regarding previous sales decisions. One factor that significantly affects the sales decisions is contamination. However, the presence of such contamination is not a direct input factor for training the algorithm. Instead, the AI system ‘learns’ that contamination is a major risk factor. Imagine, furthermore, that the AI system now recommends the sale of a contaminated site, but without explicitly disclosing the ‘true’ reasons, because in the algorithmic model the factor ‘contamination’ is merely implicit knowledge. If the company now sells the property based on this recommendation – can the buyer challenge the purchase contract on the grounds of fraudulent misrepresentation?

In most legal systems, the agent’s knowledge is imputed to the principal.70 Thus, as a rule, a person who entrusts another with executing certain affairs on his or her own responsibility will be regarded as having the knowledge which the other has acquired in that context.

68 Cf V Ooi and KP Soh, ‘Rethinking Mistake in the Age of Algorithms: Quoine Pte Ltd v B2C2 Ltd’ (2020) 31(3) King’s Law Journal 367–372, 372. 69 The following example is taken from P Hacker, ‘Verhaltens- und Wissenszurechnung beim Einsatz von Künstlicher Intelligenz’ (2018) 9(3) Rechtswissenschaft 243–288. 70 Cf HG Beale (ed), Chitty on Contracts, 29th edn (Sweet and Maxwell, 2004) 6-048 and 6-068; GH Treitel, The Law of Contract, 14th edn (Sweet and Maxwell, 2015) 9-029 with references.

However, it is unclear whether these rules can be applied to AI systems; in particular, it remains to be seen under which conditions it is possible to attribute not only explicit but also implicit knowledge of an AI system to a human principal. At the end of the day, agency law reasoning only applies to a person in the legal sense who is appointed to act on another person’s behalf. Even if one accepts an analogous application of agency law in the case of AI systems, the problem would not be solved. After all, we are not concerned with the explicit knowledge of the agent, ie knowledge that is directly evident in the algorithmic model or in the algorithmic output. Rather, it is about implicit knowledge, ie about relevant facts that are not visible from the outside.

Arguably, it may seem plausible to assume a strict attribution of implicit AI knowledge. However, such a strict attribution would not be justified. If someone employs a human agent, no one would consider the principal liable for all the subconscious mental activities of this human agent. If all implicit knowledge of an AI system were to be attributed to the user, the other party to the contract would be unjustifiably better off. Moreover, such a far-reaching attribution would hardly be feasible from a practical point of view, since it would be unclear which facts are implicitly represented in the model.

This example shows, again, that traditional concepts of contract law – in this case the attribution of knowledge – lag behind the pace of technological advancement, including in cases where AI systems are used for concluding contracts. In this regard, too, legislators and courts are faced with the task of adapting existing legal principles and rules to the application of new technologies.
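The notion of ‘implicit knowledge’ in the contamination hypothetical can be made concrete with a small machine learning sketch in Python (using scikit-learn). The dataset and features are invented for the illustration: contamination never appears as an input, yet the model effectively encodes it through a correlated proxy feature, so the ‘true’ reason for a recommendation is visible neither in the inputs nor in the output:

    from sklearn.tree import DecisionTreeClassifier

    # Features per property: [former_industrial_use, distance_to_city_km].
    # In this toy dataset, former industrial use is a near-perfect proxy
    # for contamination, which itself is never recorded as a feature.
    X = [[1, 5], [1, 8], [1, 3], [0, 5], [0, 9], [0, 2]]
    y = [1, 1, 1, 0, 0, 0]  # 1 = historically recommended for sale

    model = DecisionTreeClassifier(random_state=0).fit(X, y)

    # The model recommends selling a contaminated, formerly industrial site,
    # but the factor actually driving the recommendation remains implicit.
    print(model.predict([[1, 4]]))  # -> [1]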

VII. Incorporation of Standard Terms

Another problem, which is hardly discussed in the legal literature,71 concerns the incorporation of standard terms. Most legal systems have developed a series of protective mechanisms for the review of standard terms, in order to protect consumers or – more generally – all parties (whether a consumer or a business) against whom the standard terms are used.72

The first question that arises in this context is whether standard terms exist at all if someone uses software that was created not by him, but by an independent developer. Similar issues emerge if the user feeds some clauses into the software in advance, but the learning software then – depending on the negotiation situation – partly incorporates them into the contract, partly does not adopt them, and partly even modifies them: is the operator of the software then to be considered the ‘user’ of the standard terms, although the clauses in their concrete form were pre-formulated not by him, but by the software?

71 See, however, J Groß, ‘AGB 4.0: Allgemeine Geschäftsbedingungen im Rahmen autonomer Vertragsschlüsse’ (2018) 1 Zeitschrift zum Innovations- und Technikrecht (InTeR) 4–9. 72 Cf Ebers (n 30) 205ff.

What about the situation where both contracting parties are using software agents which communicate in real time? Who is then to be regarded as the ‘user’ of standard terms, bearing the risk of invalid standard terms? In particular, what about a battle of forms, ie where both parties use AI systems and these systems try to introduce conflicting standard terms into the contract? If we applied either the first shot rule or the last shot rule in these cases,73 real-time communication between the agents would ultimately lead to random results, depending on the order in which a system tries to introduce its standard terms into the contract – whether first or last.

Problems also arise with regard to the reasonable notice test,74 according to which standard terms only form part of the contract if the user has given the other contractual party a reasonable opportunity to become acquainted with the terms. This requirement obviously does not fit ad hoc contracts that are concluded using (autonomous) software agents. Arguably, it may be possible to send the general terms and conditions before the contract is concluded, provided that at least one human being is involved in this communication. However, this criterion reaches its limits in the case of multilateral autonomous contracts, wherein both contracting parties use software agents precisely in order to automate the conclusion of contracts and to avoid directly controlling the conclusion of each individual contract.

These few examples show that the incorporation of standard terms into AI-based autonomous contracts leads to many legal inconsistencies.
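The claim that first shot and last shot rules would produce effectively random outcomes between real-time agents can be illustrated in a few lines of Python. The simulation is hypothetical; random shuffling stands in for the unpredictable network timing that determines whose terms arrive first or last:

    import random

    def first_shot_winner(exchange):
        """Under a first-shot rule, the terms referenced first prevail."""
        return exchange[0]

    def last_shot_winner(exchange):
        """Under a last-shot rule, the terms referenced last prevail."""
        return exchange[-1]

    # Two agents fire their standard terms at each other in real time;
    # timing alone decides whose terms arrive first or last.
    messages = ["standard_terms_of_A", "standard_terms_of_B"]
    random.shuffle(messages)  # stand-in for network latency
    print(first_shot_winner(messages), last_shot_winner(messages))
    # Which party's terms 'win' changes from run to run.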

VIII. Interpretation

The use of AI systems for drafting contracts, especially the use of contract generators, also raises issues of interpretation. First, it is already contentious whether special rules of interpretation should apply to contracts concluded by traditional electronic means.75 This problem is exacerbated if the art of drafting and negotiating contracts is delegated to AI agents. For example, AI systems can be fed with certain parameters regarding the factual environment of the contract (goods sold, quantities, time and place of performance, the concrete counterparty) in order to find the contractual clauses that are most favourable to the user, or least likely to trigger a lengthy negotiation process or to be litigated over.

73 For an overview of different legal solutions to the battle-of-forms problem see G Rühl, ‘The Battle of the Forms: Comparative and Economic Observations’ (2003) 24 University of Pennsylvania Journal of International Economic Law 189–224. 74 Parker v South Eastern Railway Co Ltd [1877] 2 CPD 416; Thornton v Shoe Lane Parking Ltd [1971] 2 QB 163. 75 See, eg, CP Gillette, ‘Interpretation and Standardization in Electronic Sales Contracts’ (2000) 53 Southern Methodist University Law Review 1431; F Schuster, in G Spindler and F Schuster (eds), Recht der elektronischen Medien, 3rd edn (Beck, 2015), Bürgerliches Gesetzbuch (BGB), § 305c, para 14.

In such cases, the question arises as to how ambiguous terms are to be interpreted. Most legal systems converge on the principle that a contract is to be interpreted not according to the literal meaning of its words, but according to the meaning intended by the parties in the context of the contract.76 Determining the intended meaning of contractual terms is difficult when the task of contract drafting and negotiation is left to software. In such a scenario, how can the courts establish clarity on the intentions of the parties? One way in which the AI system could give evidence of the intention of the parties is through computer-generated logs of the parties’ actions when using the software to draft the contract (eg deleting, adding or amending clauses).77 However, this only works if the parties were personally involved in the process of negotiation. If, on the other hand, the software negotiates the terms of the contract itself, there is no concrete intention which can be evidenced. In this case, other evidence must be furnished as an alternative, such as the general intent of the operator or how a reasonable recipient could objectively interpret the actions of the software agent in the specific situation.

Another way to deal with ambiguous clauses is to apply the contra proferentem rule, which states that the words of the contract must be construed against the draftsman.78 This again raises the question of whether a party using an AI system can be regarded as the ‘draftsman’, even though the contractual terms were drafted not by him/her but by the AI system.79

The interpretation of contracts also presents a thorny issue when the contract is written in code.80 Do courts have to interpret the programming language? If so, by what criteria? Or does contract interpretation still depend on the individual statements and actions of the contracting parties? In principle, the parties are free to choose the language of the contract and the way in which the contract is concluded. Therefore, the content of the contract can be formulated not only in a foreign or artificial language, but also in a programming language.81 Accordingly, it is not necessary to first formulate the contractual terms in natural language in order to determine the content of the contract. Rather, the contracting parties can also express their intent directly in a programming language. However, natural language differs in many respects from programming language. Hence, it remains to be seen how the courts will grapple with this difficulty.

76 For a traditional analysis of the contextual approach to the interpretation of ambiguous terms in a contract, see generally Investors Compensation Scheme Ltd v West Bromwich Building Society [1997] UKHL 28; see also E Peel, Treitel: On The Law Of Contract, 14th edn (Sweet and Maxwell, 2015) 227–233 [6-006–6-013]. 77 I Ng (H Ying), ‘The Art of Contract Drafting in the Age of Artificial Intelligence: A Comparative Study Based on US, UK and Austrian Law’ (2017) TTLF Working Papers No 26, 51. 78 E Peel, Treitel: On The Law Of Contract, 14th edn (Sweet and Maxwell, 2015) 270–271 [7-015]. In European Consumer Law, this rule goes even further: according to Art 5(2) Unfair Contract Terms Directive (93/13/EEC), any doubt about the meaning of a clause is always to be resolved in the manner most favourable to the consumer. 79 Cf above, under VII. 80 Cf ch 9 in this book.
81 HM Anzinger, ‘Smart Contracts in der Sharing Economy’, in M Fries and BP Paal (eds), Smart Contracts (Mohr Siebeck, 2019) 33–72, 55.

Finally, AI technology can also be used to automate the process of contract interpretation itself.82 Courts could use such a system to resolve an interpretive dispute. In such a case, the AI system, through ML, could assist the court by weighing and balancing the competing arguments to arrive at the probable intention of the parties.

IX. Fairness of Contracts

The use of AI systems for contracting can significantly distort the balance between the parties’ rights and obligations to the detriment of consumers and other customers. In the EU, the Unfair Contract Terms Directive 93/13 (UCTD)83 offers consumers protection against contractual terms that have been drafted in advance by the other party to the contract and that derogate, to the detriment of the consumer, from the otherwise applicable law. Beyond the UCTD, a number of EU Member States as well as non-European countries provide for the content review of standard contract terms, not only for business-to-consumer contracts, but also for business-to-business contracts and contracts concluded between consumers.

As discussed before, online traders may use various automated decision-making systems and profiling techniques, including AI systems, to personalise prices or terms. In order to protect customers from price and terms discrimination, the European Parliament stated as early as 2019 that consumers should not only be informed about how automated decision-making systems work, but also ‘about how to reach a human with decision-making powers, and about how the system’s decisions can be checked and corrected’.84 Indeed, such an information duty, if recognised under contract/consumer law, would not only facilitate human oversight of automated decision-making and trigger – in case of its non-observance – contractual claims for damages, but also improve the review of unfair contractual terms. Furthermore, any terms that have been personalised through the use of automated decision-making, but not disclosed to consumers as such, could then be deemed unfair and, therefore, non-binding.85

82 Cf Ryan Catterwell, ‘Automation in contract interpretation’ (2020) 12(1) Law, Innovation and Technology 81–112, doi:10.1080/17579961.2020.1727068. 83 Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts [1993] OJ L95/29. 84 European Parliament resolution on automated decision-making processes: ensuring consumer protection and free movement of goods and services (2019/2915(RSP)). 85 M Loos and J Luzak, Update the Unfair Contract Terms directive for digital services, study requested by the JURI committee of the European Parliament, PE 676.006 – February 2021. Arguably, if a digital service provider personalised their provision of services and perhaps also their contractual terms for individual consumers, this could threaten the applicability of the UCTD, because this Directive only applies to terms that have not been individually negotiated. However, as Luzak points out correctly, personalised terms are not synonymous with individually negotiated terms, because individually negotiated terms require a real opportunity for the consumer to influence the content of such terms; J Luzak, ‘Tailor-made Consumer Protection: Personalisation’s Impact on the Granularity of Consumer Information’ in M Corrales et al. (eds), Legal Design: Integrating Business, Design and Legal Thinking with Technology (Edward Elgar, 2021) 107–132, papers.ssrn.com/sol3/papers.cfm?abstract_id=3696483.


X. Contractual Liability

If one of the parties breaches the contract because of a malfunction of an AI system, the question arises whether this party is liable for non-performance of the contract or for other damages caused by the ‘misconduct’ of such a system.

When it comes to non-conforming goods or non-conforming digital content/services, most legal systems, and also EU consumer law, do not require the trader to be at fault for the consumer’s claim for repair/replacement (or other measures to bring the good/digital content into conformity), price reduction, or termination of the contract.86 According to both the Digital Content and Services Directive 2019/770 (DCSD)87 and the Sale of Goods Directive 2019/771 (SGD),88 the trader’s liability is, as a matter of principle, strict. Therefore, the consumer is not required to establish that the trader was aware or should have been aware that the AI system was likely to act in a way that led to damage suffered as a consequence of a breach of contract.

However, this form of strict contractual liability applies only to the above-listed remedies under the said laws. The regulation of damages is, on the other hand, left to the EU Member States.89 As a consequence, EU Member States90 remain free to maintain or introduce systems in which liability for damages is based on fault91 or admits force majeure as a defence.92 If the consumer’s right to damages for breach of contract is subject to these conditions, it is doubtful whether the trader can be held liable in cases where the specific error – and thus the output of the AI system – was unforeseeable and unavoidable in the specific situation.

86 T Riehm and AM Abold, ‘Mängelgewährleistungspflichten des Anbieters digitaler Inhalte’ (2018) 62(2) Zeitschrift für Urheber- und Medienrecht 82–91, 88; F Rosenkranz, ‘Article 10 – Third-party rights’ in R Schulze and D Staudenmayer (eds), EU Digital Law – Article-by-Article Commentary (Beck/Nomos/Hart, 2020) 196, para 55. 87 European Parliament and Council Directive 2019/770 of 20 May 2019 on certain aspects concerning contracts for the supply of digital content and digital services [2019] OJ L136/1. 88 European Parliament and Council Directive 2019/771 of 20 May 2019 on certain aspects concerning contracts for the sale of goods, amending Regulation (EU) 2017/2394 and Directive 2009/22/EC, and repealing Directive 1999/44/EC [2019] OJ L136/28. 89 Art 3(10) DCSD 2019/770; Art 3(6) SGD 2019/771. Additionally, recital (73) DCSD 2019/770 and recital (61) SGD 2019/771 state as a principle that the consumer should be entitled to claim compensation for detriment caused by a lack of conformity with the contract. At the same time, the recitals also stress that such a right is already ensured in all Member States and that the directives are therefore without prejudice to national damages rules. 90 For a comparison between different legal systems, see Ebers (n 41) 941ff; A Schwartze, Europäische Sachmängelgewährleistung beim Warenkauf (Mohr Siebeck, 2000) 249ff, 331ff; C von Bar et al. (eds), Draft Common Frame of Reference (DCFR), Principles, Definitions and Model Rules of European Private Law, Full Edition, prepared by the Study Group on a European Civil Code and the Research Group on EC Private Law (Acquis Group) (European Law Publishers, 2009) vol 1, 774ff. 91 This is, for example, the case in Germany; cf § 280(1) BGB.
By contrast, under English and Irish law, contractual liability is strict and will arise in most cases of non-performance unless the failure to perform is excused; GH Treitel, Remedies for Breach of Contract: A Comparative Account (Clarendon Press, 1989). 92 Eg for French law, see H Beale et al., Ius Commune Casebooks for the Common Law of Europe: Cases, Materials and Text on Contract Law, 3rd edn (Hart Publishing, 2019) ch 28.3.

Domestic contract laws might come up with different answers.93 According to the predominant view, computer programs – including AI systems – are seen as mere tools which are used by human agents.94 Therefore, in order to hold a human accountable, what matters is not the corrupt output of the software, but the behaviour of the human actor involved. However, such an approach is problematic when it comes to autonomous systems. The higher the degree of automation, the less the human can be blamed for the specific behaviour of the AI system that led to a breach of contract and damages. If the user of an AI system can prove that the occurrence of damage was neither predictable nor avoidable in accordance with state-of-the-art technology, he/she cannot be held liable.

Imagine, for example, the following case:95 Apartment owner O commissions cleaning company C to clean his apartment weekly with the help of an AI vacuum cleaner. The AI navigates independently through unknown terrain by processing sensor data using a deep neural network, recognising obstacles, determining the need for cleaning and triggering the appropriate cleaning actions. In O’s apartment, there is a table whose legs are modelled after the trunk of a tree. The vacuum cleaner mistakes the table for a tree and bumps into it during its cleaning activity. There is an expensive vase on the table, which is broken. The error of the neural network was not foreseeable from the outside. In such a case, cleaning company C would not be liable for damages in legal systems which require fault.96

In light of these considerations, a growing number of scholars want to treat AI systems as ‘agents’ for which the human operator is liable according to the rules on vicarious liability,97 whereas others even call for autonomous AI systems to be granted limited legal capacity in order to close possible liability gaps in contract and tort law.98 Indeed, treating an AI system as an agent leads in most cases to contractual liability of the human operator for breaches of contractual obligations caused by machines, regardless of whether such conduct was planned or envisaged.

93 For an overview of the different theories, cf Bert-Jaap Koops et al., ‘Bridging the Accountability Gap: Rights for New Entities in the Information Society?’ (2010) 11(2) Minnesota Journal of Law, Science & Technology 497. 94 P Cerka et al., ‘Liability for damages caused by artificial Intelligence’ (2015) 31 Computer Law & Security Review 376–389, 384ff. For Germany, cf S Horner and M Kaulartz, ‘Haftung 4.0: Rechtliche Herausforderungen im Kontext der Industrie 4.0’ (2016) Innovations- und Technikrecht 22–27, 23; J Hanisch, ‘Zivilrechtliche Haftungskonzepte für Robotik’ in E Hilgendorf (ed), Robotik im Kontext von Recht und Moral (Nomos, 2014) 27–61, 32. 95 The following example is, again, taken from Hacker (n 69). 96 Similarly, company C would not be liable if, instead of the AI system, they had used a normal vacuum cleaner which exploded for no visible reason, although it was previously working perfectly. 97 For the international debate, cf n 54. For Germany, cf Hacker (n 69); J-E Schirmer, ‘Rechtsfähige Roboter?’ (2016) 71 Juristen Zeitung 660–816, 665; G Teubner, ‘Digitale Rechtssubjekte’ (2018) 218 Archiv für die civilistische Praxis 155–205, 186; C Wendehorst and J Grinzinger, ‘§ 4 Vertragsrechtliche Fragestellungen beim Einsatz intelligenter Agenten’ in M Ebers et al. (eds), Künstliche Intelligenz und Robotik – Rechtshandbuch (CH Beck, 2020) 168ff, para 82ff.
98 LB Solum, ‘Legal Personhood for Artificial Intelligence’ (1992) 70(4) North Carolina Law Review 1231; CEA Karnow, ‘Liability for Distributed Artificial Intelligence’ (1996) 11 Berkeley Technology Law Journal 147; T Allen and R Widdison, ‘Can Computers Make Contracts?’ (1996) 9 Harvard Journal of Law & Technology 26; G Sartor, ‘Agents in Cyber Law’ in G Sartor and C Cevenini, Proceedings of the Workshop on the Law of Electronic Agents (LEA02) (Bologna, CIRSFID 2002) 7; G Teubner, ‘Rights of Non-humans? Electronic Agents and Animals as New Actors in Politics and Law’ (2006) 33 Journal of Law & Society 497, 502; A Matthias, Automaten als Träger von Rechten. Plädoyer für eine Gesetzesänderung, PhD Thesis (Humboldt-University, Berlin 2007); S Chopra and LF White, A Legal Theory for Autonomous Artificial Agents (University of Michigan Press, 2011).


XI. Outlook

The preceding tour d’horizon through AI-based contracting illustrates that the use of AI can bring a number of benefits to contracting parties. AI-driven systems can help make contracting faster, more effective and more efficient. In addition, the predictions and decisions made by AI systems are generally more accurate and less biased than those made by humans. Thus, the AI systems used for contracting can help to reduce transaction costs, avoid biases, and enable more rational and sophisticated choices.

Some legal futurists99 even speculate about whether AI systems could one day be used to create perfect, ‘complete’ contracts, ie contracts that would anticipate all future state contingencies, rendering contract law obsolete.100 If an AI system were able to consider the interests of both contracting parties alike and assign every risk that might arise to one of the parties, such a contract would be intrinsically effective and unassailable, since the contract itself would take into account all extraneous circumstances that might affect its validity. Such assumptions, however, arguably belong more to the realm of legal science fiction.101

In fact, the foregoing analysis shows that the use of AI systems for contracting does not lead to complete or perfect contracts, but rather creates new imbalances and disruptions in the contractual relationship – starting with new power and information asymmetries; the problem of price discrimination and exploitative contracts; the attribution of AI-generated declarations and (implicit) knowledge; defects in consent; the incorporation of standard terms; the interpretation of (coded) contracts; the subsequent review of AI-driven contracts; and the question of when someone is contractually liable for damages caused by an AI system.

99 See especially D Hadfield-Menell and GK Hadfield, ‘Incomplete Contracting and AI Alignment’ (2018) USC CLASS Research Papers Series No CLASS18-10, dx.doi.org/10.2139/ssrn.3165793. 100 The literature on law and economics generally acknowledges that all contracts in the real world are necessarily incomplete. Against this backdrop, default rules provided by contract law are perceived as a tool for filling the gaps in incomplete contracts, at the same time reducing transaction costs by (empirically) reflecting the parties’ preferences; cf I Ayres and R Gertner, ‘Filling Gaps in Incomplete Contracts: An Economic Theory of Default Rules’ (1989) 99 Yale Law Journal 87; J Cziupka, Dispositives Vertragsrecht (Mohr Siebeck, 2010) 291ff, 339ff. 101 Cf JM Lipshaw, ‘The Persistence of “Dumb” Contracts’ (2019) 2(1) Stanford Journal of Blockchain Law & Policy 1.

For all these impending challenges, courts all over the world must rise to the occasion and develop solutions. Pertinently, it should not be forgotten that AI is a dual-use technology that can be used for both beneficial and harmful ends. The preceding analysis of ‘pathological’ cases should therefore not be misunderstood as an appeal for contract law always to intervene in a corrective and regulatory manner. Rather, we should strive for a contractual environment that is adapted to AI and that, on the one hand, allows for the equitable use of AI while, on the other hand, also taking into account its potential for abuse and error.

3
When AI Meets Smart Contracts: The Regulation of Hyper-Autonomous Contracting Systems?
MIMI ZOU

I. Introduction

‘Code is Law’ – Lawrence Lessig’s statement, made over two decades ago, regarding computer code as a form of regulation has become something of a cliché. Lessig argued that there were four forces or ‘modalities’ of regulation that constrained individual actions: the law, social norms, markets, and architecture.1 According to Lessig, ‘Changes in any one will affect the regulation of the whole. Some constraints will support others; some may undermine others … A complete view, therefore, must consider these four modalities together.’2 Lessig’s subject of inquiry was the regulation of the Internet, the architecture of which was computer code. Code embodied the set of protocols and rules in the software and hardware of cyberspace that determined how people interacted and behaved in this space, just as the law regulated such interactions.3

The idea of ‘code is law’ has made frequent appearances in discussions of the two most talked-about new technologies in recent years: artificial intelligence (AI) and blockchain. The software protocols and rules that constitute the architecture of these technologies enable varying degrees of autonomy, a characteristic commonly ascribed to various applications of AI and blockchain. An AI system is said to be acting autonomously where it has the functional ability to determine for itself how to undertake a particular task, with little or no human intervention.4 Presently, autonomous self-driving vehicles are probably the most well-known application of autonomous AI systems. A prominent application of blockchain technology to date is smart contracts, which enable the autonomous execution of an agreement between two or more counterparties.

1 L Lessig, ‘The New Chicago School’ (1998) 27 The Journal of Legal Studies 661. 2 L Lessig, Code: And Other Laws of Cyberspace 2.0 (Basic Books, 2006) 123. 3 ibid 124–125. 4 R Abbott, The Reasonable Robot (Cambridge University Press, 2020) 34.

Smart contracts are created in code and run on blockchain platforms such as Ethereum. As soon as the parameters or conditions laid down in the code are met, a smart contract automatically executes the relevant transaction in a distributed manner by the nodes in the blockchain according to the preprogrammed rules. Once set in motion, its execution cannot be controlled by any single entity or third party.

In recent years, there have been emerging technological innovations that seek greater convergence between AI and blockchain. For example, machine learning techniques may be integrated with smart contracts to detect security vulnerabilities in smart contracts.5 Machine learning algorithms may support smart contracts in handling more complex and dynamic transactions, especially where oracles are involved to connect data from the outside world.6 Blockchain may also help to reduce AI’s black-box problem by offering greater transparency and visibility of AI decisions.7 Blockchain technology further offers the prospect of so-called ‘decentralised’ and ‘collaborative’ AI, where open-source machine learning models can be shared and trained by anyone within a publicly accessible, transparent, and secure framework.8

A notable example of potential AI-smart contract convergence is the evolution of Decentralized Autonomous Organizations (DAOs). DAOs are organisational structures created on the blockchain that operate according to a set of protocols through smart contracts. They can coordinate an array of social and commercial activities, such as transacting with third parties and performing managerial tasks. DAOs are autonomous in the sense that their functioning does not rely on the original creators or developers or on a centralised server. Some have envisioned fully-fledged DAOs as ‘emancipated, AI-driven machines’9 running on blockchains, whereby smart contracts allow Internet-connected devices to autonomously interact or transact value with one another without human intermediaries.

The technological integration of AI and blockchain-based smart contracts may have profound effects on a variety of business and commercial transactions, creating what I will refer to as hyper-autonomous contracting systems. Such systems exhibit an extremely high degree of autonomy in their ability to create and execute transactions between and among humans and machines.

5 J Song, H He, Z Lu, C Su, G Xu and W Wang, ‘An Efficient Vulnerability Detection Model for Ethereum Smart Contracts’ in J Liu and X Huang (eds), Network and System Security: NSS 2019 (Springer, 2019) at doi.org/10.1007/978-3-030-36938-5_26. 6 R Maracini, ‘How to use machine learning algorithms as Oracles in Smart Contracts?’ (1 July 2019) at medium.com/artificial-intelligence-for-blockchain-smart/how-to-use-machine-learning-algorithms-as-oracles-in-smart-contracts-238c6353526a. 7 M Nassar, K Salah, MH Rehman and D Svetinovic, ‘Blockchain for explainable and trustworthy artificial intelligence’ (2020) 10 WIREs at doi.org/10.1002/widm.1340. 8 Microsoft has launched a research project on ‘Sharing Updatable Models (SUM) on Blockchain’ at www.microsoft.com/en-us/research/project/decentralized-collaborative-ai-on-blockchain. See further JD Harris and B Waggoner, ‘Decentralized & Collaborative AI on Blockchain’ (IEEE International Conference on Blockchain, 16 July 2019) at arxiv.org/abs/1907.07247. 9 P De Filippi and A Wright, Blockchain and the Law: The Rule of Code (Harvard University Press, 2018) 169.

transactions between and among humans and machines. Although the exact direction of travel for this emerging convergence remains difficult to predict, it is important for scholars, practitioners and policy-makers to begin investigating the potential ethical, legal, and societal challenges arising from this technological development and ways of addressing them. There are two good reasons for doing so, notwithstanding the fact that such investigation will inevitably involve some degree of speculation. First, the law too often struggles to keep pace with new and advanced technologies, even when such technologies become increasingly mainstream. Second, the law, as a regulatory modality, can also influence the trajectory of technological development and deployment. A pertinent illustration is the legal and regulatory landscape for autonomous vehicles that has quickly evolved in a number of countries over the past few years. Some scholars have warned of a system of algorithmic control based on the 'rule of code', even predicting a 'structural shift of power from legal rules and regulations administered by government authorities to code-based rules and protocols governed by decentralised blockchain-based networks'.10 This chapter probes the possibility of AI-enabled smart contracts as hyper-autonomous contracting systems with their own architecture of protocols and rules. However, code is unlikely to be the only mode of regulation.11 Building on insights from Lessig as well as specific scholarship related to relational contracts, I argue that the promise of contractual efficiency and 'trustless' automated execution of transactions through AI-enabled smart contracting should not be overstated. I examine the relevance of real-world contracting practices in their social settings and the likely implications of such practices for hyper-autonomous contracting. As hyper-autonomous contracting systems evolve in the near future, further empirical investigations into their operation in practice will be needed. Such investigations are necessarily interdisciplinary, involving relevant insights and methods from law, computer science, economics, and sociology. Beyond the 'rule of code', how such contracting systems will be shaped and constrained to varying extents by market forces, social norms and the law in day-to-day business transactions remains to be seen.

II.  Hyper-Autonomous Contracting: Super-Smart Contracts?

A.  Smart Contracts

The notion of smart contracts was articulated well before the advent of the blockchain. Perhaps the most cited definition of smart contracts to date comes from

10 ibid 7.
11 This chapter expands on some preliminary ideas from a review essay of the author. See M Zou, 'Code, and Other Laws of Blockchain' (2020) 40 Oxford Journal of Legal Studies 645.

Nick Szabo, referring to a 'computerized transaction protocol that executes the terms of a contract'.12 Szabo clarified that:

The general objectives of smart contract design are to satisfy common contractual conditions (such as payment terms, liens, confidentiality, and even enforcement), minimize exceptions both malicious and accidental, and minimize the need for trusted intermediaries. Related economic goals include lowering fraud loss, arbitration and enforcement costs, and other transaction costs.13
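Szabo's definition becomes more concrete with a deliberately simplified illustration. The sketch below (in Python rather than a blockchain language, with invented names and figures, and with no claim to reflect any real platform) shows the basic move his definition describes: a payment term rendered as executable code, so that the outcome follows mechanically from the encoded conditions rather than from a trusted intermediary's judgement.

```python
from datetime import date

class EscrowContract:
    """Toy rendering of Szabo's 'computerized transaction protocol':
    the payment term is code, so satisfaction of the term is checked
    and acted upon mechanically, not by a trusted intermediary."""

    def __init__(self, deposit: int, price: int, delivery_deadline: date):
        self.deposit = deposit                # funds locked in by the buyer
        self.price = price                    # agreed purchase price
        self.deadline = delivery_deadline     # encoded delivery term
        self.settled = False

    def settle(self, delivered: bool, today: date) -> str:
        # The 'contractual conditions' are evaluated by code alone.
        if self.settled:
            return 'already settled'
        self.settled = True
        if delivered and today <= self.deadline:
            return f'release {self.price} to seller'   # performance
        return f'refund {self.deposit} to buyer'       # non-performance

# Timely delivery releases payment; anything else triggers a refund.
contract = EscrowContract(deposit=100, price=100,
                          delivery_deadline=date(2022, 6, 1))
print(contract.settle(delivered=True, today=date(2022, 5, 20)))
```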

Blockchain has made Szabo's notion of smart contracts practically significant. With the ingenious combination of cryptographic techniques and economically incentivised consensus mechanisms, the blockchain provides a secure, decentralised digital ledger that contains accurate and virtually immutable records of transactions. As blockchain is a form of distributed ledger technology, each computer or device (referred to as a 'node') in the distributed network maintains the same updated version of the record without the need for a master copy. The blockchain is generally resilient and resistant to alteration of the data stored due to its distributed nature and the use of consensus mechanisms. These consensus mechanisms, involving all participating nodes in the network, enable the network to operate without a centralised authority or server to exchange, verify, and secure data of recorded transactions in the distributed and shared ledger. Furthermore, due to its cryptographic features, blockchain allows for pseudonymity whereby participants can engage in transactions without revealing their identity. Blockchain-based, open-source platforms such as Ethereum are commonly described as 'Turing-complete', meaning they can support any application written in software code. Smart contracts that are built and run on the Ethereum blockchain network may allow the whole or part of an agreement between users to be memorialised in code and executed automatically via blockchain. Smart contracts benefit from blockchain's security and tamper-resistance, which render transactions practically unalterable and irreversible in the distributed network of nodes. Importantly, there is no need to rely on a centralised authority or trusted intermediary to exchange, verify and secure the transaction, or on any external mechanism to enforce the encoded obligations. No single party, by default, can control or halt the transaction once the smart contract is set in motion. It has been argued that this self-executing feature of smart contracts, which enables the automation of performance and enforcement, can increase contractual efficiency and mitigate the risks of opportunistic behaviour.14 Moreover, as Mik puts it:

The commercial success of 'smart contracts' is also premised on the ability to translate contractual obligations into algorithms, a process aimed at the elimination of ambiguity

12 N Szabo, 'Smart Contracts' (1994) at www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart.contracts.html.
13 ibid.
14 De Filippi and Wright (n 9) 80–82.

and enhancement of legal certainty, as well as on the ability to produce error-free, reliable computer code.15
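The immutability and tamper-resistance invoked above can also be illustrated schematically. The following sketch is a toy model only, ignoring consensus, networking, and digital signatures: because each block commits to a hash of its predecessor, any retroactive alteration of a recorded transaction invalidates every later block and is immediately detectable.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a block's contents deterministically.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transaction: str) -> None:
    # Each new block commits to the hash of its predecessor.
    prev = block_hash(chain[-1]) if chain else 'genesis'
    chain.append({'prev_hash': prev, 'tx': transaction})

def verify(chain: list) -> bool:
    # Recompute every link; a retroactive edit breaks the chain.
    return all(chain[i]['prev_hash'] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list = []
append_block(chain, 'A pays B 10')
append_block(chain, 'B pays C 5')
print(verify(chain))                 # True: the ledger is consistent

chain[0]['tx'] = 'A pays B 1000'     # attempted retroactive alteration
print(verify(chain))                 # False: tampering is detectable
```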

On the other hand, the autonomous nature of smart contracts can 'lead to excessive rigidity and an inability to keep pace with changing circumstances'.16 This may make it difficult to manage any unforeseen problems or disputes involving smart contracts once they are executed. Moreover, once smart contracts are activated, the parties to the transaction generally lose any flexibility in performing their obligations, which can be problematic in longer-term relationships, as will be examined later in this chapter. Furthermore, it is often claimed that blockchain-based smart contracts are 'trustless'. Since transactions are maintained and verified through distributed consensus of the nodes in the decentralised network, there is purportedly no need for a 'trusted' intermediary. In addition, predefined algorithms that self-execute do not require recourse to third parties such as courts to interpret, monitor, or enforce the encoded obligations. However, smart contracts do raise new issues related to trust.17 For instance, some smart contracts have built-in conditions that rely on the occurrence of an off-chain event. Here, third-party oracles can provide the data for verifying whether the relevant off-chain event has indeed occurred. This necessarily requires a high degree of trust in these external entities. In other words, the oracle must itself be trustworthy. As Werbach puts it, 'blockchain is not entirely trustless. It may promote justified confidence, but not without vulnerability'.18 A well-known example of such vulnerability was the attack on The DAO, a decentralised autonomous organisation created on the Ethereum blockchain.
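The oracle problem just described can be made concrete with a small sketch (hypothetical throughout: the function names and the flight-insurance scenario are invented for illustration). The contract cannot inspect the real world; it simply believes whatever the oracle reports, so a false report triggers execution exactly as readily as a true one.

```python
from typing import Callable

def flight_insurance(oracle: Callable[[], bool], premium: int, payout: int) -> str:
    """Toy parametric insurance contract: pays out if, and only if,
    the oracle reports that the insured flight was delayed."""
    if oracle():                 # the off-chain event is 'verified' only via the oracle
        return f'pay {payout} to insured'
    return f'keep {premium} as premium'

honest_oracle = lambda: True     # reports a delay that actually happened
corrupt_oracle = lambda: True    # reports a delay that never happened

# The code executes identically in both cases: 'trustless' execution is
# only as trustworthy as the external data source feeding it.
print(flight_insurance(honest_oracle, premium=10, payout=100))
print(flight_insurance(corrupt_oracle, premium=10, payout=100))
```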

B.  Decentralised Autonomous Organisations (DAOs)

Smart contracts are able to interact not only with humans but also with other smart contracts on the blockchain. DAOs represent the most complex form of blockchain-based smart contracts, which can be set up to perform different types of functions, transactions and interactions with multiple parties, including humans and machines. In theory, the activities of a DAO are entirely governed by a transparent, incorruptible, and self-enforcing set of pre-programmed protocols, implemented through smart contracts. DAOs may be deployed for a range of goals and purposes, which are determined by the members or participants in a DAO. To date, DAOs have been used for crowdfunding,19 coordinating grants to fund

15 E Mik, 'The Resilience of Contract Law in Light of Technological Change' in M Furmston (ed), The Future of the Law of Contract (Routledge, 2020) ch 7.
16 De Filippi and Wright (n 9) 209–210.
17 M Zou, G Cheng and MS Heredia, 'In Code We Trust? Trustlessness and smart contracts' (Computers and Law, 1 April 2019) at www.scl.org/articles/10493-in-code-we-trust-trustlessness-and-smart-contracts.
18 K Werbach, Blockchain and the New Architecture of Trust (MIT Press, 2019) 31.
19 See, eg, The DAO, as examined below.

development projects,20 and building an automated company.21 DAOs generally have some kind of token mechanism, and members of a DAO obtain network tokens by injecting funds into the organisation. Unlike traditional governance structures, DAOs are autonomous since there is no single person, group or governing body with the authority to make and enforce decisions. Smart contracts automatically execute the decisions of a DAO's membership through consensus mechanisms, which are predefined in the organisational protocol. This autonomous decision-making feature of DAOs has raised important questions about their compatibility with the concept of legal personhood (although it is beyond the scope of this chapter to delve into this issue). For instance, who is legally responsible for the actions of a DAO, including contractual relationships that it may have entered into? There have been recent attempts at creating 'legal wrappers' around DAOs. Initiatives such as OpenLaw seek to 'fully effectuate the rights, obligations, and legal status desired by all stakeholders' through the creation of 'hybrid organisations'.22 OpenLaw proposes deploying 'limited liability wrappers' for DAOs, which entails extending 'on-chain' settlement to off-chain disputes among members and providing a corporate veil over a DAO's business activities to protect its members from joint and several liability.23 It remains to be seen whether such 'legal wrappers' will be effective and how lawmakers, regulators and courts may respond. Notwithstanding recent progress, the evolution of DAOs so far has been set back by the infamous hacking of The DAO in 2016. Designed as a system of smart contracts built on the Ethereum network, The DAO functioned as an investor-directed virtual venture capital fund. Funds were raised from its members through the purchase of tokens with Ether, which gave members the right to vote. Members (or token holders) would decide on the proposed projects that The DAO would invest in. Within a decentralised structure of governance, the decisions concerning the distribution and management of the fund were made directly by its members through consensus and not by any managers. The decisions were implemented by smart contracts. The source code underlying the predefined set of protocols was visible to all. The DAO went live in April 2016 and, within a few weeks, raised the equivalent of US$150 million in an exceptionally successful crowdfunding campaign. However, in June 2016, The DAO was attacked through the exploitation of a critical security vulnerability in its code. The attacker was able to repeatedly abuse a 'split' function (which would normally be triggered by members who disagreed with a proposal approved by the majority and wanted to retrieve their funds) to

20 See, eg, MolochDAO, which 'awards grants to advance the Ethereum ecosystem', at www.molochdao.com.
21 Eg, Aragon Client enables anyone in the world to create a DAO on Ethereum, with several customisable governance templates for commercial and non-profit purposes, at client.aragon.org.
22 OpenLaw, 'The Era of Legally Compliant DAOs' (OpenLaw, 26 June 2020) medium.com/@OpenLawOfficial/the-era-of-legally-compliant-daos-491edf88fed0.
23 ibid.

siphon more than a third of its funds.24 The DAO attack itself was entirely legitimate according to the rules of the smart contract code. The inherently immutable nature of blockchain made the theft theoretically irreversible. It was only through the 'centralised' leadership of a number of core developers in the Ethereum ecosystem that the siphoned funds were retrieved, via the creation of a 'hard fork' on the Ethereum network. This option was ultimately adopted by the majority of the token holders. In effect, the hard fork retroactively reversed previous transactions, including those related to the siphoning attack. This enabled the siphoned funds to be promptly returned to the investors. The promise of DAOs' seemingly decentralised, 'trustless' architecture enabled by smart contracts was greatly undermined by The DAO attack. This was not only because of the security vulnerabilities that gave rise to the attack but also because of the hard fork response, which exposed the limits of blockchain's distinctive characteristics such as immutability. The response to the attack also revealed the significant power of a core group of developers within the blockchain ecosystem. The hard fork response caused much controversy in the Ethereum community. Some argued that adopting this option fundamentally undermined the integrity of the Ethereum blockchain.25 The new blockchain created by the hard fork retained the name Ethereum. Some members decided to remain on the original chain, which was renamed Ethereum Classic.
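The mechanics of the exploit repay a brief technical aside. The vulnerable 'split' logic paid funds out before updating the caller's recorded balance, so a malicious recipient could re-enter the withdrawal routine and draw repeatedly against the same stake. The following is a loose Python analogue of that re-entrancy pattern, not the actual Solidity code (the real exploit turned on Ethereum's fallback-function semantics):

```python
from typing import Callable

class VulnerableFund:
    """Loose analogue of The DAO's flawed withdrawal logic: funds are
    paid out *before* the member's balance is zeroed, so a malicious
    recipient can re-enter and withdraw against the same balance."""

    def __init__(self, balances: dict):
        self.balances = dict(balances)
        self.pool = sum(balances.values())

    def withdraw(self, member: str, receive: Callable[[], None]) -> None:
        amount = self.balances.get(member, 0)
        if amount and self.pool >= amount:
            self.pool -= amount          # pay out first...
            receive()                    # ...handing control to the recipient...
            self.balances[member] = 0    # ...and only then zero the balance

fund = VulnerableFund({'attacker': 100, 'others': 900})

calls = 0
def malicious_receive() -> None:
    # Re-enter withdraw() while the attacker's balance is still recorded.
    global calls
    calls += 1
    if calls < 4:
        fund.withdraw('attacker', malicious_receive)

fund.withdraw('attacker', malicious_receive)
print(fund.pool)   # 600: 400 drained against a recorded stake of only 100
```

The remedy adopted in practice, updating internal state before transferring funds (the 'checks-effects-interactions' pattern), is a one-line reordering; the attack succeeded because the code, not the parties' intentions, governed.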

C.  Making Smart Contracts 'Smarter' with AI?

In recent years, there have been increasing efforts to merge AI and blockchain technology. More advanced forms of AI, such as machine learning, are being used to process large data sets for a wide range of functions, from identifying and analysing patterns to making predictions and decisions. Blockchain technologies can be deployed alongside AI to verify this process, offering the benefits of immutability and security. Advocates of this convergence have heralded blockchain as a solution for 'deliver(ing) the trust and confidence often needed for end users to fully adopt and rely on AI-based business processes'.26 Big players like IBM have already rolled out integrated AI and blockchain solutions. For example, IBM's crypto anchor verifier utilises AI to establish the identity and authenticity of objects (such as gems) and records their authentication on the blockchain for the purpose of verification throughout the supply chain.27 Equally, a fast-growing number of start-ups are actively innovating in this space and

24 V Buterin, 'Critical Update Re: DAO Vulnerability' (Ethereum Blog, 17 June 2016) at blog.ethereum.org/2016/06/17/critical-update-re-dao-vulnerability.
25 'DAO hard-fork voting on ethpool & ethermine' (Ethereum Community Forum, July 2016) at forum.ethereum.org/discussion/8382/dao-hard-fork-voting-on-ethpool-ethermine.
26 J Cuomo, 'How blockchain adds trust to AI and IoT' (IBM Blockchain Blog, 5 August 2020) at www.ibm.com/blogs/blockchain/2020/08/how-blockchain-adds-trust-to-ai-and-iot.
27 IBM, IBM Crypto Anchor Verifier at www.ibm.com/products/verifier.

developing AI-blockchain solutions for organising massive databases, supporting smart city infrastructure, enhancing cybersecurity protocols, automating business processes and upgrading business infrastructure.28 Some scholars have argued that 'blockchains may serve as an interoperable layer for AI or algorithmic systems to interact and potentially even coordinate themselves with other code-based systems through a set of smart contracts acting as a DAO'.29 Some have referred to smart contracts becoming 'smart' or 'smarter' as AI is integrated into the operation of blockchain-based transacting systems. According to this claim, AI functionalities can enhance the capability of smart contracts to implement a wider range of real-world applications on the blockchain.30 Moreover, machine learning enables software to acquire knowledge from external sources of data and perform tasks that have not been pre-programmed. As such, machine learning models can potentially introduce more 'dynamic and adaptive code-based rules' to smart contracts by 'replicating some of the characteristics of traditional legal rules characterized by the flexibility and ambiguity of natural language'.31 While there is a broad and ever-expanding range of projects integrating smart contracts with AI, three examples of recent developments are presented here to illustrate how technologists are trying to make smart contracts 'smarter'. First, AI is being deployed to identify and detect security flaws and vulnerabilities in smart contracts. As The DAO attack demonstrated, the integrity and 'trustlessness' of the entire Ethereum ecosystem can be undermined when hackers exploit security loopholes in smart contract protocols. Vulnerabilities in smart contracts include 'exception disorder', 'gasless send', timestamp dependency, transaction-ordering (block number) dependency, integer overflow/underflow, and re-entrancy, among others.32 Existing methods of spotting these vulnerabilities are said to be heavily reliant on centralised, expert-defined hard rules, which limit the accuracy, flexibility, and scalability of detection.33 There are a number of emerging projects that deploy machine learning methods and techniques to detect security vulnerabilities in real-world smart contracts, aiming to improve the efficiency and accuracy of large-scale and automated vulnerability detection.34

28 See, eg, S Daley, 'Tastier Coffee, Hurricane Prediction and Fighting the Opioid Crisis: 31 Ways Blockchain & AI Make a Powerful Pair' (Builtin, 6 April 2020) at builtin.com/artificial-intelligence/blockchain-ai-examples.
29 De Filippi and Wright (n 9) 148.
30 'AI Smart Contracts – The Past, Present, and Future' (Hackernoon, 19 November 2018) at hackernoon.com/ai-smart-contracts-the-past-present-and-future-625d3416807b.
31 S Hassan and P De Filippi, 'The Expansion of Algorithmic Governance: From Code is Law to Law is Code' (2017) 17 Field Actions Science Reports 88.
32 N Atzei, M Bartoletti and T Cimoli, 'A Survey of Attacks on Ethereum Smart Contracts' (Proceedings of the 6th International Conference on Principles of Security and Trust, vol 10204, 4 April 2017) 164–186.
33 Y Zhuang, Z Liu, P Qian, Q Liu, X Wang and Q He, 'Smart Contract Vulnerability Detection Using Graph Neural Networks' (Proceedings of the 29th International Joint Conference on Artificial Intelligence, 2020) 3283–3290.
34 ibid; W Wang, J Song, G Xu, Y Li, H Wang and C Su, ‘ContractWard: Automated Vulnerability Detection Models for Ethereum Smart Contracts’ (IEEE Transactions on Network Science and Engineering, 23 January 2020) at doi:10.1109/TNSE.2020.2968505.
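To give a schematic sense of how such learning-based vulnerability detection works (a toy illustration only, with invented features and data, not a depiction of any of the systems cited above), contract-level features can be extracted from source code or bytecode and fed to a standard classifier trained on expert-labelled examples:

```python
from sklearn.ensemble import RandomForestClassifier

# Toy feature vectors for smart contracts: each row might encode, say,
# counts of external calls, state writes occurring after a call, and
# arithmetic operations without overflow checks. The data is invented
# purely for illustration; real systems learn from large corpora of
# audited contracts.
X_train = [
    [3, 2, 0],   # many external calls, state written after call
    [0, 0, 4],   # unchecked arithmetic only
    [1, 0, 0],   # benign pattern
    [0, 0, 0],   # benign pattern
]
y_train = [1, 1, 0, 0]   # 1 = vulnerable, 0 = safe (expert-labelled)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Classify a new, unseen contract's features.
candidate = [[2, 1, 1]]
print('vulnerable' if clf.predict(candidate)[0] == 1 else 'safe')
```

The appeal of this approach over hand-written rules is that the decision boundary is learned from labelled examples and can, in principle, generalise to vulnerability patterns that no expert has explicitly encoded.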

A second area of development concerns the use of AI in relation to third-party oracles, which allow smart contracts to access data from external systems. Such off-chain data may be necessary for triggering the conditions of a smart contract transaction by verifying the occurrence of real-world events. As mentioned earlier, the trustworthiness and reliability of the oracle are critical. In one academic project seeking to develop AI-based oracles for smart contracts, researchers are experimenting with the use of machine learning algorithms for classification and clustering tasks to analyse and verify event knowledge datasets that are open and publicly auditable, particularly news content.35 Another project, by a start-up, attempts to store and validate AI model inferences directly on-chain (while the key-value storage of the AI models is off-chain) so that the inference results no longer come from a third-party oracle.36 Finally, the convergence of AI and smart contracts may see the further evolution of the Internet of Things (IoT), whereby billions of devices connected to the Internet establish an interface between the digital and physical worlds. In IoT networks, smart contracts may be used for the coordination and authorisation of a wide range of transactions and interactions by autonomous, intelligent devices enabled by AI. For instance, autonomous vehicles could find customers for rides, pay for their fuel, recharging or parking, and check in for service and safety checks on their own, undertaking such tasks in accordance with the smart contracts. If the smart contracts were set up as a DAO, profits from the rides might be distributed among the members of the organisation according to the protocol. Slock.it, the German start-up that created The DAO, was originally developing a project to enable 'smart' locks to be connected to smart contracts on the Ethereum blockchain, enabling users to rent, sell or share anything that can be locked, without a third-party intermediary.37 De Filippi and Wright have postulated that:

[A]ssuming that blockchain technology and smart contracts continue to advance in sophistication, blockchains could help facilitate device emancipation. Because blockchains enable autonomous code, they could underpin new systems of code-based rules – lex cryptographia – enabling autonomous machines and devices to operate independently of their manufacturer or owner.38

For De Filippi and Wright, the concept of lex cryptographia is used to describe blockchain as a system of algorithmic control that entails 'order without law'39 in

35 'Websensors + iExec (Part 1) Artificial intelligence-based oracles for smart contracts' (Websensors) at websensors.net.br/iexec; Marcacini (n 6).
36 Cortex is a project that aims to create a decentralised, open-source AI platform that 'provide[s] the best in class machine learning models on the blockchain, enabling users to infer smart contracts on the Cortex blockchain'. Users submit tasks on the Cortex platform, submit AI models, make inferences with smart contracts, and create their own AI Decentralised Applications. See www.cortexlabs.ai.
37 G Prisco, 'Slock.It To Introduce Smart Locks Linked To Smart Ethereum Contracts, Decentralize The Sharing Economy' (Bitcoin Magazine, 5 November 2015) at bitcoinmagazine.com/articles/slock-it-to-introduce-smart-locks-linked-to-smart-ethereum-contracts-decentralize-the-sharing-economy-1446746719.
38 De Filippi and Wright (n 9) 166.
39 ibid 5.

its architectural design, 'implementing their own system of rules … enforced by the underlying protocol and smart contracts'.40 It can also be described as 'a body of rules created by, coded in, and enforced by quasi-autonomous technological systems enabled by blockchain, which exists independently from state-created legal rules'.41 If the convergence of AI and blockchain-enabled smart contracts can potentially lead to hyper-autonomous contracting systems, an important question is how such systems, underpinned by lex cryptographia, will interact with other modes of regulation such as the law, markets, and social norms in governing contractual behaviour. The focus of this chapter is on the role of social norms in real-world contracting, which I explore in the next section.

III.  The 'Rule of Code'? Insights from Relational Contract Scholarship

A.  In Code We Trust?

Trust underlies many different types of social and economic interactions, including those between complete strangers. This raises the critical question of where trust comes from. The idea of 'trustlessness' associated with blockchain is not about the elimination of trust, but a shift away from other traditional foundations of trust.42 Compared to previous modes of trust, namely centralised/state-based 'Leviathan' trust, peer-to-peer trust, and intermediary trust, Werbach argues that blockchain represents a new type of trust. One can trust the ledger without needing to trust any actors in the system to validate the transactions. The very idea behind The DAO was that blockchain-based smart contracts 'took the place of law, intermediaries, and personal relationships as the foundation for trust'.43 For many participants in the blockchain ecosystem, trust is placed in the architecture of the blockchain. This can be gleaned from the statement of the Slock.it founder, the creator of the initial code for The DAO, who lauded the consensus mechanism that led to the hard fork response to The DAO attack:

Separate from the discussion of whether a hard fork because of the DAO is a good or a bad idea, the very fact, that the Ethereum community (devs, miners, exchanges, researchers, …) has come together, often setting personal opinions aside, and successfully managed a hard fork in this situation is truly remarkable. Given the time constraint, the fact that we were able to come to consensus on this matter is an outstanding accomplishment.



40 ibid 50.
41 Zou (n 11).
42 Zou, Heredia-Soratia and Cheng (n 17).
43 Werbach (n 18) 67.

Although some do question the analogy 'code is law'. I do not. We just found out that we have a supreme court, the community!44

The above statement has raised eyebrows,45 especially given the considerable controversy that the hard fork response caused within the Ethereum community, which ultimately resulted in its split (as detailed earlier). Moreover, it was the vulnerability in the code itself that gave rise to the incident. The 'architecture' of code, like its creators, is thus not neutral. As Werbach puts it:

In the technology world, many prefer to ignore the ways that software architecture grants the authority to shape behaviour. The power of courts and regulatory agencies is easy to see; that of code and its masters, less so. Yet both are powerful regulators. Poorly designed code can be as harmful as poorly designed law.46

While it is possible that AI can help to improve the detection of vulnerabilities in smart contracts, users must ultimately place their trust in a core group of coders and developers who design and build the architecture of hyper-autonomous contracting systems. Yet there is inevitably a power imbalance between users and these 'masters' of the code, which may over time lead to distrust in the system. Vulnerabilities in the code, especially if they go undetected and result in incidents like The DAO attack, as well as the responses and actions of the core group of coders and developers, can exacerbate such distrust.47 This is analogous to the way we place trust in the key institutions that make and enforce the law. Law has also been viewed in political, economic, and social theory as crucial 'for the establishment of those bonds of trust that underpin a market economy', in the sense that 'the presence of laws against betrayal and disappointment, enforced by courts armed with powerful state sanctions, enables trust to become widespread.'48 As Collins explains, 'The regulation of contractual behaviour by courts as agencies of the state provides the system of rules and sanctions, which enables the necessary degree of trust between strangers to be constructed.'49 As such, the rule of law has been widely heralded as a vital mechanism of social and economic order, with the law of contract playing a pivotal role in the legal system of a market economy.50 In much of the debate on the regulatory implications of new technologies like AI and blockchain thus far, the rule of law is frequently pitched against the 'rule of code'. De Filippi and Wright's thoughtful thesis on lex cryptographia reflects

44 C Jentzsch, 'What an accomplishment!' (Slock It, 20 July 2016) at blog.slock.it/what-an-accomplishment-3e7ddea8b91d#.dj397q5yn.
45 F Coppola, 'A Painful Lesson for the Ethereum Community', Forbes (21 July 2016) at www.forbes.com/sites/francescoppola/2016/07/21/a-painful-lesson-for-the-ethereum-community.
46 Werbach (n 18) 233.
47 Zou, Heredia-Soratia, Cheng (n 17).
48 H Collins, Regulating Contracts (Oxford University Press, 1999) 4.
49 ibid.
50 This view is reflected in much of the institutional economics literature regarding the role of law in economic development, which is to offer secure and stable private property and contract rights. See, eg, DC North, Institutions, Institutional Change and Economic Performance (Cambridge University Press, 2002); S Knack and P Keefer, 'Institutions and Economic Performance: Cross-Country Tests Using Alternative Institutional Indicators' (1995) 7 Economics and Politics 207. Overall, this school of thought significantly influenced various policies related to developing economies that were adopted by the World Bank, IMF, and other international institutions in the 1990s.

this preoccupation. Indeed, the architecture of hyper-autonomous contracting systems driven by the convergence of AI and blockchain-based smart contracts will present challenges to an array of legal principles that regulate contractual relations, as well as to the institutions that enforce this system of legal rules and sanctions. For instance, courts may find that what the parties had actually contracted for was different to the terms of the smart contract. The failure of the smart contract to realise the intention of the parties may be caused by a range of malfunctions in the code, but also by the limitations of transposing the terms of a legal contract into the language of code. As discussed later on, some contractual provisions may be intended by the parties to be applied flexibly or not to be enforced at all. Importantly, the regulation of different actors' behaviour in hyper-autonomous AI and blockchain-based contracting systems will also be influenced by social norms and the market. Here, one needs to assess empirically the day-to-day contractual behaviour in business transactions and market exchange. In practice, the law or the code may not necessarily be the most important source of trust (or indeed 'trustlessness') underpinning traditional legal contracts or smart contracts. This is where relational contract scholarship can offer some new insights on the regulatory challenges of hyper-autonomous contracting systems by providing an essential contextualisation of contracting practices and norms. There are a number of relational contract theories developed over the years, as well as various interpretations of such theories. A common thread of these theories is the view of contracts as relationally constituted rather than as merely discrete exchanges.51 As the preeminent relational theorist Ian Macneil argues, an understanding of contracts as a social phenomenon recognises that contracts are embedded in social structures and norms. This may be contrasted with classical theories of contract law that focus on the express contractual agreement instead of the broader context of the parties' exchange.52

B.  Non-contractual Mechanisms in Contracting Practices

In his seminal work in 1963, Stewart Macaulay studied what he called 'non-contractual relations' in business exchanges.53 For Macaulay, a contract was a 'device for conducting exchanges' with two elements: first, rational planning of

51 IR Macneil, 'Relational Contract: What We Do and Do Not Know' [1985] Wisconsin Law Review 483; J Braucher, J Kidwell and WC Whitford (eds), Revisiting the Contracts Scholarship of Stewart Macaulay: On the Empirical and the Lyrical (Hart, 2013).
52 IR Macneil, 'Relational Contract Theory: Challenges and Queries' (2000) 94 Northwestern University Law Review 877.
53 S Macaulay, 'Non-contractual relations in business' (1963) 28 American Sociological Review 45.

the transaction so as to provide for foreseeable future contingencies, and second, legal sanctions to induce performance of the transaction or compensate for non-performance. Based on an empirical study of business exchanges in the manufacturing industry, he found that non-contractual mechanisms such as social norms and customs played a more important role in ordering market exchanges than formal contracts. Parties commonly entered into contracts for reasons other than legal enforceability and the legal sanctions it entails for non-performance. Macaulay also found that once a contract was entered into, the adjustment of the business relationship and the settlement of disputes between the parties seldom involved any contractual mechanisms or other legal sanctions. Moreover, lawsuits for contractual breach were a rarity.54 Macaulay later conducted an empirical study of how the 'paper-deal' (what is written in the contract) often differs from the 'real-deal' in contractual relationships.55 He found that in many instances, the parties' subjective understanding of their agreement did not align with the written text of their contract. Other contracting practices that were observed included parties assenting to terms and entering into agreements that they had not read or would not have understood even if they had read them. Parties also frequently provided oral assurances that contradicted or varied the written provisions in the contract. Importantly, what Macaulay found was that many businesspeople chose to rely upon relational norms behind the contract, including past dealings between the parties and industry customs, rather than opt for strict enforcement of the contractual terms if problems arose. Hugh Collins further builds on the rich empirical research on contractual practices spurred by Macaulay's work.56 Collins questions the conventional view that the law of contract is a key component for establishing the bonds of trust that underpin a market economy. This conventional view also posits that courts (as agencies of the state) regulate contractual behaviour through a system of rules and sanctions, thus enabling trust between strangers in undertaking transactions in the marketplace.57 However, Collins argues that real-world contracting consists of three components. The first is the economic deal underlying a particular transaction. Second, there is the relationship between the parties. The third is the formal contract itself, which embodies the parties' legal rights and obligations. According to Collins, when parties enter into a contract, their behaviour is not usually guided by a legal framework. Instead, their priority may be to sustain the business relationship and ensure that the deal is successful.58 Collins noted that firms used contracts as a form of insurance in case the business relationship broke down.59

54 ibid 56.
55 S Macaulay, 'The Real Deal and the Paper Deal: Empirical Pictures of Relationships, Complexity and the Urge for Transparent Simple Rules' (2003) 66 Modern Law Review 44.
56 Collins (n 48).
57 ibid 3–4.
58 ibid 127–48.
59 ibid 256–86.

Karen Levy is among the few scholars who have provided a detailed analysis of the limitations of smart contracts based on an understanding of the social settings of real-world contracting. Levy argues that the enthusiasm over smart contracts' features of self-execution and automated enforcement ignores the social norms and relational contexts surrounding contracting practice.60 Like Macaulay and Collins, she considers contracting practices whereby parties deliberately include prima facie unenforceable or vague terms, or intentionally ignore contractual breaches. Levy considers the function of contracts as 'social resources', which are used by people to manage their relationships. As she puts it, 'Contracts are deeply social tools as well as legal ones.'61 Levy presents three types of contracting practices to illustrate this argument. First, parties may draft and accede to contractual terms that they know or suspect to be unenforceable before a court, such as clauses that are excessively broad or invalid for a variety of reasons. As Levy explains, the main purpose of doing so is to set norms and expectations for each other's future behaviour, including behaviour that is beyond what is actually legally proscribed or that is outside the reach of the formal contract.62 She argues that these legally unenforceable clauses can serve a social function that is beneficial to either or both parties. Second, parties may deliberately draft and accede to vague or open-ended terms, especially in long-term relationships.63 One common example is a duty on either or both parties to exercise their 'best endeavours' to perform the contract. The deliberate inclusion of such terms in the contract can reflect various reasons and motivations of the parties concerned. It may be because the inclusion of more specific and defined terms would be unreasonably burdensome if the circumstances of the parties' agreement are likely to change in the near future. Parties may also prefer to conduct part of the transaction 'unofficially'. In Macaulay's study, this practice of vague or underspecified terms was often linked to the parties' anticipation or hope that their relationship would be a 'cooperative venture'.64 Detailed and carefully negotiated contractual terms may obstruct progress in an exchange relationship and undermine the parties' mutual trust. Third, parties may decide not to enforce their agreement, such as by overlooking breaches of contractual obligations by the other party.65 In some cases, non-enforcement may be part of a behavioural strategy to manage institutional and interpersonal relationships. Levy refers to the dynamic of 'bargaining in the shadow of the law'.66 Instead of pursuing formal enforcement of the contract, such

60 KEC Levy, 'Book-Smart, Not Street-Smart: Blockchain-Based Smart Contracts and the Social Workings of Law' (2017) 3 Engaging Science, Technology, and Society 1.
61 ibid 10–11.
62 ibid 6.
63 ibid 7–9.
64 Macaulay (n 53) 64.
65 Levy (n 60) 9–10.
66 ibid 9.

as through litigation, the possibility or threat of legal enforcement can create a 'backdrop of social expectations against which extralegal negotiations' between the parties can take place.67 In sum, Levy's argument is that the social aims underpinning these practices include setting norms for future behaviour, facilitating stable and flexible long-term relations, and providing a strategic resource for bargaining in the shadow of the law. Enthusiasm for the 'transformative' potential of smart contracts based on their 'careful prespecification of terms and automated enforcement of obligations'68 ignores the social complexities of contracting practices. Parties to a range of transactions may not be seeking 'frictionless' self-execution of their agreements if there is a desire on the part of either or both parties to maintain flexibility (especially if future circumstances may change), preserve an ongoing and stable relationship, or gesture towards particular social or behavioural norms.

C.  Recontextualisation of Autonomous Contracting

In this chapter, I propose an understanding of 'autonomous' AI-enabled smart contracting systems that considers four intersecting forces, which have varying degrees of importance in each transaction: the parties' relationship (social norms), the economic deal (market), the law, and code.69 These forces can be viewed as coexisting mechanisms that support and maintain the necessary trust between people and create and sustain commercial relations in a market economy. However, as Lessig has pointed out, one modality may support or undermine another, and changes in one will affect the regulation of the whole. For example, the precision and rigidity of code may render it very difficult for the 'real-deal' underpinning the transaction to be enforced. Parties would need to represent in the smart contract, ex ante, the 'real-deal' and all possible outcomes of the transaction. For many types of commercial transactions, this may be a challenging if not impossible feat. It may also be undesirable, considering the potential for undermining the long-term relationship of the parties. In longer-term relationships, the limitations of human foresight and the greater potential for changes in the circumstances of such relationships often mean that higher levels of interpersonal trust between the parties are required.70 Take a 10-year commercial lease as an example. Due to the adverse impacts of the COVID-19 pandemic on her business, the tenant (who is five years into the lease) finds herself unable to pay the agreed rent on time during this period. The cost of renegotiating the existing contract or entering into a new contract with third parties (ie the landlord looking for a new tenant or the tenant finding a

67 ibid 10.
68 ibid 10.
69 See Zou (n 11).
70 Zou, Heredia-Soratia, Cheng (n 17).

more affordable option) is too high for either or both parties. The landlord and tenant may wish to maintain an ongoing, stable relationship. Accordingly, the parties decide to have an arrangement under which the tenant is allowed to pay the rent late, or the rent is reduced or even forgone altogether for a few months. Here, the landlord has intentionally chosen not to enforce the contractual term for a variety of reasons, including the costs of requiring performance or enforcement, the desire to maintain goodwill and preserve the relationship, and even the fortification of mutual trust.71 In the above example, a smart contract or DAO set up to execute the rental payment on the fulfilment of predefined conditions (eg payment is due on the first of each month) would not be able to offer the parties such flexibility. On the other hand, spot contracts for more straightforward types of financial transactions would be more obvious candidates to benefit from smart contracting. Nevertheless, it is important to consider the social context in which all types of contracting take place. For example, are the parties to the spot contract already in an existing business relationship with each other that goes beyond the immediate transaction, or do they wish to form such a relationship? If we consider the argument from relational contract theory that formal contractual terms often do not reflect the parties' subjective perception of their agreement, then code poses an even greater challenge. As Sally Wheeler argues, 'Blockchain is designed to be rigid and inflexible, requiring no interpretative intervention because, at base, it is a computer code that scripts, literally, the relationship between many parties in a network.'72 Taking into account the context of real-world exchanges, hyper-autonomous contracting can reduce the scope for flexibility and adaptability in giving effect to the intentions of the parties. While the promise of certainty and predictability offered by code is appealing, we need to find ways in which the architecture of AI-enabled smart contracts can respond to the 'real-deals' that people enter into on a day-to-day basis, recognising the role of relationality in business transactions, which is unlikely to go away. Collins has suggested that for contract law to maintain its utility, it should be reconfigured to incorporate lessons from the sociological and economic discourse of transacting in the marketplace.73 He argues:

For Collins, this ‘recontextualisation’ of contract entails ‘open-textured rules and interpretative strategies’ instead of a formal set of legal entitlements. He argues

71 ibid. 72 S

Wheeler, ‘Visions of Contract’ (2017) 44 Journal of Law and Society S74. (n 56) 359.

73 Collins

When AI Meets Smart Contracts  57 that this is ‘vital if the law is to provide effective support for the establishment of the necessary trust and sanctions on which contracts and markets rest’.74 In a similar vein, I argue that there should be a ‘recontexutalisation’ of smart contracts, which would enable hyper-autonomous contracting systems to reach their potential. To do so, we need to go beyond understanding smart contracts and AI contracting systems as merely ‘technical artefacts’ that are devoid of broader social and relational contexts. Instead, as Levy suggests, smart contracts should be viewed as ‘social resources’ that can be used by people to manage their relations, like traditional contracts.75 As she puts it, Contracting … is a deeply social practice in which parties engage for all sorts of purposes, and the effects of contract negotiation reverberate outside of the ‘four corners’ of a formal agreement, both in time and space … contracts serve many functions that are not explicitly legal in nature, or even designed to be formally enforced.76

If one takes into account the fuller picture of the real-world functions of contracts and contract law in market transactions as well as ‘thinking carefully about the features of the social setting in which smart contracts are permitted to operate’.77 The ‘rule of code’ or lex cryptographica is unlikely to replace other modes of regulation, even in hyper-autonomous contracting systems. As highlighted in this chapter, the promise of ‘trustlessness’ in blockchain-based smart contracts may not be realised simply because the technology self-executes the parties’ obligations in code. Trust in code cannot replace other sources of trust in business dealings, including interpersonal trust as well as institutional trust. Like social norms and market forces, the law will remain a necessary mode of regulation in the context of hyper-autonomous contracting. For instance, the law can serve as an important remedial institution that functions as ‘insurance’ for parties to adjudicate grievances that may arise ex post, even when smart contracts have fully executed the agreement. The law will be used by parties where and when it is useful. For example, two of the most well-known blockchain companies, Ripple and R3 Consortium, still used ‘traditional’ contracts for a deal in 2016 that gave R3 the option to purchase Ripple’s cryptocurrency XRP in return for promoting Ripple’s technology to R3’s network of banks. Instead of specifying the number of banks to be introduced by R3, the technology promotion contract simply contained a clause that required both parties to act in good faith. In 2017, XRP’s value unexpectedly jumped 20 times its original value (when the parties entered into the deal). Ripple sought to unilaterally terminate the options contract, alleging R3 had not performed its obligations under the parallel technology promotion agreement and that R3 had ‘misrepresented their ability and intent



74 ibid 357.
75 Levy (n 60) 2.
76 ibid.
77 ibid 11.

to deliver on their commitments'.78 An ensuing lawsuit and counter-lawsuit were filed in Delaware and California courts respectively by R3 and Ripple. The parties ultimately settled a year later.79 It is hard to envisage how hyper-autonomous smart contracting would have operated in respect of R3 and Ripple's agreements, which reflect the complexities of real-world contracting practices highlighted earlier, such as the often intentional inclusion of vague or unspecified contractual terms. Moreover, Ripple's claim of misrepresentation in this case also illustrates another challenge posed by hyper-autonomous contracting in relation to defects in the parties' consent or capacity. The law will still play an important role in such instances to 'correct' the outcome of the self-executed code.

IV. Conclusion

The integration of AI with blockchain-based smart contracts is a rapidly emerging innovation. This technological convergence has the potential to transform a range of business and commercial transactions through the creation of hyper-autonomous contracting systems that allow 'frictionless' and 'trustless' automated transactions between and among humans and machines. Nevertheless, the hype associated with such systems must be carefully scrutinised. Extending Lessig's theory of regulatory modalities (market forces, social norms, law, and code), this chapter has situated hyper-autonomous contracting based on AI-enabled smart contracts within a relational analysis of real-world contracting practices. Going beyond the debate on law versus code, insights from relational contract theories help to highlight the importance of the social context in which contracting practices take place, including the complexities of parties' contractual behaviour. Such insights point to a promising future research direction that emphasises the need for empirical studies of real-world practices of hyper-autonomous contracting in diverse types of business and commercial transactions.

78 A Irrera, 'U.S. blockchain startups R3 and Ripple in legal battle', Reuters (8 September 2017) at www.reuters.com/article/us-r3-ripple-lawsuit/u-s-blockchain-startups-r3-and-ripple-in-legal-battle-idUKKCN1BJ27I.
79 D Palmer, 'R3, Ripple Settle Legal Dispute Over XRP Purchase Option' (Coindesk, 11 September 2018) at www.coindesk.com/r3-ripple-settle-legal-dispute-over-xrp-purchase-option.

4
A Philosophy of Contract Law for Artificial Intelligence: Shared Intentionality

JOHN LINARELLI*

I. Introduction

The aim of this chapter is to offer a theory of contract law that accounts for the inclusion of artificial intelligence in contract practices. It offers a new direction for a philosophy of contract law, one able to give significance to the mental states or psychological attributes increasingly relevant to contract practices, which now come in the form of the machine learning, deep learning, and neural networks associated with artificial intelligence. The objective is to produce a general theory about contracts that accommodates both human and artificial agency. The chapter identifies what makes contractual obligation distinctive, and the core concepts of contract law that are most relevant, when contracting involves the interaction of human and artificial agency. Any practical philosophy – moral, political, or legal – either starts with or presupposes a conception of the person. A philosophy of contract law must account in some way for the capacities of contract parties as more or less responsive to reasons relevant to contractual obligation. With few exceptions, this focus on the conception of the person has been missing in legal philosophy in general and in the philosophy of contract law in particular. The relevance of artificial intelligence to contracting brings this gap to our attention because, as we shall see, so much of what makes a contract a matter of legal obligation depends on what the law identifies as a particular form of intention of persons to form a contract. An intent to enter contractual relations is what makes contracting possible in most

* Associate Dean for Academic Affairs and Professor of Law, Touro University, Jacob D. Fuchsberg Law Center. I am grateful for comments on a presentation of this paper in the Contracting and Contract Law in the Age of Artificial Intelligence conference sponsored by the University of Turin Observatory on Economic Law and Innovation, the Robotics and AI Law Society, and the European Law Institute. I am also appreciative of comments at two Touro Law works-in-progress workshops. All errors are mine.

legal systems.1 Persons or other entities that cannot form, or be understood to form, an intent to contract lack the legal recognition to enter legally enforceable contracts. Intention, then, leads us directly to questions about the mental states of contract parties or their agents and how other contract parties come to recognise and accept that intent. Artificial intelligence in contract practices brings out that what makes contract law a distinctive form of legal obligation is shared intentionality. I refer to this as the shared intentionality thesis. Shared intentionality is the psychological capacity of one agent to share and pursue a joint goal with another agent. It is an attribute of human thought empowering human planning and the ability to share agency with others. To have this form of shared intention, one must have the mental capacity for a second-personal point of view, to be able to know and give due recognition to the views and interests of others, and to attribute the same point of view to others. Shared intentionality leads to a focus on objective intent to enter contractual relations in Anglo-American contract law as the primary concept in understanding what is distinctive about contractual obligation. Under the shared intentionality thesis, theories such as 'contract as promise' and 'contract as consent' reduce to a theory about intent and mental states. When we add the role of artificial intelligence in contracting, conceptualising contracts around promise or consent does little or no work on its own. These concepts cannot direct us to what is distinctive about contractual obligation. Because we are dealing with artificial intelligence, and in some future cases artificial beings with their own abilities to enter contracts independent of human agency, we must get underneath concepts like promise and consent. When artificial intelligence enters the mix of transacting, these intermediate concepts underdetermine what is going on. Section II in this chapter explains how artificial intelligence is involved in the process of contract formation today and how it may be involved in the future. The focus in this section is on contract formation because we need to understand the role of intention to create legal relations in contract. Section III explains the relevance of objective intent in contract law to developing a philosophy of contract law that accounts for artificial intelligence. An intention to create a contract is key to understanding what is distinctive about contractual obligation. It is a necessary condition for persons cooperating in the form of a contract. The doctrine of objective intent in contract law operates as a Turing test for determining whether a contract has been formed. Section III further explains that it does not matter which 'mind' produces this intent – human or artificial. The issue is the attribution of the right sort of mental state by one contract party to another. Philosophy and cognitive science can assist in developing this argument,

1 See, eg, H Beale, B Fauvarque-Cosson, J Rutgers and S Vogenauer, Contract Law Cases and Materials (Jus Commune Series) 3rd edn (Hart/Bloomsbury, 2019) 301–303. My focus is on the objective theory of contract in Anglo-American contract law.

most prominently Daniel Dennett's notion of the intentional stance,2 as well as work on 'theory of mind'3 and 'mindreading'.4 The bottom line here is understanding how folk psychology can lead humans to the sort of recognition that is needed to engage with increasing levels of artificial intelligence in contracting. What gives artificial intelligence intent is not some internal workings of its programming but us – the ascription of intentionality to artificial intelligence by humans. Doctrines about capacity to contract are ruled out as a relevant locus for this discussion. The law on capacity to contract involves a set of policy decisions, developed along social preferences or conventions, for determining when the state will or will not enforce contracts. Capacity doctrines deal mainly with exceptional cases, through rules of positive law external to whether an intent to enter contractual relations can be formed. We want to cover the central cases of contractual intent. It may come to pass that forms of artificial intelligence will not be given the capacity to contract, or will face a simple prohibition on operating as an agent for contracting purposes, but such determinations usually depend on some policy justification beyond the bounds of this chapter. The intent to enter a contract means more than the recognition of a capacity in an agent (human or artificial) to take actions directed by some end the agent has. That sort of intentionality is certainly a necessary condition for agency. But agents also must be planning agents. They need to be able to reciprocate intentions. The intentional stance on its own does not offer an adequate explanation of the special kind of intent that is needed for contract formation. It does not inform us about the sort of 'we' intentionality, or the ability to engage in future-directed intentions as elements of stable plans of action in the form of contracts. Section IV provides this argument. It argues that shared intentionality is the core of contractual obligation. It does not matter whether this shared intentionality comes from human or artificial minds. The role of intention in contract law comes in two steps. First, we must understand the role of the attribution of intentional mental states from one agent to another. This attribution is the subject of Section III. Next, we need to determine if there is a role for a psychological capacity that only humans possess: the ability to share intent, to represent and act as 'we'. The insight to explore here is whether agents can combine their future-directed plans or intentions to produce mutual obligations in the form of a contract. This chapter will go beyond what is possible for artificial intelligence currently. The theory in this chapter rests on some general assumptions about the state of machine learning today, but the goal is a theory that is sufficiently general to remain relevant as artificial intelligence changes. Of course, artificial intelligence

2 DC Dennett, The Intentional Stance (MIT Press, 1987). 3 AI Goldman, 'Theory of Mind' in E Margolis, R Samuels and S Stich (eds), Oxford Handbook of Philosophy of Cognitive Science (Oxford University Press, 2012) 402. 4 S Nichols and SP Stich, Mindreading: An Integrated Account of Pretence, Self-Awareness, and Understanding Other Minds (Oxford University Press, 2003).

will change.5 Many have tried to predict its future and the predictions will continue.6 If one is constrained to write only on what is possible right now for artificial intelligence, what is written will be out of date as soon as the ink is dry on the page. Artificial intelligence makes what we think is 'practical' for discussion wrong: it is more 'practical' to focus on what will be rather than on what is.7 This chapter offers no predictions, but it is an attempt at an account that is resilient to change as artificial intelligence develops in the future.

II.  Artificial Intelligence in Contracting: Past, Present, and Future

How is artificial intelligence involved in contracting? To answer this question, we need to look at the development of artificial intelligence, from its early days as symbolic artificial intelligence, to machine learning and big data, and finishing with some possibilities for the future of artificial intelligence. The various ways in which artificial intelligence enters into contract practices can be understood by mapping contract practices to the various stages of the development of artificial intelligence. As this analysis proceeds, I will try to identify plausible uses of artificial intelligence in contracting that either have not yet occurred or have not reached widespread use. The idea here is to lead from the technological discussion, to explore how artificial intelligence can be involved in contract practices and not necessarily how it has been involved in those practices.

A.  The Search for Agency

A common presupposition in discussions about artificial intelligence and contractual obligation is the idea that at some point a boundary is crossed at which artificial intelligence becomes an 'agent'. The reason why this boundary exists is that a common understanding of contractual obligation is that it is a form of obligation in private law that is chosen, unlike obligations in tort.8 One cannot

5 See N Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2019). 6 S Armstrong and K Sotala, ‘How We’re Predicting AI – or Failing to’, in J Romportl, E Zackova and J Kelemen (eds), Beyond Artificial Intelligence. Topics in Intelligent Engineering and Informatics, Vol 9 (Springer, 2015) 11. 7 This approach to artificial intelligence research is common. See, eg, AJ Casey and A Niblett, ‘Self-Driving Contracts’ (2017) 43 The Journal of Corporation Law 101; FP Hubbard, ‘“Do Androids Dream?”: Personhood and Intelligent Artifacts’ (2011) 83 Temple Law Review 405; LB Solum, ‘Legal Personhood for Artificial Intelligences’ (1992) 70 North Carolina Law Review 1231. 8 D Markovits, ‘Theories of the Common Law of Contracts’, in EN Zalta (ed), The Stanford Encyclopedia of Philosophy (Winter 2019 Edition), plato.stanford.edu/archives/win2019/entries/ contracts-theories (last accessed 24 June 2021).

A Philosophy of Contract Law for Artificial Intelligence  63 choose unless one has agency. An agent is a being with the capacity to act with intentionality.9 Can artificial intelligence offer the sort of agency needed to be a normatively significant actor in contract formation? In other words, how is artificial intelligence any different from any other technology we might use to facilitate the formation of a contract? Pen and paper are a technology. Why might we consider artificial intelligence to be different from these other technologies? To understand how artificial intelligence is or might be involved in contracting, we first have a working conception or ‘definition’ of artificial intelligence. No agreed-upon definition of artificial intelligence exists. This is not a problem. There is no way to ‘capture the essence’ of artificial intelligence in a definition.10 Think of artificial intelligence, rather, as a set of practices or methods associated with relevant computer and engineering domains. Stuart Russell and Russell Norvig, authors of the definitive textbook on artificial intelligence, say that artificial intelligence can be understood along four dimensions: (a) acting humanly; (b) thinking humanly; (c) thinking rationally; and (d) acting rationally.11 They focus on category (d), ‘on general principles of rational agents and on the components for constructing them’.12 The focus on action in category (d) is the right focus as trying to conceptualise whether an entity is ‘thinking’ leads to a variety of problems whose solutions do not aid us very much in understanding artificial intelligence. But category (d) is problematic for artificial agency in contract law because humans are not rational or at least not always rational and so how could perfectly rational and ‘boundedly’13 rational agents cooperate to produce a contract that is sensible or that is not putting the human side of the bargaining process in a disadvantageous position. The fix to the rationality problem is to think of rationality as an archetype or idealised notion of human thought. For our purposes, we can rely on a minimalist conception of artificial intelligence as getting ‘machines’14 to act in ways that depend on cognitive functions like learning 9 M Schlosser, ‘Agency’, in EN Zalta (ed), The Stanford Encyclopedia of Philosophy (Winter 2019 Edition), plato.stanford.edu/archives/win2019/entries/agency/ (last accessed 13 May 2021). 10 Dictionaries offer something that differs substantially from, say, the work of semantics, which is about the uncovering of the meaning of sentences. Dictionary definitions are only one kind of definition. They aim to define words with enough information to provide a language user with sufficient information to be able to use a term in sentences. An example of this distinction is at work in the difference between the definition of a word in Black’s Law Dictionary and the development of the meaning of a word by frequent interpretation over time by common law courts. 11 S Russell and P Norvig, Artificial Intelligence: A Modern Approach 3rd edn (Pearson, 2009) 1–5. 12 ibid 5. 13 I use ‘boundedly’ here in a broad sense to refer to any departures in the social and behavioural sciences from conceptions of persons as perfectly rational. This is the clear direction now of the sciences that study human thought and action. 
See L Fridman Podcast, 'Daniel Kahneman: Thinking Fast and Slow, Deep Learning, and AI', lexfridman.com/daniel-kahneman/ (last accessed 18 May 2021); T Adams, 'Daniel Kahneman: "Clearly AI is going to win. How people are going to adjust is a fascinating problem"', The Guardian, 16 May 2021. 14 NJ Nilsson, The Quest for Artificial Intelligence (Cambridge University Press, 2010) xiv (on how the computer has greatly expanded our notion of what a machine is, how software alone is often referred to as a 'machine', and how the distinction between hardware and software has become blurred).

and problem-solving, in a way that would be recognisable to humans, particularly if they are shorn of the psychology that makes them prone to depart from rational thought.15 The best description of artificial intelligence might be Nils Nilsson's, in his definitive work on the history of artificial intelligence: artificial intelligence is 'that activity devoted to making machines intelligent,' with 'intelligence' being 'that quality that enables an entity to function appropriately and with foresight in its environment'.16 Nilsson's definition works well for understanding how artificial intelligence might work in contracting – and differ from other sorts of technologies deployed in the contracting process – because of its focus on functionality and foresight, or what might be understood as intentionality in contract formation, explored in sections III and IV below. Nilsson's focus on agency is what is needed to take the discussion of artificial intelligence into the future of contracting, where an agent is understood not in its conventional legal sense as the representative of a principal but psychologically or philosophically as a system or organism with the capacity to act with intentionality.17 The ability to possess, or to be seen by others to possess, the capacity to act as an agent with intentionality is key for understanding whether artificial intelligence could have a distinctive role on its own, independently of human agency. An agent is a being with the capacity to act, with action requiring intention. Very simply, an agent is an entity that can take actions on its own, directed to some end the agent has.18 Agents take actions for reasons. These reasons can be based on beliefs, desires, and attitudes. That an agent has reasons for actions means that we ascribe intentions to the agent. Some philosophers, most notably Donald Davidson, have argued that the intentions of agents are the causes of the actions of the agent.19 But we must be cautious here: an artificial intelligence agent may be acting because it was programmed or directed to act by some external force, such as through the direction of a principal or a computer programmer. Does the agent have a reason for action in such a case? In a sense, yes. Its reasons for actions are the direction of its programming, though it might be difficult to describe programming as a 'reason'. The agent may intend to comply with the direction of another. But what if it has no choice but to comply with its programming? We do not know whether human agents face this same difficulty, or whether their 'programming' comes from the evolution of their psychologies. This brief discussion is meant to expose the difficulties in trying to know whether an agent acts with the requisite

15 SM Liao, 'A Short Introduction to the Ethics of Artificial Intelligence' in SM Liao (ed), Ethics of Artificial Intelligence (Oxford University Press, 2020) 3 ('we can broadly understand AI as getting machines to do things that require cognitive functions such as thinking, learning, and problem solving when done in intelligent beings such as humans'). 16 Nilsson (n 14) xiii. 17 Schlosser (n 9). 18 S Chopra and LF White, A Legal Theory for Autonomous Artificial Agents (University of Michigan Press, 2011) 5–28. 19 D Davidson, Essays on Actions and Events (Oxford University Press, 1980).

'aboutness', self-originating direction, or reasons independent of the influence of external actors or technologies.20

B.  GOFAI Contracts

An early form of artificial intelligence is symbolic artificial intelligence, more colloquially known as 'good old-fashioned artificial intelligence' (GOFAI).21 GOFAI is based on symbolic (human readable) representations of problem solving and the rules of formal logic. The animating idea behind GOFAI was that a physical symbol system would provide the necessary and sufficient conditions for intelligent action.22 Symbolic artificial intelligence required programming in the form of if-then statements for every step in a chain of reasoning to solve a problem. It is the realm of the logicist23 or logician, in which code is designed to produce deductive forms of reasoning for every conceivable task or problem the artificial intelligence in question is meant to do or solve. Symbolic artificial intelligence was the main form of artificial intelligence until the mid-1980s. It does not involve any machine learning. The machine or system does not learn from data. Rather, all its decisions must be explicitly programmed in advance. Logic comes first, data comes second – the very opposite of the direction of most artificial intelligence today in the post-symbolic era, for which data comes first and logic second. The most advanced forms of artificial intelligence are based on the notion of inductive inferences drawn from massive amounts of data.24 Symbolic artificial intelligence offers a comfortable zone for traditional coders, but it is now understood to be an 'old' form of artificial intelligence vastly outpaced by machine learning, which is clearly not symbolic or logicist in approach. But symbolic artificial intelligence has many uses today in contracting. One application in use today is the smart contract.25 It is well-accepted that smart contracts do not rely on artificial intelligence.26 It might be more accurate to say that some smart

20 See the discussions in Chopra and White (n 18); S Chopra and L White, 'Artificial Agents and the Contracting Problem: A Solution Via Agency Analysis' [2009] University of Illinois Journal of Law Technology & Policy 363. I have avoided any discussion of free will here as that leads us to a larger set of metaphysical questions that will add more questions than answers. The same goes for consciousness. 21 J Haugeland, 'Farewell to GOFAI?', in P Baumgartner and S Payr, Speaking Minds: Interviews with Twenty Eminent Cognitive Scientists (Princeton University Press, 1995) 101. 22 Nilsson (n 14) 331. 23 ibid 331–346; P Domingos, The Master Algorithm (Basic Books, 2015) 30, 49, 80–83. 24 Domingos (n 23). 25 'At present, the input parameters and the execution steps for a smart contract need to be specific and objective. In other words, if "x" occurs, then execute step "y". Therefore, the actual tasks that smart contracts are performing are fairly rudimentary, such as automatically moving an amount of cryptocurrency from one party's wallet to another when certain criteria are satisfied.' SD Levi and AB Lipton, 'An Introduction to Smart Contracts and Their Potential and Inherent Limitations', Harvard Law School Forum on Corporate Governance, corpgov.law.harvard.edu/2018/05/26/an-introduction-to-smart-contracts-and-their-potential-and-inherent-limitations/, 26 May 2018 (last accessed 13 May 2021). 26 See 'What are Smart Contracts on the Blockchain?', www.ibm.com/topics/smart-contracts (last accessed 13 May 2021); M Mylrea, 'AI Enabled Blockchain Smart Contracts: Cyber Resilient Energy Infrastructure and IoT', The 2018 AAAI Spring Symposium Series.

contracts rely on the most rudimentary form of symbolic artificial intelligence, but many do not. The term 'smart contract' is ambiguous. It is not a legal concept. Very simply, a smart contract is a contract for which some or all contract performance is executed and enforced digitally and without the need for human intervention except at the level of writing code to automate contract performance.27 Distributed ledger technology has substantially advanced the ability of contract parties to write and use smart contracts. The combination of the distributed ledger, the network, and the consensus mechanisms built into distributed ledger technology facilitates trust between contract parties and replaces humans in institutions operating as intermediaries.28 In short, smart contracts substitute algorithmic for human contract performance and enforcement. Symbolic artificial intelligence will probably never be able to develop to the level of 'agent', if we understand an agent to be a being with the capacity to act with intentionality. It can only be a tool for agents.29 It therefore cannot take artificial intelligence very far into contract practices. The reasons why have to do with the limits of symbolic logic. Contracts in the classical or traditional sense are understood from the perspective of mutual assent based on the shared meanings of contractual language.30 This conception of contract is now under considerable threat in the form of automated contracts of adhesion between consumers and firms that move the point of normative significance for contract formation from mutual assent to constructive notice.31 Regardless of this shift, there will have to be some use of natural language, either for purposes of assent or notice, for a

27 Various authors have offered definitions of a smart contract. Nick Szabo is credited with inventing the phrase. K Werbach and N Cornell, 'Contracts Ex Machina' (2017) 67 Duke Law Journal 102. Szabo defines a smart contract as a 'set of promises, specified in digital form, including protocols within which the parties perform on these promises'. N Szabo, 'Smart Contracts: Building Blocks for Digital Markets' (1996), available at www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart_contracts_2.html (last accessed 7 Feb 2019). Max Raskin describes smart contracts as 'agreements wherein execution is automated, usually by computers'. M Raskin, 'The Law and Legality of Smart Contracts' (2017) 1 Georgetown Law Technology Review 305, 306. Werbach and Cornell define a smart contract as an 'agreement in digital form that is self-executing and self-enforcing'. Werbach and Cornell (n 27) 108. Jeffrey Lipshaw describes a smart contract as 'simply computer code that automatically execute agreed-upon transactions'. JM Lipshaw, 'The Persistence of "Dumb" Contracts' (2019) 1 Stanford Journal of Blockchain Law and Policy 1, 4. 28 Werbach and Cornell (n 27) 118. 29 I am grateful to Mimi Zhou for this distinction. See LH Scholz, 'Algorithmic Contracts' (2017) 20 Stanford Technology Law Review 128. 30 See Restatement (Second) of Contracts ss 18–20, 205. 31 See RB Kar and MJ Radin, 'Pseudo-Contract and Shared Meaning Analysis' (2019) 132 Harvard Law Review 1135; MJ Radin, Boilerplate: The Fine Print, Vanishing Rights, and the Rule of Law (Princeton University Press, 2014). The recent debates about the new Restatement of the Law of Consumer Contracts focused on the move from mutual assent to notice as an animating principle for contract formation and enforceability. For a summary of the legal (as opposed to empirical) issues associated with this move, see M Eisenberg, 'The Proposed Restatement of Consumer Contracts, if Adopted, Would Drive a Dagger Through Consumers' Rights', Yale Journal of Law and Regulation Notice and Comment Blog, 20 Mar 2019, www.yalejreg.com/nc/the-proposed-restatement-of-consumer-contracts-if-adopted-would-drive-a-dagger-through-consumers-rights-by-melvin-eisenberg/ (last accessed 13 May 2021).

contract to be binding on a human agent. Symbolic artificial intelligence, however, focuses on the meaning of sentences, presupposing that symbols can have self-contained or concrete meanings without the need for any further information. It suffers from the classic philosophical problem of reference. Human users of language rely on the meanings of speakers, on what philosopher Paul Grice calls conversational implicature, to be distinguished from logical implicature.32 The Gricean insight is that humans rely on social contexts and cooperative norms about language to convey meanings, and not on the literal meanings of terms and sentences. For example, using the contract in the famous case of Raffles v Wichelhaus,33 if contract parties say that a shipment of cotton is to arrive 'ex Peerless from Bombay', what is necessary to understand the meaning of this sentence is a set of norms about context. No amount of focus on the meaning or logic of the sentence will solve the problem of shared meaning that is necessary for a contract to exist, at least a contract in the traditional sense. Artificial intelligence that can reliably deal with conversational implicature will not be symbolic. It will more likely be probabilistic in approach, as it will have to learn through many repeat instances the cooperative norms that humans use when they use natural language. It will be machine learning.
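The if-then character of symbolic execution can be made concrete. What follows is a minimal, hypothetical Python sketch of the kind of pre-programmed rule quoted in n 25 ('if "x" occurs, then execute step "y"'); the Escrow class, its fields, and the payment scenario are invented for illustration and do not correspond to any actual smart contract platform or distributed ledger.

```python
# A sketch of symbolic, GOFAI-style execution: every decision is an
# explicit, hand-written if-then rule, with no learning from data.

from dataclasses import dataclass

@dataclass
class Escrow:
    price: float              # agreed price, fixed in advance by the parties
    paid: float = 0.0
    asset_released: bool = False

    def record_payment(self, amount: float) -> None:
        self.paid += amount
        # The entire 'intelligence' is this pre-programmed rule:
        # if condition x (full payment) occurs, execute step y (release).
        if self.paid >= self.price:
            self.asset_released = True

escrow = Escrow(price=100.0)
escrow.record_payment(100.0)
assert escrow.asset_released  # the rule fired exactly as coded; nothing was learned
```

Every behaviour of this sketch was written down in advance by a programmer; nothing in it learns from data, which is precisely the limitation the machine learning techniques of the next section address.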

C.  Machine Learning and Contract

Machine learning is artificial intelligence that can learn and adapt without following explicit coding or instructions, by using learning algorithms and statistical methods to draw inferences from patterns in data. The result is that a learning algorithm produces another algorithm to perform a particular task or solve a particular problem. In machine learning, a computer program writes its own computer program in an iterative process through the study of large amounts of data. Machine learning is in ubiquitous use in commercial and financial contexts: to detect fraud, to trade automatically, to fill in price terms with dynamic pricing, to provide financial advisory services, to identify risks and run predictive analytics for construction, and for many other purposes.34 Perhaps most significantly, machine learning has been used to make firms, or what we know as 'merchants' in the Uniform Commercial Code,35 ever more powerful contract parties because they can exploit the use of big data to take advantage of significant information

32 HP Grice, ‘Logic and Conversation’, in P Cole and JL Morgan (eds), Syntax and Semantics (Academic Press, 1975) 3, 41. 33 [1864] EWHC Exch. J19. 34 Scholz (n 29); OECD, Artificial Intelligence in Society, 11 Jun 2019, www.oecd.org/publications/ artificial-intelligence-in-society-eedfee77-en.htm (last accessed 13 May 2021); M Ebers, ‘Regulating AI and Robotics: Ethical and Legal Challenges’ in M Ebers and SN Navarro (eds), Algorithms and Law (Cambridge University Press, 2019). 35 U.C.C. s 2-104.

asymmetries when contracting with consumers.36 In these contexts, humans use machine learning algorithms to determine whom to contract with and on what terms. For example, Amazon and Uber use dynamic pricing. Perhaps the most sophisticated machine learning ongoing right now in the contracting context is Google's ad exchange, which fills in advertising space on websites in milliseconds as the website is loading.37 Still, the framework for these contracts is human produced. Artificial intelligence can be used to assist in contract drafting.38 Here is an example that to my knowledge has yet to be developed. Choice of law clauses are in ubiquitous use in contracts. They are often coupled with an alternative dispute resolution clause mandating arbitration. They are in frequent use in contracts with consumers, usually because the merchant (or its lawyers) knows that the law of, say, South Dakota, is more favourable to credit card companies than, say, the law of California.39 This is a relatively easy call for contract planners who represent the credit card company. But what about negotiated contracts for, say, the purchase of the assets of a business, or construction contracts, or contracts that cross national borders? Artificial intelligence has much more computing power to determine which law (and courts) are best for a contract party. Lawyers often insert these clauses into contracts without much deliberation or based on intuition or 'judgment' that a particular jurisdiction has the more favourable law, but machine learning can probably inform us with a much higher degree of accuracy which is the better law and courts, if it has access to the right sort of data on the law and courts of the relevant jurisdiction. Still, humans control this assistive role in contract drafting to the point where artificial intelligence does not rise to the level of an agent in the philosophical sense in which that concept is understood here. Except in some limited and rudimentary ways, as in speech recognition technologies, artificial intelligence is not yet at a point where it can interact directly with humans using a natural language interface in any way close to what might be needed for contract formation for contracts that are negotiated, or what Karl Llewellyn called 'dickered'.40

36 See, eg, S-Ann Elvy, 'Contracting in the Age of the Internet of Things: Article 2 of the UCC and Beyond' (2016) 44 Hofstra Law Review 839. 37 D Srinivasan, 'Why Google Dominates Advertising Markets: Competition Policy Should Lean on the Principles of Financial Market Regulation' (2020) 24 Stanford Technology Law Review 55. 38 See, eg, KD Betts and KR Jaep, 'The Dawn of Fully Automated Contract Drafting: Machine Learning Breathes New Life into a Decades Old Promise' (2017) 15 Duke Law and Technology Review 216; I Ng, 'The Art of Contract Drafting in the Age of Artificial Intelligence: A Comparative Study Based on US, UK, and Austrian Law', TTLF Working Papers (2017), law.stanford.edu/publications/the-art-of-contract-drafting-in-the-age-of-artificial-intelligence-a-comparative-study-based-on-us-uk-and-austrian-law/ (last accessed 13 May 2021). For an investment treaty drafting example, see W Alschner and D Skougarevskiy, 'Can Robots Write Treaties? Using Recurrent Neural Networks to Draft International Investment Agreements', in F Bex and S Villata (eds), JURIX: Legal Knowledge and Information Systems (IOS Press, 2016) 119. 39 A Sullivan, 'How Citibank Made South Dakota the Top State in the U.S. for Business', The Atlantic, 10 July 2013. 40 KN Llewellyn, The Common Law Tradition: Deciding Appeals (Little Brown, 1960) 363–372.

The natural language interface may be less important for 'notice' type contracts that do not rely on negotiation, though the boilerplate terms still must be in a natural language for a contract to be formed.
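The choice-of-law example above can be sketched in code. The following hypothetical Python fragment illustrates the sense, noted at the start of this section, in which a learning algorithm 'produces another algorithm': a logistic regression is fitted to historical outcome data rather than hand-written rules, and the fitted model then scores a candidate jurisdiction. The feature names and all numbers are invented assumptions for illustration only; a real system would need access to the right sort of data on the law and courts of the relevant jurisdictions.

```python
# A minimal sketch: the learning algorithm (logistic regression) produces
# another algorithm (the fitted model) from data, not from coded rules.

from sklearn.linear_model import LogisticRegression

# Each row describes a past dispute under some jurisdiction's law:
# [enforces_limitation_clauses, avg_years_to_judgment, pro_creditor_precedent_rate]
# The label is 1 if the outcome favoured our client, else 0. (Invented data.)
X = [
    [1, 1.5, 0.80],
    [1, 2.0, 0.75],
    [0, 4.0, 0.40],
    [0, 3.5, 0.35],
    [1, 2.5, 0.60],
    [0, 5.0, 0.30],
]
y = [1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Score a candidate jurisdiction for a new deal; the learned weights,
# not a lawyer's hand-coded rule or intuition, drive the recommendation.
candidate = [[1, 2.2, 0.70]]
print(model.predict_proba(candidate)[0][1])  # estimated P(favourable outcome)
```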

D.  The Future

Is there some possibility that artificial intelligence can rise to the level of an actual agent in the contract formation process, or at least share some of that agency, properly delegated, with a human contract party? What must happen is for algorithms to produce algorithms to draft and negotiate contracts. An artificial intelligence would learn without human intervention to select contract parties and contract terms. This requires another step towards automation, towards agency in some form of action by artificial intelligence unsupervised by humans. Promising areas might be coded or partially coded contracts in which artificial intelligence is responsible for the 'drafting' of the code and the code translates into a natural language equivalent for human understanding – a natural language–code interface of some sort. Humans express intention through linguistic communication, and any interaction with humans in a contracting context would seem to require some use of language. Intelligent assistants, disembodied agents that assist humans when mobility is not required, are already in use.41 Artificial intelligence may plausibly develop natural language abilities. This would be task-level artificial intelligence with natural language abilities.42 Finally, in some far future, there may be an artificial general intelligence with natural language abilities and full-blown agency.43 At some point, humans will attribute mental operations and intentionality to artificial intelligence in the contracting process. The intent needed for contract formation will either be shared with a human-level principal or co-party, or humans will attribute the ability to produce such intent entirely to artificial intelligence. How this will occur will depend on the folk psychology at work when humans typically attribute intentionality to others. We are not there yet but will be. It is to these issues that I will now turn.

III.  Objective Intent and the Attribution of Intentionality

Returning to Nilsson's definition of artificial intelligence, how do we go about determining for contracting purposes whether an entity can function appropriately and

41 Nilsson (n 14) 522. 42 ibid 525–534. 43 J Linarelli, ‘Artificial General Intelligence and Contract’ (2019) 24 Uniform Law Review 330; Bostrom (n 5).

with foresight in its environment? In other words, when can we say that an artificial intelligence involved in contracting has the needed 'aboutness' that will give humans reasons to believe it has the intentionality needed to contribute to the formation of a contract? This is the question for this section. Section IV deals with the question of whether this intentionality can be shared between a human and an artificial intelligence.

A.  Turing Test Intentionality in the Common Law of Contracts

In his 1950 article, 'Computing Machinery and Intelligence', Alan Turing starts by stating his question 'can machines think?'44 He eventually finds this question 'too meaningless to deserve discussion'45 and replaces it with 'are there any imaginable digital computers which would do well in the imitation game?'46 The imitation game proceeds as follows. The players in the game are a human being and a machine. The game also includes a human interrogator. The interrogator is in a separate room from the human and the machine. The interrogator knows the others are labelled X and Y, knows one is human and the other machine, but does not know which is which. The object of the game is to test whether the interrogator can tell the difference between the human and the machine through a series of questions. The interrogator is to ask questions of the machine and the human through a text channel, to avoid revealing which is the human and which the machine. If the interrogator cannot reliably distinguish the human from the machine, then the machine passes the test. The test is meant to assess whether a machine can give answers that consistently resemble those a human would give, such that humans cannot tell the machine apart from a human in terms of cognitive performance.47 The Turing test has been effectively embedded into Anglo-American contract law, in the objective theory of contract.48 The focus of this tradition in contract law is on outward appearances – on what can be proven as a matter of evidence independent of the mental states the parties may or may not have. The objective theory of contract tells us that intention to be bound to or from a contract is determined

44 A Turing, ‘Computing Machinery and Intelligence’ (1950) LIX Mind 433. 45 ibid 442. 46 ibid. 47 See S Harnad, ‘The Turing Test Is Not A Trick: Turing Indistinguishability Is A Scientific Criterion’ (1992) 3 SIGART Bulletin 9. 48 See JM Perillo, ‘The Origins of the Objective Theory of Contract Formation and Interpretation’ (2000) 69 Fordham Law Review 427; TAO Endicott, ‘Objectivity, Subjectivity, and Incomplete Agreements’, in J Horder (ed), Oxford Essays in Jurisprudence Fourth Series (Oxford University Press, 2000) 151.

by evidence external to the actual intentions of the parties. Judge Learned Hand has said:

A contract has, strictly speaking, nothing to do with the personal, or individual, intent of the parties. A contract is an obligation attached by the mere force of law to certain acts of the parties, usually words, which ordinarily accompany and represent a known intent. If, however, it were proved by twenty bishops that either party, when he used the words, intended something else than the usual meaning which the law imposes upon them, he would still be held, unless there were some mutual mistake, or something else of the sort.49

Judge Frank Easterbrook has explained that intention to be bound 'does not invite a tour through [a contract party's] cranium' but must necessarily be derived from a consideration of the words, written and oral, and actions of the parties.50 Often quoted on the objective theory of contract formation and interpretation is the New Hampshire Supreme Court, itself quoting Oliver Wendell Holmes Jr.:

A contract involves what is called a meeting of the minds of the parties. But this does not mean that they must have arrived at a common mental state touching the matter at hand. The standard by which their conduct is judged and their rights are limited are not internal but external. In the absence of fraud or incapacity, the question is: What did the party say and do? 'The making of a contract does not depend upon the state of the parties' minds; it depends upon their overt acts.'51

The US Restatement (Second) of Contracts, intended to reflect a consensus about contract law in the US, does not contain any section explicitly devoted to intention to form a contract. It advises us that American contract law has likely abolished the idea of intention to be legally bound. Restatement (Second) section 21 provides that 'neither real nor apparent intention that a promise be legally binding is essential to the formation of a contract …'52 but the objective theory of contract still prevails. American contract law relies on what is known in American law as manifestation of mutual assent, which requires each party either to commit, objectively understood, or perform.53 Mutual assent is objectively determined. While English law does not reflect this Restatement (Second) language of manifestation

49 Hotchkiss v National City Bank, 200 F. 287, 293 (S.D.N.Y. 1911), aff'd, 201 F. 664 (2d Cir. 1912), aff'd, 231 U.S. 50 (1913). 50 Skycom Corp. v Telstar Corp., 813 F.2d 810, 814 (7th Cir. 1987). 51 Woburn National Bank v Woods, 77 N.H. 172, 89 A 491, 492 (1914) (citation omitted), quoting OW Holmes, Jr., The Common Law (Little Brown, 1881) 307. 52 Restatement (Second) of Contracts s 21 (1981). 53 ibid, s 18. The 'manifestation' language is pervasive in American contract law and reflects the notion of contracting making sense only in the form of external representations to other persons. See, eg, Restatement (Second) of Contracts s 2 (1981), which defines core concepts such as a promise as 'a manifestation of intention to act or refrain from acting in a specified way, so made as to justify a promisee in understanding that a commitment has been made'. Comment b explains that a manifestation of intention is an 'external expression' as opposed to 'undisclosed intention'. Restatement (Second) of Contracts s 2 cmt b (1981).

of mutual assent, it is substantially similar in adhering to an objective theory of contract formation and interpretation.54 In English law, intention to create legal relations is traditionally used to distinguish promises the parties want the law to enforce from promises they do not want the law to enforce. The intent or manifestation of mutual assent to enter a contract is thus not a matter of investigating some inner mental operations of a contract party but a matter of whether one contract party can reasonably conclude that the other contract party has the requisite objective intent. If artificial intelligence is responsible for all or some of that intent, the issue therefore becomes when a human contracting party would come to recognise the artificial intelligence as providing the intent on the other side of the transaction. When will artificial intelligence reach the point of development at which it can be considered as having, or contributing to, the sort of intent necessary for a contract to come into existence?

B.  The Intentional Stance

The presence of artificial intelligence in contracting requires us to compare human and artificial mental operations. We have never had to do this before, when we only had to think about human intention in contract relations. We can do this in functionalist terms – what makes something a mental state does not depend on its internal constitution but on the way it functions, on the role it plays, in the system of which it is a part.55 A fruitful way to engage questions about intentionality is to look to what is known in the philosophy of mind as a theory about the content of mental representations. A theory of content of mental representations is an explanation of how humans can form thoughts about concepts or things. For example, how do we explain how we think about 'computer'? Think of the various propositional attitudes that can come to one's mind. I may entertain a belief that I need to be at the law school every Monday for office hours. This belief may lead to a desire that I get in my car and travel to my law school every Monday. These belief-desire states reflect my intentions. My belief about going to the law school is about me intending to go to the law school. This is a very basic sketch that suppresses discussion and the many debates in philosophy of mind,56 but it is enough for us here.

54 See, eg, E Peel, Treitel on the Law of Contract 14th edn (Oxford University Press, 2015) 1 (s 1-002, ‘The objective principle’). 55 For an accessible explanation of functionalism, see DC Dennett, Consciousness Explained (Penguin Books, 1991) 30–32. 56 A place to start would be F Adams and K Aizawa, ‘Causal Theories of Mental Content’, in EN Zalta (ed), The Stanford Encyclopedia of Philosophy (Spring 2021 Edition), plato.stanford.edu/ archives/spr2021/entries/content-causal (last accessed 17 May 2021).

So far, we have evaluated my own beliefs from my own perspective, a first-person perspective. My own beliefs, desires, and attitudes may explain my own behaviour. But a link is missing here: the attribution of my intentions to me by another person. Philosophers and cognitive scientists have identified a set of mental capacities of humans, consisting of the capacity to explain and predict human behaviour, attribute mental states to other humans, and explain the behaviour of humans in terms of mental states. Having this capacity is necessary to understand how intent to enter a contract operates, because that intent must be externally manifested and attributed, in the form of something like an imitation game, for the objective intent necessary for a contract to be formed to come into existence. Some capacity for belief attribution is necessary for contract formation. When it comes to evaluating intentionality in contract law, Daniel Dennett's theory of content, his so-called intentional stance, holds substantial promise.57 Dennett argues that humans can take three explanatory stances about a complex system. We can take a physical stance, to predict behaviour by understanding how a system or entity is built.58 We might, for example, profitably use a physical stance to understand how the coding for a smart contract works. We can take a design stance, to predict behaviour by understanding how a system or entity is designed. Again, using the smart contract example, a person may have no idea how the coding of a smart contract works, but she does know that when she presses 'I agree' a contract is formed in accordance with any automated terms included in the contract. Finally, the intentional stance is the position we take to predict the behaviour of a system or entity that we believe has beliefs and goals and the capacity on its own to achieve its goals based on its beliefs. In Dennett's words, we are looking for 'true believers': a system that acts or at least appears to act based on its beliefs.59 Think about the intentional stance from an adaptation of Dennett's own explanation of his account, in the context of humans. Humans may be unique in their use of natural language communication. We use sound linguistically in the form of syntax and semantics, which leads to many of our communications being evaluable as true or false. This is a big move, according to Dennett, a 'radical reconstrual of the data, and abstraction from its acoustic and other physical properties'.60 Humans make sense of the sounds as words. It is a record of speech acts, not just sounds but assertions, declarations, questions, answers, promises,

57 Several have so found. Chopra and White (n 18) 11–13; Giovanni Sartor, ‘Cognitive Automata and the Law: Electronic Contracting and Intentionality of Software Agents’ (2009) 17 Artificial Intelligence and Law 253. 58 Dennett (n 2) 16–17. From the perspective of cognitive science, see A Brook and D Ross, ‘Dennett’s Position in the Intellectual World,’ in A Brook and D Ross (eds), Daniel Dennett (Cambridge University Press, 2002) 3. 59 Dennett (n 2) 13. 60 DC Dennett, Consciousness Explained (Penguin, 1991) 74–75.

comments, requests for clarification, critiques and so on.61 Dennett characterises the evaluation of these utterances as a process of interpretation that involves the taking of an intentional stance. According to Dennett, we must

treat the noise emitter as an agent, indeed, as a rational agent, who harbors beliefs and desires and other mental states that exhibit intentionality or 'aboutness', and whose actions can be explained (or predicted) on the basis of the content of these states.62

The utterances are interpreted as propositions the entity wanted to make, based on reasons the entity holds as content in its thought. There is no theoretical obstacle to extending the possibility of someone taking an intentional stance about an entity that does not 'speak' words verbally or, more generally, does not use natural language to communicate. It is about belief-desire attribution to a 'system' that may or may not be a human. The account seems deliberately left open to be able to accommodate artificial intelligence. The mode of communicating the belief-desire attribution may indeed matter in determining the reasonableness of the attribution, but it will not make it impossible. But a system may be sufficiently rational or a true believer for the task at hand.63 The intentional stance and similar theories about belief attribution64 rely on what philosophers and cognitive scientists call 'folk psychology', an array of mental concepts that humans have known since childhood and that they effortlessly deploy as part of being human, such as beliefs, desires, knowledge, pain, fear, hope, expectation, intention, imagination, concern, and so on.65 As society progresses toward more automation, our folk psychology will likely accommodate more leniency in the application of the intentional stance to artificial life. Humans routinely attribute intentions to non-human animals. Particularly with robotic forms of artificial intelligence, we often want to believe they express intention, especially if they look or move like us. Another question that has not yet been asked is whether artificial intelligence can also possess intention attribution capacities. Shall we simply assume humans have intentions and that there is no need for an intentional stance to move from artificial to human agent? This seems wrong, because, as explained above, intent in contractual relations must work both ways. Of course, an algorithm in an artificial

61 ibid 76. 62 ibid. 63 Dennett (n 2) 21. 64 Another way to understand the capacities to predict and explain behaviour is known as mindreading. Philosophers and psychologists have identified a set of cognitive capacities in humans, consisting of the capacity to explain and predict human behaviour, attribute mental states to humans, and explain human behaviour in terms of mental states. This research has focused on humans, but it is common for humans to attribute mental states to non-human animals (dogs and cats), machines, computers, toasters, etc. Some of these may be metaphorical but there is substantial evidence that some primates other than humans might be able to predict the behaviour of others in their species. See Nichols and Stich (n 4). 65 See J Fodor, Psychosemantics: The Problem of Meaning in the Philosophy of Mind (MIT Press, 1987); Dennett (n 2) 7–11.

intelligence could inform it that humans possess the ready capacity to intend to enter a contract. We do need another step to connect intentionality to contract – that is, can we share these intentions in the form of a cooperative effort? And if so, can we share them in a way that creates mutual obligation? I deal with these questions in section IV. Contractual intent is a special form of intent, often involving planning about future conduct. It requires parties to be able to believe they have a legal obligation to do something in the future. How contract parties that include some kind of artificial agent can share such an intention is the subject of the next section.

IV.  Shared Intentionality as the Core of Contractual Obligation

Section III covered the belief attribution capacities humans use to attribute intention to artificial intelligence. It also dealt tentatively with the need in contract formation for that intent to work in both directions. To the extent that an artificial intelligence would somehow have or share agency with a human in the process of contract formation, it may need to be able to take an intentional stance about the human on the other side of the transaction. This tells us something about the capacity of artificial intelligence to mindread or recognise the intentional stance taken by other entities, in this case humans. This dual intentionality does not go far enough, however, to describe the sort of intentionality needed for contract formation. Contract formation requires a particular kind of intention. We need to understand its semantics. The special kind of intention needed for contract formation is the intention to share a goal or plan. The goal or plan is to complete contract performance, a cooperative venture between contract parties. At present, only humans have this facility. What is needed for agents to be able to engage in contracting is for them to be planning agents. A planning agent is an agent with the cognitive ability to have a future directed intention. There is a literature in philosophy and psychology that claims that future directed intentions play an important role in human practical reasoning.66 An influential account of practical reasoning is that of Donald Davidson, who claims that practical reasoning is the weighing of reasons, in the forms of beliefs and desires, for and against differing actions.67 The idea here is that

66 See ME Bratman, Intentions, Plans, and Practical Reason (CSLI, 1999); ME Bratman, Faces of Intention: Selected Essays on Intention and Agency (Cambridge University Press, 1999); ME Bratman, 'What is Intention?', in PR Cohen, J Morgan and ME Pollack (eds), Intentions in Communication (MIT Press, 1990) 15–31; M Gilbert, On Social Facts (Princeton University Press, 1989); R Tuomela, The Philosophy of Sociality: The Shared Point of View (Oxford University Press, 2010). 67 Davidson (n 19).

my intentions provide me with reasons for actions, which lead to action. So, intentions are causes of actions. But what Michael Bratman and others have argued is that accounts like Davidson's do not go far enough because they do not account for intentions about the future. They only deal with intentions as reasons for actions at the moment of deliberation and the action that ensues. They do not account for the holding of a prior intention about achieving an end that would lead to further intentions about means or steps towards achieving those ends. I do not see it as fatal to Davidson-type accounts that future directed intentions are not accounted for, but nevertheless the insight of Bratman and others on future directed intentions directs us to an important point of great relevance to contracting. And that is that a focus on future directed intention accommodates two different functions humans undertake in their lives: the need to be able to deliberate about future commitment and the need to plan our future actions together.

A.  Shared Intentionality

That humans are planning agents means we engage in what Bratman calls shared cooperative activity or shared intentions.68 Bratman's shared intentionality thesis is set forth below. Assume J to be a joint activity, such as contracting:

We intend to J if and only if
1. (a) I intend that we J and (b) you intend that we J.
2. I intend that we J in accordance with and because of 1a, 1b, and meshing subplans of 1a and 1b; you intend that we J in accordance with and because of 1a, 1b, and meshing subplans of 1a and 1b.
3. 1 and 2 are common knowledge between us.69

Stated in the context of contracting:

We intend to contract if and only if
1. (a) I intend that we contract and (b) you intend that we contract.
2. I intend that we contract in accordance with and because of 1a, 1b, and meshing subplans of 1a and 1b; you intend that we contract in accordance with and because of 1a, 1b, and meshing subplans of 1a and 1b.
3. 1 and 2 are common knowledge between us.

Bratman makes no appeal to a notion of obligation, such as a legal or moral obligation, but this is no obstacle. Contract law specifies a legal obligation when shared intent is present in the right context for contracting.

68 Bratman (n 66), Essay 5 on ‘Shared Cooperative Activity’ and Essay 7, ‘Shared Intention and Mutual Obligation.’ 69 ibid, 131.

Shared intentionality is a uniquely human ability.70 The kind of intentionality that seems necessary for contracting is what anthropologist Michael Tomasello calls joint intentionality, which he distinguishes from the broader notion of collective intentionality.71 The account from evolutionary anthropology is good for informing us about the sort of intentionality we are talking about for contracting and what might be needed for an artificial intelligence to be an entity with the agency to contract. Individual intentionality is classic Davidsonian intentionality.72 Interestingly, great apes have this capacity, though it will be expressed very differently because they do not use language. This first form of intentionality, known as 'I' intentionality, is about an agent having the ability to self-regulate, in situations in which an individual can recognise novel situations and deal reflectively with them, with an understanding of the causal relations between intentions and actions. 'I' intentionality is adequate for attributing an intentional stance to an artificial intelligence, but it is not enough for contract formation. Joint intentionality for Tomasello, or shared intentionality for Bratman, differs significantly from individual intentionality because it is about cooperation, albeit cooperation in small groups. For humans, it is structured around linguistic communication. Evolutionary anthropologists trace its origins to small scale collaboration in human foraging. This kind of intentionality is commonly known as 'we' intentionality. It is unique to humans. When humans engage in 'we' intentionality, they engage in cooperative activity. Think of it in the context of its ancient origins. Chimpanzees hunt in parallel. They will pursue prey on their own, considering the behaviour and possibly the intentions of other chimpanzees. Each chimpanzee has an individual goal to separately capture the prey. But humans developed the ability to hunt cooperatively, to capture prey together with other humans as a joint goal.73 Humans developed the second-personal point of view and the ability to use it to engage in a relationship of mutual recognition.74 Finally, collective intentionality in Tomasello's framework is a massive form of uniquely human cooperation we know of as states, societies, and communities. Common cultural social practices such as law, including contract law, derive from collective rationality. In the words of anthropologist Pascal Boyer, 'minds make societies'.75 The cognitive capacities associated with collective intentionality have to do with the ability of entities to engage in self-governance that is responsive to

70 M Tomasello, A Natural History of Human Thought (Harvard University Press, 2014) 35–36. 71 ibid 32–79. 72 See ibid 7–31. 73 ibid 35–36. 74 See S Darwall, The Second-Person Standpoint: Morality, Respect, and Accountability (Harvard University Press, 2000); TM Scanlon, What We Owe to Each Other (Harvard University Press, 1998). 75 P Boyer, Minds Make Societies: How Cognition Explains the World Humans Create (Yale University Press, 2018).

a culture's norms of rationality.76 Collective intentionality is a kind of intentionality that any entity must have to participate in a common culture. It is necessary for what Lon Fuller characterises as being 'subject to' law.77 It concerns a broader set of issues I will leave for future discussion. Are we at the point where an artificial intelligence could share intentionality with a human to form a contract? Could two artificial intelligences share an intent to form a contract? These issues need substantial exploration beyond the scope of the theory set forth in this chapter. The question for the theorist is whether it is plausible to theorise around these questions. I believe the answer to be yes, because it is plausible to expect the technology of artificial intelligence to reach the appropriate level of advancement.

B.  A General Theory of Contract

Once we engage in the sort of ground clearing this chapter does around the distinctive features of contractual obligation, it seems clear that other theories of contract reduce to questions about intent. I will make just a few brief remarks here, to lay some groundwork for future discussion. Consider Charles Fried's theory of contract as promise.78 It is probably the most influential theory of contract. But promise cannot describe what is salient or distinctive about contractual obligation. Many contracts are routinely formed without a basis in promise.79 Of particular relevance for us here, however, is that there are many promises that are not legally binding but which may be morally binding or binding as a matter of social consensus outside of the law. If we take a hard look at what makes a promise a feature of contracting, we will discover that it is the intention to create a legal relation with the promise, in the form of an objective manifestation of mutual assent to contracting, that makes the promise a contractual one, not the promise itself. Many promises we intend not to be legal in nature, even though we reasonably can expect compliance with them. For example, if I promise my colleague to return a book I borrowed from her, we both may believe I have made a promise that is morally obligatory but not legally enforceable. We may share in an intent to create a moral obligation but not a legal one. It is the intent supporting the promise that provides the criteria for legal recognition of a particular kind of promise as a contractual obligation. A similar argument can be made about Randy Barnett's theory of contract as consent.80 The consent theory of contract explains contract law as a means

76 Tomasello (n 70) 80–123. 77 LL Fuller, The Morality of Law (Yale University Press, 1966) 162–163. 78 C Fried, Contract as Promise: A Theory of Contractual Obligation (Harvard University Press, 1982). 79 Radin (n 31); Kar and Radin (n 31). 80 RE Barnett, 'A Consent Theory of Contract' (1986) 86 Columbia Law Review 269; RE Barnett, 'Contract is Not Promise; Contract is Consent' in G Klass, G Letsas and P Saprai (eds), Philosophical Foundations of Contract Law (Oxford University Press, 2014) 43–57.

to transfer property rights. It relies on the objective manifestation of an intention to be legally bound to transfer a property right as a core feature of contract. Its articulation appears to conflate consent with intent.81 But certainly, consent and objective intent to contract are different. An agent can manifest an objective intent to contract and yet not consent to contract. Conversely, a person can consent to a contract and not manifest an objective intent. In fact, a theory of subjective intent about contract formation would seem to be required if actual consent to a contract is required, because there may be situations in which an agent provides sufficient objective indicia of intent but in truth does not want to enter a legally enforceable contractual obligation. More fundamentally for purposes of artificial agency, consent would take us too close to a requirement for what has been characterised as 'strong AI': for an entity to qualify as intelligent in a strong AI sense, it actually has to think and have actual intentions associated with its actions and not just offer an intentional stance to humans.82 What we have dealt with in this account is what may be understood as weak AI, which is reflected in Turing's imitation game and Dennett's intentional stance, and which is the standard conception of artificial intelligence by its developers.83 For contracting purposes, it is enough that we be able to attribute intent and not actual consent to putative contract parties.

V. Conclusion

This chapter has set forth a philosophy of contract law formed around the most basic element of the contracting process: the manifestation of assent to contract based on the notion of objective intent to enter contractual relations. Objective intent has special qualities that make it the most direct route to understanding the core of contract formation and contract obligation. The focus on objective intent allows us to connect other minds – not just human minds – in melding both artificial and human agency in contract formation. As artificial intelligence progresses, so too, I believe, will my approach to understanding contractual obligation become more salient.

81 Barnett, ‘Contract is Not Promise’ (n 80) 48.
82 A longstanding disagreement in philosophy of mind is between those who argue that AI, or at least what is known as strong AI, is impossible, and those who argue that it is possible. Strong AI is AI that actually thinks, is conscious, has a phenomenology of the particular experiences of life, and has the properties of intentionality that humans have. Weak AI is AI that acts as if it were thinking, conscious, and acting with intentionality. See Russell and Norvig (n 11) 1020–1033. Staking out a middle ground, David Chalmers offers a good summary of the arguments: DJ Chalmers, The Conscious Mind: In Search of a Fundamental Theory (Oxford University Press, 1996) 313–332. Whether artificial intelligence must have some form of inner life, some phenomenology of conscious experience, or true understanding, or whether a simulation of these things will suffice, is beyond our scope here.
83 Russell and Norvig (n 11) 1020–1033.

The philosophy in this chapter is analytical in the sense that I have not asked whether contract law, and contract as an institution in a society, ought to accept participation in its practices by artificial agency. This normative question is beyond our scope here. It is an important question that needs to be addressed. It is a question of justification. Law makes a claim to authority or legitimacy to humans. It takes humans as having primacy of place as the subjects of law. How do we account for artificial intelligence if it reaches an appropriate level of agency, and what would that level of agency be? That we can possibly share intention does not mean we should. These are questions that relate to the common rationality we share on a mass scale in the form of collective intentionality. This is perhaps our next discussion.

5
From Document to Data: Revolution of Contract Through Legal Technologies
SILVIA MARTINELLI AND CARLO ROSSI CHAUVENET*

I.  A New Season of the Contract

The contract, in its unlimited and inexhaustible possibilities of adaptation, seems eternal, just as the need for cooperation between men is eternal; contract law, however, is not eternal but transitory, and so are the values it contains.1 Legal tech software for contract drafting, contract analytics, and contract lifecycle automation is changing the way we write, conclude, and manage contracts, and it will do so even more in the next few years. The question that arises is how these changes will impact the contract and contract law.

In the opinion of the writers, there are two main disruptive factors of change. The first is the application to the legal sector, and to the contract, of a larger phenomenon: the servitisation or productification/mercification of professional performance. The second disruptive factor introduced by these technologies, less underlined, is the possibility of using data to manage, conclude and analyse contracts.2

Many economic activities have already been impacted by the information revolution. First came the communication industry (information, music, video): consider the problems that publishing and the music industry faced for years, up to business models based on access to subscription content such as Netflix and Spotify. Then came the turn of trade: from the spread of e-commerce with the first websites and the first exchanges between users mediated by the eBay platform, up to the affirmation of large platforms such as Amazon, which aim to offer the widest range of products and services.

* Silvia Martinelli mainly worked on sections 1, 3, 5 and 6; Carlo Rossi Chauvenet worked on sections 2 and 4.
1 G Alpa, Le Stagioni Del Contratto (Il Mulino, 2012) 186, which in turn quotes and re-elaborates Francesco Santoro-Passarelli, in the conclusion of his monograph dedicated to the analysis of the evolution of contract and contract law over the years.
2 See S Van Erp, ‘Management as Ownership of Data’ in S Lohsse, R Schulze and D Staudenmayer (eds), Data as Counter-Performance – Contract Law 2.0? (Hart Publishing, 2020). He underlines that when we use the word ‘disruptive’, ‘we really mean the impact of data, data processing, data analysis, data profiling and data transactions, put it short: the data economy’.

At the same time, from the sale of goods the online market has moved towards the digitisation of services: first tourism and websites for online reservations, up to the ‘sharing economy’ or ‘platform economy’, ie services such as Uber or Airbnb. These business models exploit data and algorithms to innovatively rethink traditional business models. They exploit the digitisation of resources, goods, and people, now minutely identified and described by data, to reorganise economic activities with technology.

The legal profession – and, more generally, the law – has so far been touched only marginally by these changes, being still predominantly structured in a traditional way and, even when digitised, still organised according to logic and procedures designed for the ‘world of paper’. However, even the jurist3 is now asking how to exploit software, data, and algorithms to revolutionise the ways in which he carries out his activity, identifying inefficiencies and potentials that new technologies can, respectively, eliminate and exploit. ‘Legal Tech’4 therefore denotes software and innovative solutions – addressed both to legal professionals of every kind (and not only them) and to the individual activities that the jurist performs – which aim either to solve specific problems or to eliminate inefficiencies by organising activities in a new way, adopting solutions that new technologies enable.5 Legal work ‘can be improved, becoming cheaper and more accessible for a greater proportion of population’.6

For example, Iubenda, the first legal tech software developed in Italy, started with the automation of privacy policies (www.iubenda.com/it/), combining questionnaires with code-based policy generation. The privacy policy is an example of a legal document that is often very repetitive. The technology allows standardisation through the computerisation of checklists. The user answers some questions relating to the activities carried out, the types of processing and the data processed, and, by moving through a decision tree whose turns are determined by the user’s own responses, the software creates a document drawn up automatically.7

3 See V Janeček, R Williams and E Keep, ‘Education for the Provision of Technologically Enhanced Legal Services’ (2021) 40 Computer Law and Security Review, where the authors identify an educational gap that prevents lawyers from implementing AI and digital technology in the provision of legal services and suggest concrete models for education and training in this area, focusing on: 1) mindset understanding; 2) data-oriented thinking; 3) agile systems and design thinking; 4) commercial awareness; 5) digital ethics and the law of AI and digital technology. See also S Caserta, ‘Digitalization of the Legal Field and the Future of Large Law Firms’ (2020) 9 Laws 14, on the importance of multidisciplinarity.
4 See also G Messori, ‘Legal Tech’ in G Ziccardi and P Perri (eds), Dizionario Legal Tech (Giuffré Francis Lefebvre, 2020); M Hartung, ‘The Digital Transformation’ in M Hartung, M-Manuel Bues and G Halbleib (eds), Legal Tech. A Practitioner’s Guide (Beck, 2018).
5 See Hartung (n 4), which identifies three steps to increase the efficiency of existing legal products: 1) identify the potential for increasing the efficiency of your key products and services; 2) analyse the workflow process of the most important legal products (workflow analysis); 3) implement the necessary technical and organisational measures.
6 M Hartung, M-Manuel Bues and G Halbleib, Legal Tech. A Practitioner’s Guide (Beck, 2018).
7 See also S Zimmeck, R Goldstein and D Baraka, ‘PrivacyFlash Pro: Automating Privacy Policy Generation for Mobile Apps’ [2021] Network and Distributed Systems Security (NDSS) Symposium 2021, dx.doi.org/10.14722/ndss.2021.24100.

The final document will therefore be standardised, but at the same time customised on the basis of the answers entered in the system.

Legal tech software is used to implement and automate different types of legal activities. Among the many classifications, it is possible to distinguish between: a) lawyer marketplaces – lawyer-to-lawyer outsourcing – social and referral networks; b) document automation and assembly; c) practice management; d) legal research; e) predictive analytics and data mining; f) e-discovery; g) online dispute resolution; and h) data security technologies.8

The most popular applications of legal tech software are those that focus on the contract, and they can be divided into three main categories: contract drafting, contract lifecycle management, and contract analysis. Contract Lifecycle Management (CLM) is the management of an organisation’s contracts from initiation through execution, achieved through the use of contract lifecycle management software. Contract drafting moves from simple questionnaire-based decision trees to intelligent and personalised drafting solutions, sometimes also offering collaboration tools (ie document sharing, version comparison). Finally, contract analysis is data analysis and machine learning applied to contracts. For example, Icertis reads and analyses documents and third-party information to provide authorised users with detailed risk reports, automatic obligation tracking, and smart notifications; at the same time, it streamlines and systemises all types of contracts and associated documentation in a single solution.

With regard to the second disruptive factor introduced by these technologies – the possibility of using data to manage, conclude and analyse contracts – it is useful to underline that this software makes it possible to go beyond the contract as a document and look at the relevant information. In addition, it becomes possible to consider, analyse and control at the same time a group of contracts, or all the contracts concluded by a company. Contracts are no longer represented by paper documents or PDFs, but by a digital model made of orders and statements of work (stable) and terms and conditions or framework agreements (modifiable during the relationship). This has two main consequences: first, the representation – the form of the contract – is no longer only the document but also the data, in the representation offered by the contract automation software; second, the whole contract process, or entire groups of contracts, can be managed and analysed together with a granular vision, to extract relevant information in an efficient way. The contract is therefore moving from document to data: represented as a set of fluid information, it can be organised and managed by technology for the efficient (and peaceful) relationship of the parties, but also analysed to identify corporate risks and compliance issues or to develop new businesses.
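To make the shift from document to data concrete, consider the following minimal sketch in Python. The field names and figures are invented for illustration and do not reflect any actual CLM product’s data model; the point is only that, once a contract is structured data rather than prose, an entire portfolio can be queried at once – for instance, to flag agreements approaching renewal:

    from dataclasses import dataclass, field
    from datetime import date, timedelta

    @dataclass
    class Contract:
        parties: tuple           # (company, counterparty)
        statement_of_work: str   # the stable part: what was ordered
        value_eur: float
        renewal_date: date       # framework terms: modifiable during the relationship
        auto_renews: bool = True
        clauses: dict = field(default_factory=dict)

    def expiring_soon(portfolio, horizon_days=90):
        """A granular, portfolio-wide query that a drawer of PDFs cannot answer."""
        cutoff = date.today() + timedelta(days=horizon_days)
        return [c for c in portfolio if c.renewal_date <= cutoff]

    portfolio = [
        Contract(("Acme SpA", "Beta Srl"), "cloud hosting", 12000.0,
                 date.today() + timedelta(days=30)),
        Contract(("Acme SpA", "Gamma GmbH"), "consulting", 48000.0,
                 date.today() + timedelta(days=400)),
    ]

    for c in expiring_soon(portfolio):
        print(c.parties[1], c.statement_of_work, c.renewal_date)

Once contracts live in such a structure, the portfolio-wide, granular analyses described above become one-line queries rather than manual document review.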

8 S Praduroux, V de Paiva and L di Caro, ‘Legal Tech Start-Ups: State of the Art and Trends’ [2016] Nuance Communications, Sunnyvale and University of Turin, Italy 10.

First, it is important to notice two different potential approaches to the use of data for contracts. The first approach, chronologically as well, is to encode knowledge about the law so that a machine can engage in legal reasoning on the particular legal subject matter in order to provide advice and guidance. This is typically the founding design of a ‘smart contract’, a tool that, thanks to the use of blockchain technology, allows the parties to reduce the risk of human intervention whenever a covenant (including the payment of a sum) is due. In order to reduce the risk of a breach, in a smart contract the parties decide to delegate compliance with the contractual provisions to the machine. This model, known as rules-driven AI, requires a perfect definition of all possible cases that could potentially occur: the omission of some conditions may transform the tool into something very far from the solutions the parties would have implemented had they been able to predict the event. Data-driven AI tools, on the other hand, use a statistical approach to data and are able to analyse large pools of data to identify interesting patterns and/or to make predictions about certain legal outcomes.

The smartness of a contract does not depend on the use of a specific technology. A contract is smart not because it is a permanent and rigid arrangement between the parties, but because it is a tool capable of automatically acquiring information and adjusting to any swift change in the context in which the agreement was originally conceived by the parties. Data enables new forms of information acquisition and management which can also be applied to improve contracts and the relationship between the parties.9 Contracts should be not just drafted by lawyers and signed by unaware parties, but used as a continuous reference for business activities.
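The contrast between rules-driven and data-driven tools can be illustrated schematically. In the toy sketch below (clause texts, labels and thresholds are all invented), the same task – spotting a risky clause – is encoded first as an explicit rule, which covers only what the drafter foresaw, and then as a crude statistical scorer that learns word weights from labelled examples:

    from collections import Counter

    # Rules-driven: every risky condition must be enumerated in advance;
    # anything the drafter did not foresee falls outside the rule.
    def rule_based_flag(clause):
        forbidden = ("unlimited liability", "terminate without notice")
        return any(term in clause.lower() for term in forbidden)

    # Data-driven: a toy statistical scorer learns word weights from
    # labelled examples and can generalise to wordings never seen before.
    def train(examples):
        weights = Counter()
        for text, label in examples:          # label: 1 = risky, 0 = safe
            for word in text.lower().split():
                weights[word] += 1 if label else -1
        return weights

    def risk_score(clause, weights):
        return sum(weights[w] for w in clause.lower().split())

    weights = train([
        ("liability of the supplier shall be unlimited", 1),
        ("either party may terminate without notice", 1),
        ("liability is capped at the fees paid", 0),
    ])

    print(rule_based_flag("The supplier accepts unlimited liability."))  # True
    print(risk_score("supplier liability shall be unlimited", weights))  # positive: risky

Real contract analysis tools use far richer models, but the asymmetry is the same: the rule fails silently on unforeseen wordings, while the statistical approach degrades gracefully.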

9 See also CR Chauvenet and S Martinelli, ‘Data Valley e Il Dialogo Tra PMI e Big Tech per Lo Sviluppo Di Nuovi Servizi Smart’ in L Bolognini (ed), Privacy e libero mercato digitale (Giuffré Francis Lefebvre, 2021) regarding our conception of ‘smart’ and ‘smart product’. Our new phones are not smart simply because they are more ‘powerful’, but because they respond intelligently, elegantly, and quickly to the needs of their ‘users’. Technology takes complexity away from the surface and moves it to a deeper level, giving us a beautiful, elegant, and easy-to-use product that can perform complex tasks and solve problems in a seemingly simple and increasingly fast way. However, it is not just a matter of hiding a complex service behind a simple and user-friendly interface. The flow of user-generated data and the ability to receive, manage and analyse it in real time enable decision-making or the implementation of responses or proposals to user needs instantly. Demand and production of services are related in a new, faster, and more direct way, regulated and verified by data and user interactions. Finally, a third key feature of the new smart products and services is personalisation. Data makes it possible to profile and target customers in order to customise the solution according to their specific needs. The product becomes ‘custom’, adapting precisely to our habits and needs, to our way of life. Just like a suit ‘cut’ to measure by an expert tailor, smart tools ergonomically wrap our lives, making them easier and faster, offering elegant and ‘design’ solutions to our needs. See also ME Porter and JE Heppelmann, ‘How Smart, Connected Products Are Transforming Companies’ [2015] Harvard Business Review 96; M Zeng, Smart Business. I Segreti Del Successo Di Alibaba (Hoepli, 2018); GG Parker, MW Van Alstyne and SP Choudary, Platform Revolution. How Networked Markets Are Transforming the Economy and How to Make Them Work for You (W W Norton & Company, 2016); M van Alstyne and G Parker, ‘Platform Business: From Resources to Relationships’ (2017) 9 Platform Business 24, www.degruyter.com/downloadpdf/j/gfkmir.2017.9.issue-1/gfkmir-2017-0004/gfkmir-2017-0004.pdf.

The streamlining offered by legal tech software can expand the number and categories of individuals able to draft and manage contracts. In addition, legal tech software enables new monitoring and analysis possibilities: agreements can be constantly listed and monitored (rather than lying in the dusty closet of some unknown lawyer), and they can also be better used for determining the value of a company, which in M&A transactions mainly depends on the quality and stability of its agreements.

The adoption of these technologies in the legal field brings with it some significant changes in the ways in which legal activity is performed, and impacts contract practices and contract law. In the next paragraphs, the authors will try to identify the main impacts at a stage where the diffusion of these technologies is still early. The next paragraph will analyse how the legal market and legal services are changing, focusing on the ‘servitisation’ and ‘productification’ of legal activities and on the new role of lawyers as legal tech integrators. Then the analysis will move to the new emerging actor in the contracting process – the company which creates and manages the legal tech software – describing how it intervenes in contracting, the challenges it raises, and the applicable law. Next, the chapter will focus on the new possibilities for compliance and regulation through contract software, and on the consequences (but also the new possibilities) for asymmetries.

Legal services have changed, and the contract too is changing: from the stable definition of the parties’ interests at a given moment to a tool for balancing their relationship. At a time when everything needs to be smarter (from phones to TVs), the contract – already an excellent reference tool for managing relationships – can become smarter as well, by applying the new possibilities offered by data to contracts and contract law.

II.  Legal as a Product and Lawyers as Legal Tech Integrators

Susskind describes the evolution of legal services as a ‘mercification’: the market moves from personalised and tailored consultancy to a standardised and automated service available online.10 The expression ‘as a service’ is used to indicate the use of cloud applications to realise different functions and offer them to the final user as a software product. Other authors underline the ‘marketisation and commodification of legal practice’.11 Legal activity, too, is increasingly delivered in the form of a service which the customer can use. The specific and individual request for advice is therefore replaced by a software subscription, an app offering multiple functions, or the provision of a standard but personalised solution.

10 R Susskind, Tomorrow’s Lawyers. An Introduction to Your Future (Oxford University Press, 2013).
11 Caserta (n 2).

Legal activities are transformed into software furnished to a final user, as we saw in the case of Iubenda. This transformation has impacted significantly on the organisational structure, ideology of professionalism, and working culture of legal professionals.12 It has also had an impact on the recipients of legal services.

Starting from the latter consideration: just as the codifications brought knowledge of the law to the inexperienced citizen, making it readable by all, computer code has now brought a new revolution, allowing even those who have not studied law to carry out activities previously reserved for professionals, using new software that simplifies, standardises and personalises complex activities. The recipients of the new services and tools that combine law and technology are not only lawyers and law firms, but also, directly, businesses and citizens. The software can, for example, manage and facilitate bargaining within a company, or directly allow the citizen to check the chances of winning a case based on the precedents in the matter. Legal work becomes more accessible and efficient, but also client-centred and user-friendly.

Legal tech software dedicated to contracts presents the same characteristics. As for the first role attributed to the contract, it is clear that the new regime requires agreements to be more understandable and usable by the parties in regulating their relationship. Contracts are no longer the result of artisanal work produced in a Word document by a single lawyer in the interest of a client, but the result of an ‘industrial’ activity carried out within the legal department of a company or a law firm. The process has different steps, and each of these may be activated or concluded through a specific tool. The process starts with template selection and drafting: respectively, the choice of the correct contract model from an external provider’s library or the internal ‘playbook’, and online collaboration on the draft between the lawyer and the client using common collaborative tools (for instance Google Docs or Office365). The process continues with the negotiation of the agreement using another collaborative tool, and finishes with the e-signature of the agreement (for instance Docusign13) and storage of the signed document by both parties.

The same industrial approach is followed in contract review and litigation. Thanks to the initial use of AI tools, lawyers may substantially reduce the working hours needed to red-flag anomalies in contract clauses (ie Luminance14) or to provide

12 ibid. 13 Docusign is a software company that developed an e-signature technology to prepare, sign, act on, and manage agreements: www.docusign.com/. 14 Luminance is an artificial intelligence platform for the legal profession that developed a machine learning technology to rapidly read and form an understanding of documents, before displaying the results of this analysis to the user: www.luminance.com/.

evidence in litigation (ie e-discovery tools such as Everlaw15). In this respect, the lawyer’s job has moved from material drafting and reviewing to a mere controlling and monitoring activity.

The evolution of legal services described above has a direct impact on the market and on lawyers’ roles and activities. The legal business model has already moved towards a ‘commercially oriented professionalism’: the emergence of large law firms ‘caused a radical shift in the nature of the work of (elite) lawyers from courtroom advocates to business advisers’.16 Market forces increased competition among firms and the growth of in-house counselling, and ‘while in the early decades of the 20th-century large law firms lawyers were associated with patrician airs and professional nobility, they were now businesslike organizational men devoted to the interests of clients’.17

Digitisation and the emergence of a legal tech market changed the profession in two directions: a) legal work became a product, a final legal product that can be used by a multitude of users; b) the creation of a legal tech company requires high investment. Law firms and the legal departments of big corporations understood this new element with the evolution and spread of ERP software, but were able to get access to adequate tools to manage contracts only in recent times. Legal technologies have intervened in four different areas:

1. Managing the business: people, resources, finance, operations, and customer relationship management.
2. Managing and/or performing the business: in relation to the management of legal knowledge, legal matters, risk, and compliance of the firm.
3. Performing the work: in relation to document and contract analysis, litigation and transaction management.
4. Consumer services: in relation to the creation of tools that move to the clients.

A clear pattern that can be observed in the market is the integration between ERP software and legal tech software: accounting data needs to be close and correlated to the contracts that have generated it. This evolution is clearly identifiable from some recent collaborations signed between Icertis and SAP to transform the so-called ‘lead to cash’ process. Another important point is to understand whether the firm should invest in the process. The value of legal tech companies depends more and more on the ability to use data to: (i) understand and apply legal rules; and (ii) understand the market and create new business opportunities.

Lawyers’ role in the ecosystem can surely be on the side of the legal tech company, working on the creation and evolution of the software, but the number of

15 Everlaw is a collaborative, cloud-based litigation platform for corporate counsels, litigators and government attorneys that enables teams to discover, illuminate, and act on information to drive internal investigations and impact the outcome of litigation: www.everlaw.com/.
16 Caserta (n 2).
17 ibid.

legal practitioners involved is limited. However, new roles and consultancy professionals are emerging around the choice and implementation of software. The legal tech market is already a reality, but it is vastly fragmented. Deciding to buy a solution able to automate part of the legal process is most of the time extremely costly; employees need training in order to change their work attitude towards a new tool, and the solution will solve only part of a firm’s legal operations. Furthermore, companies have to deal with a great amount of change management in order to be innovative.

The first element of the strategy is the correct definition and representation of the legal work and its size. Automation is a cost, not just for the tools but for the implementation of a newly organised model, and it is important to carry out an assessment of the workflow, identify the relevant KPIs and assess the situation before and after implementation. The analysis should be made shareable and understandable through the use of specific tools, such as ‘swimlanes’ in which all steps are clearly indicated with the actual and envisaged timeframe.

In order to align all the tools necessary to redefine the original work of lawyers, law firms have invented and employed a new type of technician, originally called ‘CLIO’, Chief Legal Innovation Officer (the same name is also used by a famous matter management software) or, as we tend to prefer, the ‘legal integrator’. This new role is usually assumed by professionals without a strong legal background but with the operational and organisational skills to carry out project management activities and to add metatags in order to evaluate and re-engineer all the phases, ensuring quality and speed in the legal service. The first activity of a CLIO or legal integrator is the correct choice of the tool(s), the natural consequence of a proper evaluation of the most suitable software to make the process automatic. In the event the firm works in a niche and no specific tools can be identified, the CLIO or the head of the legal function may decide to invest in the development of new tools capable of speeding up the process, and to integrate these with software previously adopted by the firm.

If it is true that data are the main content of a contract, it is important to use a digital infrastructure that is not excessively fragmented across different tools, requiring continuous ‘jumping in and out’ of different software. On the contrary, a proper contract lifecycle management system should integrate and concentrate as much information as possible, minimising the effort required of the user.18
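The before/after assessment described above can be as simple as comparing cycle times per lifecycle phase. A minimal sketch follows; the phase names and all figures are invented for illustration only:

    # Hours spent per contract, per lifecycle phase, before and after
    # introducing a tool (figures are hypothetical).
    before = {"drafting": 6.0, "negotiation": 10.0, "signing": 2.0, "storage": 1.0}
    after = {"drafting": 1.5, "negotiation": 8.0, "signing": 0.2, "storage": 0.1}

    for phase in before:
        saved = (1 - after[phase] / before[phase]) * 100
        print(f"{phase:<12}{before[phase]:>5.1f}h -> {after[phase]:>4.1f}h ({saved:.0f}% saved)")

    print(f"total saved per contract: {sum(before.values()) - sum(after.values()):.1f}h")

Even a back-of-the-envelope calculation of this kind makes the cost of automation comparable with its benefit and gives the KPI assessment a shareable, quantitative form.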

III.  The Legal Tech Software as a New Intermediary

Susskind wrote about ‘breaking down legal activities’: we can break down (others would say ‘disaggregate’ or ‘unpack’) jobs into various tasks, each of which should be

18 Sweet Legal Tech is an Italian start-up offering consultancy and education services in the fields of legal innovation and legal technology, with the purpose of integrating efficiency and digital transformation in legal teams: www.sweetlegaltech.com.

done as effectively as possible.19 Legal tech software enables a re-intermediation, allowing resources, goods, and people to be organised in a new way, outsourcing the costs and risks of the activities to the ‘service provider’. Examples of this in the legal field are the Rocket Lawyer20 and LegalZoom21 platforms, which sell documents with the possibility of receiving assistance from selected lawyers. In this case, the model is similar to Uber or Airbnb: the choice of the software is the main driver in the selection of the lawyer, who is ultimately chosen through the software.

The main element common to the different software and models is the introduction of a new actor into the negotiation. Ironically, customers look for tools that would liberate them from hiring a lawyer, and end up using a platform that intermediates their actions. These software companies appear to be the new intermediaries of the legal profession. Each software has its own mask and filter that allows the automation of some tasks of contract drafting and execution: a) assessment of the problem; b) choice of the contractual model; c) customisation of the clauses; d) signature of the parties and execution of the final draft. The contractual models included in the software necessarily influence the behaviour and the expression of the common intentions of the parties. The introduction of this new actor, and its weight on the free expression of the will of the parties, may have an influence on the type and contents of the agreement the parties choose. On the other hand, the acquisition of large amounts of reserved data by contract automation companies may pose additional concerns about the control of the relationship by these new intermediaries.

Second, looking at the direct consequences for contracting, the question is whether it is possible to consider the software as a mediator or as a new intermediary. The problem is not totally different from the one that arises more generally in the platform economy (ie Uber, Airbnb, Amazon), where the business model of the platform is to facilitate the match and the conclusion of contracts between the users. Platforms operate by organising production resources more efficiently, exploiting data, algorithms, and connections. They are new organisational tools, organising resources indirectly, using technologies, but with a high level of control

19 R Susskind, Tomorrow’s Lawyers. An Introduction to Your Future (Oxford University Press, 2013).
20 Rocket Lawyer is a legal services website that provides access to a variety of legal help and connects customers with attorneys. The services are available to individuals and small businesses for a low fixed cost: www.rocketlawyer.com/.
21 LegalZoom is a provider of online legal solutions for families and small businesses and developed a technology platform that gives anyone access to professional legal advice: www.legalzoom.com/. See also E Hartman and C Rampenthal, ‘LegalZoom: Fighting the A2J Crisis’ in M Hartung, M-Manuel Bues and G Halbleib (eds), Legal Tech. A Practitioner’s Guide (Beck, 2018).

on the people and goods involved in the platform ecosystem. In some platforms, the core interaction22 between users consists of an exchange involving the conclusion and then the execution of a contract, and the question of the role and liability of the platform in relation to the contract concluded between the users is still open.

Platforms are attractive, as they allow agile access to a wide range of potential buyers and/or sellers of goods and services, with high matching possibilities created by algorithms – not feasible in a traditional market, or feasible only with great economic expenditure – thus offering reduced transaction and governance costs. In fact, transaction costs related to search, bargaining and execution are reduced. Promotion, marketing, and branding, but also negotiation, are largely enabled, determined and managed by the platform itself.23 Furthermore, the platform also plays a role with regard to issues concerning execution, through mechanisms such as reputation systems, complaint mechanisms, and alternative dispute resolution systems. The platform acts as a judge of disputes arising between users, deciding on customer complaints and, following the decision, directly implementing the consequent monetary transfers thanks to the preauthorisation provided by users at the time of the conclusion of the contract. In addition, the platform uses ‘quality standards’ in evaluating the ‘conduct’ of users who offer goods or services. The quality standards, described in the ‘terms of service’, determine the characteristics and quality of the ‘final service’ offered to users/customers, even in detail, including the maintenance of certain levels of customer satisfaction assessed through the reputational feedback systems, as well as the absence of complaints and disputes.

More generally, from the analysis of the contracts of eBay, Amazon, Uber, and Airbnb, it emerged that: 1) the service performed by users constitutes the fulcrum of the platform’s economic activity; 2) the platforms, through the definition of quality standards, determine (albeit to a varying extent) the type and quality of the services offered; 3) the performances are evaluated by the platform, with the possibility of termination of contracts, suspension of accounts or reduction of privileges where quality standards are not maintained. In view of the above, it does not seem feasible to affirm the neutrality of the platform with regard to the contract concluded between users.

In contract automation software, the aim is to facilitate contract drafting, negotiation, conclusion, and analysis. The difference is that here the interest is not in the single match and conclusion, but in the ‘contracting services’ as a whole.

22 GG Parker, MW van Alstyne and SP Choudary, Platform Revolution. How Networked Markets Are Transforming the Economy and How to Make Them Work for You (W W Norton & Company, 2016).
23 KA Bamberger and O Lobel, ‘Platform Market Power’ (2017) 32 Berkeley Technology Law Journal 1051; T Rodríguez-de-las-Heras Ballell, ‘Rules for Electronic Platforms: The Role of Platforms and Intermediaries in Digital Economy. A Case for Harmonization’ [2017] UNCITRAL 1, www.uncitral.org/pdf/english/congress/17-06783_ebook.pdf; V Hatzopoulos, The Collaborative Economy and EU Law (Hart Publishing, 2018).

However, the influence of the software can be significant, orientating the contract’s structure, clauses and contents. The first point to evaluate is the liability of the contract automation company and its qualification as a simple provider – with the application of all the more general rules and concerns regarding provider liability – or as an agent, mediator or intermediary, depending also on the software’s functions. Furthermore, from a de iure condendo perspective, the question emerges whether new regulation is needed, in particular regarding transparency and independence.

The independent third-party position of legal tech services suggests points of contact with mediation, which is characterised by the intervention of a third party, unrelated to the parties, who helps to enable or facilitate the conclusion of a deal, without any obligation to conclude contracts. Contracts for the use of contract automation software could therefore be considered mediation contracts, although different from the traditionally regulated hypotheses. De iure condendo, a new specific discipline, based on transparency, professionalism, and impartiality, could be introduced.

In addition to the figure of the mediator, other figures of ‘intermediaries’ are variously regulated by the different legal systems and only partially harmonised by EU law. Although these disciplines cannot be considered directly applicable, some ideas can be drawn from them to better understand the nature and function of contract automation software, as well as in view of future regulatory interventions. The European and national regulation of insurance intermediaries focuses on registration, professional and organisational requirements, transparency and information duties, and conflict of interest regulation. In financial intermediation too, European and national legislators establish transparency and fairness rules in relations with customers: publicity of the economic conditions relating to the transactions and services offered and the indicators that ensure transparency of information to customers, the possibility for supervisory bodies to determine criteria and parameters, information duties on advertisements, and also provisions relating to the form of contracts, the unilateral modification of contractual conditions, periodic communications to customers, and withdrawal and portability rights.

The absence of a conflict of interest is not sufficient, and often the intermediary role requires the independence of the operator. This is the case, for example, of the ‘credit mediator’, who pursuant to Article 128-sexies et seq. of the Italian T.U.B. is ‘the person who puts banks or financial intermediaries envisaged by Title V in relation, including through consultancy activities, with potential customers for the granting of loans in any form’. The credit mediator must carry out this activity exclusively and is required to carry it out ‘without being linked to any of the parties by relationships that could compromise his independence’. He must be registered in the appropriate lists, can be subject to inspections, and is jointly and severally liable for damages caused in the exercise of his activity by his employees and collaborators.

The rules on financial intermediation remain peculiar. Specific to these activities, however, are provisions aimed at responding to the need to protect independence and transparency, which can serve as inspiration for the regulation of contract automation software. As already mentioned, it is not only the intermediation role that raises challenges, but also the connected data processing. The contract automation company has a wealth of information to which the user does not have access but on which he relies. New rules may be introduced, regulating the transparency and independence of these new operators.

IV.  Compliance and Regtech Through Contract Software

As mentioned above, data enables new possibilities for analysing and comprehending the relevant information. Both the information and the software’s capability to extract and analyse information can be used not only by companies but also by legislators and regulatory authorities. ‘RegTech’ is defined as ‘the use of technology, particularly information technology, in the context of regulatory monitoring, reporting and compliance’, and it focuses on ‘the digitization of manual reporting and compliance processes’.24 It could also ‘enable a close to real-time and proportionate regulatory regime that identifies and addresses risk while also facilitating more efficient regulatory compliance’.

First, focusing on companies: they can use the data and the software itself to align their activities with the business goals they want to achieve, but also for compliance, to take the necessary steps to comply with laws, regulations and policies. Thanks to the use of AI, we are moving from a reactive, on-demand model to a predictive, fully automated system for the deployment of legal services. Compliance becomes the automatic result of the correct setting of a series of flows, processes and triggers that automatically produce actions such as sending emails, letters and documents. The legal advisor becomes the ‘automator’.

Second, the legislator or regulatory authorities can look at the developments in software and legal tech to introduce new rules applicable to the legal tech company (ie to prevent the use of certain clauses or to introduce a new model). To better understand the innovation and its potential, it is useful to extend the concept. Using coding, it is possible to introduce new rules into the software. The software will apply the new rules to all of its users, whenever the particular condition described is verified. This is the normal function of coding, but it can be used

24 DW Arner, JN Barberis and RP Buckley, ‘The Emergence of Regtech 2.0: From Know Your Customer to Know Your Data’ [2017] SSRN Electronic Journal.

also to introduce legal rules. The law employs technology, and new tools are used to regulate the conduct of all the individuals who use the software. The legal rule, even that deriving from the contract and from individual autonomy, becomes part of the technology itself.

This form of regulation is considered in European Regulation 679/2016 of 27 April 2016 ‘on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation (GDPR))’. Article 25 of the Regulation is dedicated to ‘data protection by design and by default’ and introduces an obligation for the data controller to implement ‘appropriate technical and organisational measures’ and to ‘integrate the necessary safeguards into the processing’ in order to meet the requirements of the GDPR and protect the rights of data subjects. As examples, the Regulation cites pseudonymisation and data minimisation. The measures shall be implemented ‘taking into account the state of the art, the cost of implementation and the nature, scope, context, and purposes of processing, as well as the risks of varying likelihood and severity for rights and freedoms of natural persons posed by the processing’, in line with the accountability principle set out in the GDPR. The obligation applies both at the time of the determination of the means of processing and at the time of the processing itself; it is therefore necessary to control the appropriateness of the measures for as long as the processing continues.

In addition, Article 25 of the GDPR introduces the ‘by default’ principle. It requires ‘appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed’, and such ‘measures shall ensure that by default personal data are not made accessible without the individual’s intervention to an indefinite number of natural persons’. An easy example for understanding this principle is consent to data processing given via a pre-ticked checkbox which the user must deselect to refuse consent: the default, pre-selected option shall be the one that most respects the rights of the data subject. The European Court of Justice (ECJ), although not naming the principle, applied it in Planet49,25 where, regarding consent to cookies, it noted that ‘the requirement of an “indication” of the data subject’s wishes clearly points to active, rather than passive, behaviour’. In particular, the ECJ specified that consent ‘given in the form of a preselected tick in a checkbox does not imply active behaviour on the part of a website user’.

The expression ‘by design’ is introduced by European Regulation 679/2016 for data protection (GDPR), but similar applications can spread to all areas of law (for example, consumer protection or cybersecurity) along with the greater diffusion and pervasiveness of the new tools and the possibilities they offer.26

25 Court of Justice, Case C-673/17 (Planet49).
26 EDPB, ‘Guidelines 4/2019 on Article 25 Data Protection by Design and by Default’, adopted on 13 November 2019.
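The ‘by default’ principle just described can be rendered directly in code. In the hypothetical sign-up handler below (all field names are invented), every optional processing purpose starts switched off, so that data are processed only after the user’s active intervention – the software equivalent of the unticked checkbox required in Planet49:

    from dataclasses import dataclass

    @dataclass
    class ConsentSettings:
        # Art 25(2) GDPR: optional processing purposes start disabled,
        # so data are processed only after the user's active intervention.
        marketing_emails: bool = False
        profiling: bool = False
        third_party_sharing: bool = False

    def register_user(email, opted_in=()):
        settings = ConsentSettings()
        for purpose in opted_in:
            # consent is recorded only for purposes the user actively selected
            if hasattr(settings, purpose):
                setattr(settings, purpose, True)
        # data minimisation: store only what the service strictly needs
        return {"email": email, "consent": settings}

    print(register_user("user@example.com"))                        # everything off by default
    print(register_user("user@example.com", {"marketing_emails"}))  # one active opt-in

The legal rule is not documented alongside the software here; it is the software: a user who does nothing is, by construction, in the most protective configuration.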

Another example of by-design regulation can be found in Directive (EU) 2019/2161 of 27 November 2019 ‘amending Council Directive 93/13/EEC and Directives 98/6/EC, 2005/29/EC and 2011/83/EU of the European Parliament and of the Council as regards the better enforcement and modernisation of Union consumer protection rules’. The Directive adds a new information duty for online marketplaces: ‘providers of online marketplaces should require third-party suppliers on the online marketplace to indicate their status as traders or non-traders for the purposes of consumer protection law and to provide this information to the provider of online marketplace’ (recital 28).

The duty, in this case, is not related to the content, and the platform does not have to verify the information, but only to require the supplier to specify it. Directive (EU) 2019/2161 also modified Article 7 of Directive 2005/29/EC, adding a new letter (f) to the list of omissions that are considered misleading practices. The same legislative technique can be used to regulate contract automation software. These are only two examples of the possibilities offered by the use of software for regulation.

The European Commission is also interested in ‘the role of modelling and how an efficient method for modelling contracts could facilitate the efficient functioning of the European economy as well as its measurement and regulation’. In this respect, it parallels enterprise efforts at ‘contract management’, ‘supply chain management’ and ‘digital ecosystem management’.27 In particular, the EU institutions want to have access to the data for better policy making:

In a world dominated by the internet and technology giants, a challenge for the European Commission and other policy and regulatory bodies is related to the inaccessibility of data and inconsistency of current indicators in supporting the legislative and decision activities in many different policy areas.28
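Returning to the marketplace information duty above, its ‘by design’ implementation can be sketched in a few lines (the names below are invented): the registration flow simply refuses to complete until the supplier declares its status, and the declaration is passed on to the consumer without any verification by the platform, exactly as the Directive requires:

    def register_supplier(name, is_trader=None):
        # The duty is discharged by requiring the declaration at registration;
        # the marketplace passes it on without having to verify it.
        if is_trader is None:
            raise ValueError("supplier must declare trader or non-trader status")
        return {"name": name, "is_trader": is_trader}

    supplier = register_supplier("Mario's Vintage", is_trader=False)
    status = "trader" if supplier["is_trader"] else "non-trader"
    print(f"Sold by {supplier['name']} ({status})")  # disclosure shown to the consumer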

In law we use typifications: situations which are on closer inspection heterogeneous are typified as being homogeneous. Big data, thanks to the granular vision it offers, ‘could fundamentally change the design and structure of legal norms and thus the legal system itself’: ‘Big Data and algorithm-based regulation could lead to a shift from impersonal law based on the widespread use of typifications to a more personalized law based on “granular legal norms” that are tailored to the individual addressees.’29 The personalised law enabled by data can be applied both to private and to public regulation.

27 European Commission and Exrade Srl, Modelling the EU Economy as an Ecosystem of Contracts (2020). 28 ibid. 29 C Busch and A De Franceschi, ‘Granular Legal Norms: Big Data and the Personalization of Private Law’ in V Mak, ETT Tai and A Berlee (eds), Research Handbook on Data Science and Law (Edward Elgar, 2018).


V.  Asymmetries and Clauses Evaluation

Asymmetries are frequent in contracts, and legislators – in Italy as in other European countries – have intervened on several occasions to protect the weaker party in the relationship, introducing information duties and regulating the relationship between the parties. Does the use of legal tech software for contracting improve the position of the weaker party? Does it increase the chances of negotiation or decrease them? Does it protect the weak or strengthen the strong? The technology is neutral, and it all depends on who uses it: such software could be used by the strong to their advantage. However, it is also possible that the software will help the weaker party.

Looking at the form in which information is communicated, software can help with the comprehension of a contract, ie by underlining relevant information, drawing attention to a specific clause, comparing clauses, or giving brief or extensive explanations accompanying a clause. Furthermore, by-design regulation can be introduced, as described in the previous paragraph regarding marketplace information duties: ie if the party is a consumer, compliance with consumer law can be assured through the automatic insertion of the standard information on consumer rights, or by highlighting a clause that seems vexatious.

In the second paragraph above, we already underlined the change in recipients. Contract automation software can be used by lawyers, legal departments and vendors, but also by consumers and citizens. This leads to a search for clarity and simplicity, to extend the number of potential users. Generally, software created for a final user tends to be customer-oriented, and the first need of the user here is to conclude and manage contracts quickly and easily. The language also reflects the new recipients by becoming as simple, intuitive and effective as possible. The search for new communication methods – the so-called ‘legal design’30 – responds both to the need for speed, which has become more urgent due to the overabundance of information to which we are exposed, and to the search for customer-oriented or customer-friendly solutions, already developed in other sectors and to which the user is progressively accustomed. The wishes and needs of users are more easily intercepted by new technologies and data, in order to provide, both at the communicative level and in the substance of the service offered, solutions that meet the requests and needs of the recipients.

30 See also S Martinelli, ‘Legal Design’ in G Ziccardi and P Perri (eds), Dizionario Legal Tech (Giuffré Francis Lefebvre, 2020); M Curtotti and H Haapio, ‘Interdisciplinary Cooperation in Legal Design and Communication’ (2015) 462 Co-operation. Proceedings of the 18th International Legal Informatics Symposium IRIS 455; R Yankovskiy, ‘Legal Design: New Thinking and New Challenges’ (2019) 5 Zakon 76; H Haapio and M Hagan, ‘Design Patterns for Contracts’ (2016) 388 Internationales Rechtsinformatik Symposion IRIS 8, www.lexpert.com; M Hagan, ‘Design Comes to the Law School’ [2020] Modernising Legal Education 109; H Haapio, DA Plewe and R de Rooy, ‘Next Generation Deal Design: Comics and Visual Platforms for Contracting’ (2016) 380 Internationales Rechtsinformatik Symposion IRIS 8.

A further interesting tool offered by contract drafting software is the possibility of giving an evaluation of a clause, which can be represented through the use of colours, such as traffic lights (green or red), or with a score, as in the reputational feedback systems used on platforms. The evaluation can be internal, referring to the single company, or collective, drawing on the suggestions of all the users of the software. This could lead to legal production itself being evaluated by its users, like any other product. Qualified evaluators, such as consumer or trade associations or authorities, could also be included. Finally, the data collected by the software offers new possibilities to look into contract relationships and can be analysed to identify new weaker parties or new unfair clauses.
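A much-simplified version of such a traffic-light evaluation might look as follows; the clause names, the 1–5 rating scale and the colour thresholds are invented for illustration, and a real system would of course weight and validate feedback far more carefully:

    from statistics import mean

    def traffic_light(ratings):
        """Map collective 1-5 user ratings of a clause to a colour."""
        if not ratings:
            return "grey"      # no feedback yet
        average = mean(ratings)
        if average >= 4:
            return "green"     # widely accepted wording
        if average >= 2.5:
            return "amber"     # negotiate with care
        return "red"           # frequently contested, possibly vexatious

    clause_feedback = {
        "limitation of liability": [5, 4, 5],
        "unilateral price revision": [1, 2, 1, 2],
    }
    for clause, ratings in clause_feedback.items():
        print(clause, "->", traffic_light(ratings))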

VI. Conclusion

The use of legal tech software in contracting, automated contract drafting and contract management is changing contracts, and it will impact contract law. The revolution is the application of data analysis and data management to legal work and contracting practices. As in other markets, platforms and software creators are emerging as new intermediaries in the relationship between users/parties, simplifying and governing their relations. The use of this software multiplies data processing and accessibility in contracts and contract practices, and these data, as well as the software/platform architecture itself, can be used to manage the relationship: to monitor, analyse, regulate and govern contracts.

Finally, observing this phenomenon from the standpoint of contract practices, it is clear that the use of such software can influence contractual content and usages, probably also promoting a broader circulation of usages and models between countries. The applicable law and the language used are an obstacle to a wide diffusion of contract automation software in Europe, compared with the US, where the potential number of users in the market is significantly larger than in any single European Member State. The effort spent in the creation and evolution of the software becomes more profitable where the level of complexity can be managed with low effort and the number of potential users increases. On the other hand, the diffusion of contract automation software can also boost and stimulate the circulation of laws and contracts between countries. As happens more generally with data and artificial intelligence, the evolution of legal tech in Europe could lead to the import of models coming from non-European states, or it can be an opportunity to develop European technologies to improve the European economy and to safeguard European businesses’ data. The differences between the contract laws of the Member States, as well as the different languages, are a concrete obstacle, but the EU can be a promoter of new responsible but smart models, and legal tech could also be an opportunity to redefine European law, models, and principles for contracts.

PART II
Drafting, AI Tools for Contracting and Contract Analysis, Management


6
Legal Tech Solutions for the Management of the Contract Lifecycle
GIULIO MESSORI

I.  Introduction and Definitions

Relationships between individuals have always been governed by contracts. Over time, these relationships changed shape, necessitating an upgrade in the means through which contracts are created, reviewed, signed, amended, discussed and archived. Most importantly, contracts have become documents that need to be handled not only by lawyers but by multiple parties – even from non-legal backgrounds. This chapter will investigate some of the ways in which contracts are managed today thanks to the use of technology. To this end, several tech providers and their features will be examined in order to understand which functionalities are the most widely used.

Two premises before starting. First, a few years ago the trend was to develop software that would address (and solve) one of the necessary phases in the management of the life span of a contract (the Contract Lifecycle). Technologies able to provide solutions to just one of those phases can be referred to as point solutions. On the contrary, as technology evolved, so did companies operating in this field, integrating more and more with each other. Legal tech providers today increasingly tend to qualify as Contract Lifecycle Management systems (CLMs), ie they tend to provide solutions for the entire spectrum of phases:

The term ‘contract lifecycle management’ refers to applications used for managing contracts from initiation through ongoing management and eventual renewal or termination. CLM solutions manage any legal documents containing obligations that affect an organization.1

1 See Gartner, Magic Quadrant for Contract Life Cycle Management, February 2020, available for download at: www.gartner.com/en/documents/3981321/magic-quadrant-for-contract-life-cycle-management.

The chapter will then follow a structure intended to explore the technologies and functionalities dealing with the phases of the Contract Lifecycle, namely: contract drafting (section II), contract review and negotiation (section III), e-signing (section IV), contract storage and post-signing management (section V) and contract analysis through AI and machine learning (section VI). A dedicated appendix (section VII) will lastly focus on the other available possibilities for accessing Contract Lifecycle Management and legal tech systems, other than third-party providers and outsourcing.

Second, we consider point solutions and CLMs as part of the wider category of legal technologies, which for the purpose of this chapter will be understood as follows:

Digitising, automating, streamlining and simplifying are activities that should all serve one ultimate goal: to make administrative and repetitive tasks less burdensome and to allow lawyers to focus on high-value legal activities.

II.  Drafting Phase

The drafting phase is the first that one or both of the parties must contemplate when initiating the Contract Lifecycle. There are three ways to draft a contract nowadays: the first is to write it from scratch using a text editor; the second is to search for a template or similar document (which can then be duplicated in its most relevant parts or in its entire structure); the third is to use contract drafting systems.

With regard to the first point, it is beyond doubt that text editors – still – host the majority of contracts drafted by lawyers worldwide, and since the advent of those technologies, lawyers have been linked to the most successful solution in that area: Microsoft Word. But ever since the first contracts were written with Word, users have been aware of one crucial element: contracts consist, depending on the purpose of use, of a certain set of provisions that remain fixed, while a number of other provisions can be seen as variable. Probably no lawyer today writes a lease from scratch; he or she will most likely look up the last similar lease written for a client, drawing on his or her library of templates, or get an idea of what structure to give the lease by surfing the internet.

2 G Messori, 'Legal Tech' in G Ziccardi and P Perri (eds), Dizionario Legal Tech. Informatica Giuridica, Protezione dei Dati, Investigazioni Digitali, Criminalità Informatica, Cybersecurity e Digital Transformation Law (Legal Tech Dictionary. Legal Informatics, Data Protection, Digital Investigations, Computer Crime, Cybersecurity and Digital Transformation Law) (Giuffre Francis Lefebvre, 2020) 584.


A.  Contract Marketplaces and Contract Outsourcing

The idea of drafting a contract by starting from a template is so well established that contract marketplaces and outsourcing platforms have begun to flourish. Examples are Rocket Lawyer3 and LexDo.it,4 popular websites – and companies – where users can explore an entire catalogue of templates, understand the purpose of each contract (eg a Non-Disclosure Agreement), answer a few simple questions and then get access to the template for a limited price (compared to lawyers' fees or hourly rates). Within these platforms, contracts are sold as products. The distinction between variable and fixed elements of the contract is crucial to these business models, as users will only rarely need a completely handcrafted lease agreement; in all other cases they can limit themselves to completing the few variable elements that are relevant to their case, while all other clauses will be the standard ones provided for this type of contract. These companies are part of the 'do it yourself' culture, a trend that implies a clear message: people who need to handle contracts do not necessarily need a lawyer. Instead, they can do much of the work by themselves thanks to these types of solutions. In this regard, other clear examples are LegalZoom5 or, when it comes to privacy policies and terms and conditions, Iubenda.6 Moreover, the most premium plans of contract marketplaces include a number of monthly hours with a dedicated lawyer to craft the clauses that need to be adapted on a case-by-case basis. However, most of the time, looking at their terms and conditions, contract marketplaces advance liability waivers stating that the service 'does not correspond to legal assistance', thus falling into the realm of mere consultancy, where it is the client's responsibility to adapt the object and structure of the contract to the real use case.

Last but not least, more and more legal teams and lawyers believe that efficiency begins with a solid organisational base. Hence, another important feature when it comes to template selection is internal template management, or knowledge management. A professionally organised template library accelerates template selection and the subsequent phases of the Contract Lifecycle. This can be done by: i) assigning specific tags and metadata to each template; ii) organising templates into categories and strategic folders.

B.  Contract Drafting Solutions

The distinction between variable and fixed elements is also the one used by contract drafting solutions (also referred to as Document Assembly) to make the contract a 'fillable' and intelligent document, able to adapt to conditional logic according to the case.

3 See www.rocketlawyer.com/.
4 See www.lexdo.it/.
5 See www.legalzoom.com/.
6 See www.iubenda.com.

This is what happens with third party providers like Contract Express by Thomson Reuters,7 Juro8 or Bigle Legal.9 Within those solutions, the user is asked to identify the variable elements of the contract, which are then replaced by a simple but specific code syntax or code placeholder. As an example, the parts (words) of the contract which need to contain the name of the counterparty could be replaced, manually or automatically, with a variable like {counterparty_name}. The user will then have to set up the information that allows the placeholders to be filled, and this is typically done through the preparation of questionnaires. To the question 'What is the name of the counterparty?', the user will provide an answer (eg Acme S.p.A.) that will be entered wherever the solution recognises the placeholder {counterparty_name}. 'Name of counterparty', 'place of jurisdiction' and 'duration of contract' are some of the simple elements of the contract that can be replaced automatically with words, numbers, sentences or even multiple choices. These drafting functionalities also allow entire clauses to be inserted, depending on the answer given in the initial questionnaire, and worded in different ways. An example is an exclusivity clause in a licensing agreement: the user, by choosing the answer to the question 'Is the contract exclusive or non-exclusive?', instructs the software to include in the text of the contract – where the placeholder is inserted – a clause designed for that specific answer (a minimal sketch of this mechanism is given at the end of this section). Some contract drafting solutions also allow the insertion of pre-approved clauses from a clause library,10 while others provide a drafting assistant that cross-checks citations, formatting and references in other documents and analyses defined terms.

The use of contract drafting technologies can be useful for a variety of reasons. First, consistency: reviewing and controlling fixed and variable elements right from the drafting phase, and structuring questionnaires to compile them within contract drafting solutions, allows a high degree of control by the legal professionals who designed the document structure and clauses. Second, the economic relationship between the parties involved is protected thanks to the greater uniformity of a contractual text that has been consolidated upstream by those who engineered the drafting process within the solution. Third, the questionnaire structure allows non-legal profiles to 'fill in' the contracts with little effort, guided by the explanations of those who prepared the questionnaires. It is for this reason that the rhetoric and sales slogans of providers in this field go beyond addressing only companies' legal teams, extending also to procurement, finance and sales departments.

7 See mena.thomsonreuters.com/en/products-services/legal/contract-express.html.
8 See juro.com/.
9 See www.biglelegal.com/.
10 See www.clausebase.com/.

In fact, it is said that contracts are everywhere, because they govern relationships between people; the solution providers in this field believe that 'anyone should be able to fill them out'. One last element: structuring contracts with these types of solutions involves a number of additional skills beyond simply using Microsoft Word or buying pre-filled contract templates. The use of contract drafting solutions implies that there should be professionals able to 'prepare' the work for others who need to assemble documents as quickly as possible. Those experts need to understand the law while being able to engineer a contract correctly for the contract drafting solution's process. Again, professionals who use these technologies will need to understand which terms of the contract are variable and which are fixed, and be able to insert the aforementioned code placeholders. Some contract drafting solutions allow placeholders to be inserted automatically via specific commands, while others require the user to insert specific text. This requires learning a certain syntax and a certain degree of change management in approaching the required skills.
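To make the mechanism just described more concrete, the following is a minimal sketch, in Python, of how a questionnaire-driven document assembly engine might fill placeholders such as {counterparty_name} and select a whole clause from a small clause library depending on a multiple-choice answer. The placeholder syntax, clause texts and field names are purely illustrative and do not reproduce any specific provider's implementation.

    # Illustrative toy document assembly engine (not any provider's actual syntax).
    TEMPLATE = (
        "This Licence Agreement is entered into with {counterparty_name}.\n"
        "{exclusivity_clause}\n"
        "Jurisdiction: {place_of_jurisdiction}. Duration: {contract_duration}."
    )

    # Pre-approved clause library: whole clauses selected by a questionnaire answer.
    EXCLUSIVITY_CLAUSES = {
        "exclusive": "The licence granted hereunder is exclusive.",
        "non-exclusive": "The licence granted hereunder is non-exclusive.",
    }

    def assemble(template: str, answers: dict) -> str:
        """Fill the variable elements; the fixed provisions stay untouched."""
        values = dict(answers)
        # A conditional clause is itself resolved from a questionnaire answer.
        values["exclusivity_clause"] = EXCLUSIVITY_CLAUSES[answers["exclusivity"]]
        return template.format(**values)

    answers = {  # collected through the questionnaire
        "counterparty_name": "Acme S.p.A.",
        "exclusivity": "non-exclusive",
        "place_of_jurisdiction": "Milan",
        "contract_duration": "24 months",
    }
    print(assemble(TEMPLATE, answers))

The point of the sketch is the separation of roles described above: a legal professional engineers the template and the clause library once, while a non-legal user only answers the questionnaire.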

III.  Review and Negotiation

Once the contract has been compiled (manually, through an ad hoc legal tech solution, or bought via a contract marketplace), the parties will in some cases need to proceed to a phase of discussion, review and amendment of the provisions initially proposed. As opposed to the drafting phase, the review and negotiation phase is primarily concerned with a problem of logistics. As a matter of fact, negotiation nowadays typically involves the combined use of calls, instant messaging and the sending via email of static .docx files, to which the counterparty is asked to add revisions and comments that are then sent back. This is true both for internal approval or review of the contract by colleagues and for external negotiation with the counterparty's organisation. This is the first problem of modern negotiation (the back-and-forth problem): with these modalities, it is often impossible to keep a tidy track record of the interactions that happened between the parties. In addition, while redlining and adding comments, static .docx files are duplicated, creating multiple versions renamed by date or by reviewer (the versioning problem). This creates an unreasonable loss of time, or even blocks, in what should be a streamlined process. The advent of simultaneous collaboration on documents has partially solved this problem.

11 See www.microsoft.com/microsoft-365.
12 See www.google.com/docs/about/.

Thanks to providers like Microsoft 365,11 and Google Documents,12 stakeholders can send their contract via link to the counterparty and restrict access to view-only or commenting privileges. This definitely solves the problem of duplicates, but it does not make it possible to export and attach to the original file a proper audit of the document, even though collaborative documents allow the document's modification history to be traced. Current legal tech systems that offer negotiation modules are able to provide audit trails. This means that it is possible to reconstruct (and preserve, by exporting a .pdf file for the purposes of eventual subsequent litigation) all the actions and modifications that took place on the document and in the negotiation process. Furthermore, these systems allow communications to be sent to the other party directly within the application and, in some cases, allow for the retention of communications (eg emails), an equally useful aspect in case of litigation.

The problem of tracking changes to the document affects not only external counterparts but also internal ones within the same organisation, as the final approval process of a contract, especially in larger companies, sees changes being made, or at least checked, by multiple parties. In those cases, it is crucial to allocate responsibilities and keep track of the decision-making process that led to a certain choice within the text. Examples of the aforementioned features can be found in Juro,13 a contract automation platform that allows users to define approvers, signatories or other simple recipients (who may only need to see the contract) as the different stakeholders involved, while keeping track of the different versions of the document and of activities such as who viewed the contract and who made changes and comments; this results in a visual timeline, allowing each party to understand the path followed in the creation and negotiation of the contract. Other tech providers such as Bigle Legal,14 LawGeex,15 BlackBoiler16 and Thought River17 also provide negotiation modules.
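By way of illustration, the audit-trail idea can be sketched in a few lines of Python: every action on the contract is recorded in an append-only log that can be exported (in a real system, rendered to .pdf) for the purposes of eventual litigation. The event names and fields are hypothetical; production negotiation modules add authentication, tamper-evidence and richer metadata.

    import json
    from datetime import datetime, timezone

    class AuditTrail:
        """Append-only record of actions performed on a contract (illustrative)."""

        def __init__(self, contract_id: str):
            self.contract_id = contract_id
            self.events = []

        def record(self, actor: str, action: str, detail: str = "") -> None:
            self.events.append({
                "contract_id": self.contract_id,
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "actor": actor,
                "action": action,  # eg 'viewed', 'commented', 'redlined', 'approved'
                "detail": detail,
            })

        def export(self) -> str:
            # Exportable evidence of the whole negotiation history.
            return json.dumps(self.events, indent=2)

    trail = AuditTrail("NDA-2021-042")
    trail.record("alice@buyer.example", "redlined", "clause 7.2: liability cap lowered")
    trail.record("bob@seller.example", "viewed")
    trail.record("bob@seller.example", "approved")
    print(trail.export())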

IV. e-Signing

Once the contract has been created and all the crucial aspects of the text have been approved, reviewed and negotiated to define the final version, the next step is the signing phase. Even today, in some countries, signature methods remain mainly linked to the use of wet signatures, with the normal process for the signing of a contract being structured as follows: one party sends the contract to the other, who prints it, signs it by hand, scans the signed document and emails it back to the first party, who repeats the whole process in order to obtain a fully executed version.

13 See juro.com/legal.
14 See www.biglelegal.com/.
15 See www.lawgeex.com/.
16 See www.blackboiler.com/.
17 See www.thoughtriver.com/.

The alternative to this type of process is the use of electronic signature (e-Signature) systems,18 a term covering the whole range of solutions that aim to create signatures in electronic format, from the application of a simple photograph of a signature to a .pdf document up to systems that identify the subject and apply to the document a computer code corresponding to the signer's signature and identity certificate. The problem of the use of e-Signature systems is two-fold. First, the legal validity of electronic signature standards varies from country to country. Second, the use of e-Signatures depends on the type of contract or legal act that must be signed: some countries, for example, require higher technological standards for signatures when it comes to documents like deeds. It is therefore important to remember that the whole point of the use of e-Signatures revolves around the possibility of assimilating an electronic signature to a wet signature in terms of legal validity and enforceability in court.

With regard to the first point, throughout Europe electronic signatures are regulated by Regulation (EU) No 910/2014 (eIDAS Regulation),19 the European Regulation governing electronic identification and trust services for electronic transactions in the European market, replacing the previous Directive 1999/93/EC. The eIDAS Regulation defines the types of signatures provided for and usable in the EU, namely: the simple electronic signature (SES), the advanced electronic signature (AES) and the qualified electronic signature (QES).

The SES is not subject to any special requirements for its establishment: the eIDAS Regulation leaves the determination of its legal effects to the national law of the individual countries. The AES is instead the result of a validation process and is suitable to guarantee the identity of the signer and the integrity of the document on which it is affixed. In order to guarantee the certified identity of the owner, it is necessary to use a recognition process that can be implemented by the service provider itself or by a Qualified Trust Service Provider (QTSP). The advanced electronic signature has the effectiveness of a wet signature and integrates the written form ad substantiam, that is, the form necessary to give substance to a contract.

A QES must be based on a qualified certificate and must be generated using a secure device. As with the AES, a procedure is required to ensure a unique connection with the signer. In addition, this type of signature must comply, from

18 The digital signature technology was invented in the 1970s as a system to solve the problem of sending secure, unmodifiable messages. The use of this technology for signing took off only about ten years later.
19 See Regulation (EU) No 910/2014 of the European Parliament and of the Council of 23 July 2014 on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC.

a technological point of view, with the standard protocols defined by ETSI (the European Telecommunications Standards Institute), must be created by suitable means over which the signatory can retain exclusive control, and must be linked to the data to which it refers in such a way as to allow detection of any subsequent changes. As for the AES, the probative effectiveness of the private writing integrates the written form ad substantiam, but in this case there is a reversal of the burden of proof: in case of disavowal of the signature on a document, it is the party contesting the affixing of the signature who must prove the point.

The premise above is fundamental when evaluating the functionalities of e-signing solutions. To date, the providers of electronic signatures on the market are countless, with the most renowned ones on a global scale being DocuSign,20 HelloSign,21 AdobeSign22 and PandaDoc.23 The functionalities offered by these providers are more or less the same: all of them start from a basic plan offering Simple Electronic Signatures (SES). This is also the easiest business model to implement, since most of these players are US companies and therefore face less stringent signature standards when it comes to identification and security (compared to the EU and what has already been said on the eIDAS Regulation). Most of the providers also offer AES plans, where the signer has to give proof of identity before proceeding to the signature (eg by showing his or her identity card to the camera so that the system can attach the ID to the contract and provide an additional level of identification24). However, the most popular solution is to offer an 'enhanced' SES, where the signer is required to log in with a unique email address, receives a one-time password (OTP) to sign, and has his or her IP address registered. Electronic signature providers thus generate a certificate that collects this information and is attached to the signed document as future proof of the signer's identity.

The above-mentioned providers are not the only ones to make signature systems available. As explained in the introduction to this chapter, the tendency is increasingly to move away from point solutions towards covering the entire spectrum of features. Two possibilities can therefore be found on the market today: i) CLMs that offer signature systems integrated with the original drafting and negotiation functionalities; ii) CLMs that integrate with the most famous electronic signature providers via Application Programming Interfaces (APIs). It is important to consider all of these capabilities in light of the type of documents that need to be signed in day-to-day business, assessing the required signing standards (eg SES, AES, QES) and especially evaluating the integration of these signature systems within Contract Lifecycle solutions.
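The 'enhanced' SES flow just described can be pictured as follows (a Python sketch, illustrative only: real providers rely on audited identity checks and certificate formats, and nothing here amounts to an eIDAS-compliant signature). The provider binds a hash of the final document to the signer's verified email, the OTP check and the registered IP address, and attaches the resulting record to the signed file.

    import hashlib
    import secrets
    from datetime import datetime, timezone

    def enhanced_ses_certificate(pdf_bytes: bytes, signer_email: str,
                                 signer_ip: str, otp_entered: str,
                                 otp_sent: str) -> dict:
        """Illustrative 'enhanced SES' evidence record, not a qualified signature."""
        if otp_entered != otp_sent:
            raise PermissionError("OTP verification failed")
        return {
            "document_sha256": hashlib.sha256(pdf_bytes).hexdigest(),
            "signer_email": signer_email,  # unique login used for the session
            "signer_ip": signer_ip,        # registered as further evidence
            "otp_verified": True,
            "signed_at": datetime.now(timezone.utc).isoformat(),
        }

    otp = f"{secrets.randbelow(10**6):06d}"  # one-time password sent by email or SMS
    certificate = enhanced_ses_certificate(
        b"%PDF-1.7 ... final contract ...",
        "ceo@acme.example", "203.0.113.7",
        otp_entered=otp, otp_sent=otp,  # assume the signer typed it correctly
    )
    print(certificate)  # attached to the signed document as proof of identity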

20 See www.docusign.com/.
21 See hellosign.com.
22 See acrobat.adobe.com.
23 See www.pandadoc.com/.
24 See www.docusign.com/how-it-works/electronic-signature/digital-signature.


V.  Storage and Post-signing Management

Once the contract is drafted, approved, negotiated and signed, it must also be stored. The storage phase should be considered as important as the others and builds on the assumption, already anticipated above, that the contract is a dynamic document. Following a fairly well-established trend of document dematerialisation, and where there is no paper-based document management, contracts are stored in cloud systems. This is true for private clouds, hybrid clouds and even mainstream SaaS systems such as Google Drive,25 Dropbox,26 Box,27 etc. Storage within these systems amounts to completely static archiving, ie the system is not able to recognise the provisions within the contract that may require constant monitoring, such as the terms of termination of the contract.

Some specific legal tech systems, on the other hand, provide for the possibility of extracting and keeping available contract metadata: key elements of the contract that can – and must – be monitorable even without opening the document or looking for the specific clause within the file (for example, as mentioned, the date of signing, expiry date, renewal date, jurisdiction, etc). An example of a company that implements this type of feature is SpeedLegal,28 a San Francisco-based startup which provides a dashboard view of all contract expiration dates and identifies upcoming renewals with an alert system.29 SpeedLegal also allows for the automatic organisation and storage of documents by recognising the content of each document and creating automatically renamed folders ('Smart Folders'). In general, however, Contract Lifecycle Management systems offer a much more structured filing system than the normal storage systems just highlighted, providing the ability to find relevant and critical clauses when needed, without classic manual searches. The most advanced storage features include version tracking, tagging according to the document and the necessary parameters, and reporting activities that can help in the management of large volumes of stored documents.



25 See drive.google.com.
26 See www.dropbox.com.
27 See www.box.com.
28 See speedlegal.io/.
29 See speedlegal.io/platform/reminders-expiring-renewals.

Moreover, after a contract is drafted, signed and stored in a database, another essential CLM feature to be taken into account is the auditing activity. As a matter of fact, contracts can be terminated, renewed or renegotiated: there is then the need to follow up on each party's rights and obligations and to implement modern monitoring and alert functions. In particular, those monitoring functions should be kept in one dedicated, single platform or section of an organisation's workspace, for three main reasons: i) increasing alert efficiency, by allowing team members to assign alerts to other team members of the organisation within the contracting platform; ii) risk diversification, by not losing track of upcoming actions when a person leaves the company; iii) increasing trust and loyalty towards other parties, as missed renewals mean lost opportunities to continue a relationship and lost revenues. Last but not least, another essential element is the reporting activity: typically, general counsel need reports on the number and types of contracts in storage and need to know who accessed and made changes to them (a sketch of such metadata-driven monitoring follows below).
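A minimal sketch of such metadata-driven monitoring might look as follows (Python; the metadata fields are hypothetical stand-ins for what systems like those described above extract). Each stored contract carries its extracted metadata, and an alert routine scans the repository for renewal windows that are about to open, assigning each alert to a team member.

    from datetime import date, timedelta

    # Hypothetical metadata extracted from stored contracts.
    contracts = [
        {"id": "MSA-017", "expiry": date(2022, 3, 1), "renewal_notice_days": 60,
         "jurisdiction": "Italy", "owner": "g.rossi@corp.example"},
        {"id": "NDA-042", "expiry": date(2023, 1, 15), "renewal_notice_days": 30,
         "jurisdiction": "UK", "owner": "j.smith@corp.example"},
    ]

    def upcoming_alerts(repository, today):
        """Yield contracts whose renewal window opens within the notice period."""
        for c in repository:
            window_opens = c["expiry"] - timedelta(days=c["renewal_notice_days"])
            if window_opens <= today <= c["expiry"]:
                yield c["id"], c["owner"], c["expiry"]

    for contract_id, owner, expiry in upcoming_alerts(contracts, date(2022, 1, 10)):
        print(f"Alert for {owner}: {contract_id} expires on {expiry}")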

VI.  Analysis Through AI & Machine Learning

The contract analysis phase is a step in the Contract Lifecycle that sits alongside the linear succession of phases described so far. Document analysis systems are able to intelligently understand the content of documents, understand content placement, extract the necessary information (the metadata mentioned earlier in the storage phase), and provide specific overviews in a short time. The main use case for these systems is M&A transactions, where lawyers are typically involved in the analysis of large volumes of documents for each of the parties and need to spend a lot of time reviewing specific clauses in each of them. Other examples of data extraction through such systems include automatic renewals, assignment rights, governing law and terminations.

The strengths of these systems are multiple. First, the ability to visualise information, thanks to the use of dashboards where the user is able to 'unpack' the information of large volumes of documents and to retrieve the information needed with a degree of granularity that reaches even the smallest clause of a document set consisting of thousands of documents. Second, the ability to launch advanced analysis on documents thanks to the use of Machine Learning (ML), as opposed to rule-based systems: the former can be defined as a statistical system that is capable of learning the relationships between different words and concepts and of finding where those concepts appear in different documents; the latter as a search system that looks in documents for certain keywords, or combinations of words in a certain order.

The first project and now well-known industry-leading company in this field is Luminance,30 which provides a very clear explanation of the use of ML and of the difference between unsupervised and supervised machine learning:

Unsupervised machine learning does not require any a priori knowledge of labels or tags in order to solve a problem. Instead, the machine is exposed to vast datasets and by analysing these in their entirety, the underlying patterns of the data are revealed. The system quickly builds up a pattern of what is normal – and thus what is abnormal. Only unsupervised machine learning, which does not rely on understanding documents in relation to given labels, surfaces the 'unknown unknowns' – issues that the reviewers did not know existed and thus never searched for them or labelled them, but that nonetheless present as a significant anomaly.31

Within supervised machine learning, the system is exposed to examples of data that are described and defined. The system forms an understanding of what makes up each classification, and can apply these to new datasets that it sees.32

Systems like Luminance need to be trained continuously, so that the platform, starting from a base of unsupervised machine learning, can increasingly learn from the choices the legal professional makes when interacting with the system (supervised machine learning). Other examples of technologies in this field are Kira Systems33 and Eigen Technologies.34
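The contrast between rule-based search and machine learning drawn in this section can be illustrated with a short, hedged sketch using Python and the open-source scikit-learn library (our choice for the illustration; none of the providers mentioned discloses its implementation). A rule-based review looks for fixed keywords, whereas an unsupervised model vectorises the clauses and flags statistical outliers without anyone having labelled them; on a toy set of four clauses the statistics are of course not meaningful, and real systems train on thousands of documents.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.ensemble import IsolationForest

    clauses = [
        "This agreement is governed by the laws of England and Wales.",
        "This agreement is governed by the laws of Italy.",
        "This agreement is governed by the laws of France.",
        "The supplier may unilaterally change prices at any time without notice.",
    ]

    # Rule-based: a search system looking for certain keywords.
    keyword_hits = [c for c in clauses if "governed by" in c.lower()]

    # Unsupervised ML: vectorise the clauses and flag statistical outliers.
    vectors = TfidfVectorizer().fit_transform(clauses)
    flags = IsolationForest(random_state=0).fit_predict(vectors)  # -1 marks an outlier
    anomalies = [c for c, f in zip(clauses, flags) if f == -1]

    print("keyword hits:", len(keyword_hits))
    print("flagged as anomalous:", anomalies)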

VII.  Choosing the Right Solution. Outsourcing, Custom Build Applications, No Code Solutions and Enterprise Marketplaces

The choice of the most suitable legal tech solution is a time-consuming activity that requires several elements to be taken into account.

A. Outsourcing

In this chapter we have had the opportunity to analyse solutions available on the market and accessible via an outsourcing model. For these types of systems, there are a number of key aspects to take into account when considering their integration (in addition to the type of functionality), namely:

30 See www.luminance.com/.
31 See www.luminance.com/technology.html.
32 ibid.
33 See kirasystems.com/.
34 See eigentech.com/.

i) the organisation that is going to use the technology, ie a law firm, an in-house legal department or another type of department – this is crucial, as some solutions available on the market have been designed with a precise target user and specific functionality in mind; ii) the people in the team who are going to use the technology, who will act as enablers for all other colleagues; iii) the digital competencies of the people involved – where these are low, some players provide change management plans, ie a structured methodological approach to the period of time required to allow the workforce to gradually acquire new skills and thus 'change'; iv) the processes currently in place within the legal team that wishes to implement the technology; v) the other existing technologies with which it may be necessary to interface; vi) the integration possibilities of each legal tech solution; vii) the jurisdiction of the software provider and an assessment of the applicability of the designed features to the user's legal system (eg a solution designed in the UK may provide features for law firms operating in a common law system that are not suitable for civil law systems); viii) the deployment model of the legal tech solution, namely cloud, on premise or a hybrid of the two.

B.  Custom Build Solutions

Some lawyers and companies have decided not to follow the path of outsourcing but to use internal, strategically important use cases to invest in and build their own legal tech solutions. Recent examples include the law firm Reed Smith LLP, which created the Gravity Stack project first within the firm itself and then as a separate company. Gravity Stack now offers contract-applied AI solutions,35 a knowledge management system for the legal industry36 and Periscope™,37 an eDiscovery platform. The process of creating custom software is interesting because users and investors can have a clear understanding of the initial need. The risk, on the other hand, lies in the investment required and in the testing of the solution: lines of code may be written for a product that turns out to be incomplete or not 'usable' by a lawyer.

C.  No Code Builders

The intermediate solution between outsourcing and custom-built software is the more recent trend of Low Code/No Code builders. Some law firms and corporate legal departments have decided to rely on solutions such as BRYTER,38 which provides a platform to easily build digital solutions and automate complex services by connecting components without coding, and to deploy and test them in a small amount of time compared to hours of custom code development.



35 See gravitystack.com/contract-intelligence/#diligence.
36 See gravitystack.com/legal-technology-consulting/#insight.
37 See gravitystack.com/litigation-investigation/#periscope.


D.  Enterprise Marketplaces

The last interesting way to access and deliver legal technologies is through app marketplaces and the clustering of legal tech solutions. The best example in this area is Reynen Court.39 Law firms and corporate legal departments can run and self-manage Reynen Court, a platform deployable on premises or in a virtual private cloud, where they can subscribe to a full range of legal tech applications depending on their needs, just as app stores have been designed for smartphone apps. Reynen Court also lets the final user manage subscriptions and provisioning from one place, provides telemetry and enhanced interoperability between applications, and supports a decentralisation of IT sourcing, purchasing and knowledge sharing.40

VIII. Conclusions

The Association of Corporate Counsel (ACC) conducted its 2021 survey of Chief Legal Officers (CLOs), asking a sample of Corporate Counsel members who were planning to integrate legal technologies in their department which technology they were thinking about or willing to implement in the next 24 months. Sixty-seven per cent of respondents said 'Contract Management', 41.4 per cent said 'Document Management' and 33.5 per cent said 'eSignature'.41 It is interesting to note that, when it comes to legal technologies, the major interest in the market revolves around the Contract Lifecycle. However, it must also be noted that, as of the time of writing, the market is driven by the motto that change is a necessity, and that change must be accompanied by technology. Some market studies refer to this period in the exercise of the legal practice as the 'Wild West':

We are clearly somewhere in the period which could be considered the 'Wild West'. Massive opportunities, great advancements, but very rarely there is a clear direction or guiding principle other than 'more tech is good' […] Also, the fragmentation of internal procedures has often led to systems trying to work around those processes, leading to disjointed data flows and struggling with low levels of user adoption.



38 See bryter.com.
39 See reynencourt.com.
40 See reynencourt.com/platform.
41 See Association of Corporate Counsel, Chief Legal Officers Survey (2021).


So, in addition to the timely exploration of the capabilities of legal tech, in the writer's opinion legal professionals need to be able to look at their work from a process perspective first, in order to understand how and where to change their business with technology. And, as has been said, where there is change there is always the need for the management of that change, with the inclusion of new skills for a job that until the day before was conducted differently.

Legal tech is here to stay. However, technologies will be used more frequently only if there are people able to handle them. This is one of the reasons for the increasing market demand for profiles such as Legal Tech Specialists, Legal Engineers, Innovation Officers and Legal Designers to be included in legal departments or law firms. A certain degree of technological adoption often corresponds to the implementation of new skills. Legal teams of the future are and will be increasingly composed of professionals who will be responsible for structuring contracts according to their variable elements within contract management solutions, but the training of these professionals will always be hybrid. On the one hand, they will necessarily need to know the law, because structuring a contract according to different substantial provisions is not a task that can be undertaken by an operator who only knows the programming language. On the other hand, they will need to master skills such as project and process management, IT and low-code skills, and change management. Education will still play, as in all eras of history, the most crucial role of all.

IX.  Appendix: Phases of the Contract Lifecycle and Providers Quoted in this Chapter

II.A. Template marketplaces and contract outsourcing platforms
• Rocket Lawyer – www.rocketlawyer.com/
• LexDo.it – www.lexdo.it/
• LegalZoom – www.legalzoom.com/
• Iubenda – www.iubenda.com

II.B. Contract Drafting
• Contract Express – mena.thomsonreuters.com/en/products-services/legal/contract-express.html
• Juro – juro.com/
• Bigle Legal – www.biglelegal.com/
• ClauseBase – www.clausebase.com/

III. Review and Negotiation
• Juro – juro.com/
• LawGeex – www.lawgeex.com/
• BlackBoiler – www.blackboiler.com/
• Thought River – www.thoughtriver.com/

IV. e-Signing
• DocuSign – www.docusign.com/
• HelloSign – hellosign.com
• AdobeSign – acrobat.adobe.com
• PandaDoc – www.pandadoc.com/

V. Storage and post-signing management
• Google Drive – drive.google.com
• Dropbox – www.dropbox.com
• Box – www.box.com
• SpeedLegal – speedlegal.io/

VI. Analysis through AI & Machine Learning
• Luminance – www.luminance.com/
• Kira Systems – kirasystems.com/
• Eigen – eigentech.com/

VII. Outsourcing, Custom build Applications, No Code solutions & Enterprise Marketplaces
• Gravity Stack (Custom build) – gravitystack.com
• Bryter (No code) – bryter.com
• Reynen Court (Enterprise Marketplace) – reynencourt.com


7  Building a Chatbot: Challenges under Copyright and Data Protection Law

ALEKSEI KELLI, ARVI TAVAST AND KRISTER LINDÉN

I. Introduction

Chatbots have become an integral part of everyday life. A chatbot (conversational agent, dialogue system, virtual assistant) is defined as 'a computer system that operates as an interface between human users and a software application, using spoken or written natural language as the primary means of communication'.1 Such use of natural language understanding and production makes chatbots one of the most demanding and comprehensive applications of natural language processing (NLP). The idea behind chatbots is to simulate conversation with another human. At the current stage of technical development, the resulting conversation is not on a human level, and it is obvious to users that they are interacting with an artificial system. Regardless, chatbots are becoming useful in practical applications, especially in clearly defined domains like voice-based search, appointment scheduling or controlling home appliances.

The authors in this chapter focus on copyright and personal data protection challenges relating to building chatbots. The chapter reflects previous research2 and develops it further. The authors' main focus is on the building of models for chatbots, which takes place before the chatbot service is offered. The authors rely on the EU acquis and also use Estonian law to exemplify legal requirements. The chapter constitutes an interdisciplinary analysis in which the authors integrate the legal and technological domains.

1 B Galitsky, Developing Enterprise Chatbots: Learning Linguistic Structures (Springer, 2019) 13.
2 A Kelli, A Tavast and K Lindén, 'Vestlusrobotid ja autoriõigus (Chatbots and Copyright)' (2020) 5 Juridica; A Kelli, A Tavast, K Lindén, K Vider, R Birštonas, P Labropoulou, I Kull, G Tavits, A Värv, P Stranák and J Hajic, 'The Impact of Copyright and Personal Data Laws on the Creation and Use of Models for Language Technologies' in K Simov and M Eskevich (eds), Selected Papers from the CLARIN Annual Conference 2019 (Linköping University Electronic Press, 2020) 53−65. ep.liu.se/en/conference-article.aspx?series=ecp&issue=172&Article_No=8.

From the legal perspective, chatbots as such are computer programs. According to Article 1(1) of the Computer Programs Directive,3 computer programs are protected by copyright as literary works. Chatbots rely on language models that are copyright-protected databases. A computer program compiles language models (databases) from data snippets. It is not usually possible to extract the original data used for the creation of a language model. The main challenge, however, is the lawful acquisition and use of the training data needed to create language models, since the training data could contain copyright-protected works, objects of related rights and personal data.4

The adoption and implementation of the DSM Directive5 have a significant impact on the creation of language models used for chatbots. Articles 3 and 4 of the DSM Directive, which regulate the text and data mining (TDM) exception, are particularly relevant since they establish legal grounds to use training data. Before the DSM Directive, the InfoSoc Directive6 was the main legal instrument regulating TDM.

When it comes to personal data protection, the adoption of the General Data Protection Regulation7 (GDPR) has implications for creating chatbots. Personal data protection concerning chatbots is such a relevant and complex issue that the European Data Protection Board (EDPB) has adopted the Guidelines on Virtual Voice Assistants (Guidelines on VVA),8 systematically analysing the creation and use of chatbots. Scientific literature identifies the following interaction points between generating machine learning models and personal data protection: 1) models cannot be trained on personal data without a specific lawful ground; 2) data subjects should be informed of the intention to train a model; 3) the data subject has the right to object or withdraw consent; and 4) in case of automated decisions, individuals should have meaningful information about the logic involved.9

3 Directive 2009/24/EC of the European Parliament and of the Council of 23 April 2009 on the legal protection of computer programs (Codified version) (Computer Programs Directive) [2009] OJ L111.
4 The reference to copyrighted content or works also covers objects of related rights. Depending on the context, the terms 'language data', 'training data' and 'data' are used as synonyms. It is presumed that language data contains personal data and copyrighted content.
5 Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC (DSM Directive) [2019] OJ L130.
6 Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society (InfoSoc Directive) [2001] OJ L167.
7 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L119.
8 European Data Protection Board, Guidelines 02/2021 on Virtual Voice Assistants. Version 1.0. Adopted on 9 March 2021 (Guidelines on VVA). edpb.europa.eu/sites/edpb/files/consultation/edpb_guidelines_022021_virtual_voice_assistants_adopted-public-consultation_en.pdf.
9 M Veale, R Binns and L Edwards, 'Algorithms that remember: model inversion attacks and data protection law' (2018) Philosophical Transactions of Royal Society A, 2. doi.org/10.1098/rsta.2018.0083.

The first part of our chapter deals with the following technical issues of chatbots: the use of machine learning,10 the modality of conversation (spoken or written), the location of data processing, the length of the chatbot's memory, and the purpose of use of data. The second part concentrates on the legal regimes of the data used to create a chatbot and deals with the legal basis for the use of that data. The authors' focus is on the process of building a chatbot rather than on its intended use. It should be pointed out that building and improving a chatbot is a continuous process that does not end with the launch of the service.

II.  Technical Background of Chatbots

A.  Use of Machine Learning

Most chatbots, from the first attempts11 to current systems, rely substantially on explicitly programmed rules and manually written responses. For example, if the words 'movies' and 'tonight' can be detected in the user's utterance, the rule could be to output movie listings near the user's location. In this simple but widespread case, the chatbot has nothing to do with machine learning or related copyright issues. Instead, it is a traditional computer program protected by copyright as a literary work. The main difficulty with such simple chatbots is preparing rules that cover a sufficient variety of possible user inputs with acceptable precision, which can easily become unrealistic unless the domain is narrowly restricted.
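The 'movies tonight' rule mentioned above, expressed as code, is a deliberately naive Python sketch (the helper functions are stubs invented for the illustration); even this toy version hints at why covering many phrasings with hand-written rules quickly becomes unrealistic.

    import string

    def user_location() -> str:
        return "Tallinn"  # stub: would come from the device or user profile

    def lookup_movie_listings(city: str) -> str:
        return f"Movies showing tonight in {city}: ..."  # stub: would query a service

    def respond(utterance: str) -> str:
        """One explicitly programmed rule plus a manually written fallback."""
        cleaned = utterance.lower().translate(str.maketrans("", "", string.punctuation))
        words = cleaned.split()
        if "movies" in words and "tonight" in words:
            return lookup_movie_listings(user_location())
        return "Sorry, I did not understand that."

    print(respond("Any good movies on tonight?"))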

10 See, eg, I Goodfellow, Y Bengio and A Courville, Deep Learning (MIT Press, 2016) 96–161. www.deeplearningbook.org/ (31.10.2020).
11 J Weizenbaum, 'ELIZA – A Computer Program for the Study of Natural Language Communication Between Man and Machine' (1966) 9(1) Communications of the ACM 36–45.

Objectives for developing chatbot technology include both improving quality and reducing the amount of manual labour involved. To get to the legal issues related to this development, let us first briefly describe the working principle of chatbots. Modern chatbots consist of a range of generic NLP components, ie components not specific to chatbots, performing the following three main functions:

a) Understanding the user's utterance. Due to variability and under-specification in natural language, this is a non-trivial task. Necessary components may include language detection (which language is the user speaking?), text cleaning and error correction, tokenisation (identifying sentences and words in an utterance), part-of-speech tagging and syntactic parsing (determining the form and function of each word in a sentence), synonym detection (in case the user's word choice is slightly different from the system creators'), anaphora resolution (determining what or whom pronouns in a sentence refer to), named entity recognition (is 'Smith' at the beginning of a sentence a general noun or a surname?), multi-word expression detection (recognising that 'give permission' may mean the same as 'allow'), etc. Speech recognition is additionally needed for chatbots using the spoken modality (a brief illustration of some of these components follows below).

b) Preparing the response. As already mentioned, chatbots are a type of user interface between a human and an information system. Functions of that information system may range as widely as the functions of information systems in general: given a system that can be interacted with by a human, it is at least theoretically conceivable that the interaction takes place in the form of a chatbot.

c) Expressing the response to the user. The task of natural language production is somewhat simpler than natural language understanding and may include the generation of grammatical sentences and the production of natural-sounding text. Speech synthesis or text-to-speech conversion is added in the spoken modality. In multimodal communication, such as a 'talking head' or an animated avatar, an additional task is to generate human-like facial movements to accompany the synthesised speech.

Item b) above is outside the scope of this chapter because information systems, as well as their legal issues, if any, are too heterogeneous to be covered in any detail here. Items a) and c), however, cover all of what is currently considered the domain of NLP.12 It is not a coincidence that a talking fridge has previously been used13 as an example application for discussing legal issues of NLP: a state-of-the-art talking fridge would have to use most if not all technological achievements of the field.

In the current state of machine learning technology, models are primarily trained while developing the chatbot. The complexity of language models and the amount of computational resources involved in their training have increased significantly in recent years. Interactive training, ie an arrangement in which the system continues to learn while interacting with the user, utilising the user's utterances as additional training data, is also possible and is used in so-called recommender systems, which learn user preferences and store them in a user profile for giving personalised recommendations. See Figure 1 below for a simplified process diagram of creating a machine learning model.
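Several of the understanding components listed under a) are available off the shelf in generic NLP libraries. A brief illustration using the open-source spaCy library (our choice for the example; it assumes the English model en_core_web_sm has been installed):

    import spacy

    # One pipeline bundles tokenisation, part-of-speech tagging,
    # syntactic parsing and named entity recognition.
    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Smith wants to give permission to watch movies tonight.")

    for token in doc:
        # surface form, part of speech and syntactic function of each word
        print(token.text, token.pos_, token.dep_)

    for ent in doc.ents:
        # named entity recognition: is 'Smith' a general noun or a surname here?
        print(ent.text, ent.label_)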

12 See, eg, A Clark, C Fox and S Lappin, The Handbook of Computational Linguistics and Natural Language Processing (Wiley, 2013). 13 A Kelli, A Tavast, K Linden, R Birstonas, P Labropoulou, K Vider, I Kull, G Tavits, A Värv and V Mantrov, ‘Impact of Legal Status of Data on Development of Data-Intensive Products: Example of Language Technologies’, in Legal Science: Functions, Significance and Future in Legal Systems II (The University of Latvia Press, 2020) 383-400. doi.org/10.22364/iscflul.7.2.31.

Figure 1  Creating and using a machine learning model [process diagram not reproduced]

Training data for language models consists of written text and speech recordings organised into corpora. Depending on the chosen machine learning technology and the resources available to the model creator, these corpora may be manually or semi-automatically annotated or augmented with additional information layers. Creating the model starts from choosing, developing or customising the technology. As models grow, the training process itself has become so computationally intensive that reducing model sizes and training times is currently a major topic in machine learning research.14 Human intellectual contribution in the training process may include selecting and pre-processing the training data, hyper-parameter tuning, model testing and optimisation. It has been shown that hyper-parameter choices significantly impact the final results and that even state-of-the-art models may contain overlooked possibilities.15 In case of unacceptable results, the process can be restarted at some earlier step. After reaching a model that is considered good enough, it can be deployed in a real product. A modern chatbot contains several such models performing various functions for understanding and generating language.
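The human contribution just described (tuning hyper-parameters, testing, restarting on unacceptable results) follows a loop that can be caricatured in a few lines of Python with scikit-learn; the synthetic data, the parameter grid and the quality threshold are all invented for the illustration.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a pre-processed training corpus.
    X, y = make_classification(n_samples=500, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    best_model, best_score = None, 0.0
    for C in (0.01, 0.1, 1.0, 10.0):  # hyper-parameter tuning
        model = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
        score = model.score(X_test, y_test)  # model testing
        if score > best_score:
            best_model, best_score = model, score

    ACCEPTABLE = 0.80  # invented quality threshold
    if best_score < ACCEPTABLE:
        print("Unacceptable results: restart at an earlier step (data, features, technology).")
    else:
        print(f"Good enough: deploy the model (accuracy {best_score:.2f}).")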

B.  Modality of the Interaction

Current chatbots communicate with the user in written or spoken form. For the spoken modality, speech recognition and speech synthesis models require human speech for training. Since a person's voice is personal data, model creation is subject to personal data protection regulations. Voice samples remain personal data unless they are transformed, compressed and stored as speaker-independent data samples, in which case the only remaining personal data may be in the content of the voice sample. In this case, the sample is similar to a text data sample.

An additional issue with voice-activated chatbots like Amazon Alexa or Google Home is the need to recognise the activation command. The chatbot must continually listen to speech around it to know when it is being addressed. Especially at home or elsewhere in the private sphere, such listening may cause justified privacy concerns. Of course, it is technically possible for the chatbot to ignore all other speech apart from its activation command, but knowing this may not be sufficient to alleviate the uneasiness associated with someone constantly listening.
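The activation-command behaviour can be sketched in plain Python over already-transcribed text (a real voice assistant works on the audio signal, and the wake word here is invented). The privacy-relevant point is visible in the code: everything heard before the wake word is dropped, never stored.

    WAKE_WORD = "hey assistant"  # hypothetical activation command

    def filter_stream(transcribed_chunks):
        """Discard all speech until the wake word is heard; keep only what follows."""
        awake = False
        for chunk in transcribed_chunks:
            if not awake and WAKE_WORD in chunk.lower():
                awake = True  # start listening from here on
                continue
            if awake:
                yield chunk  # only these chunks are processed or stored
            # chunks heard while not awake are simply dropped

    stream = ["private dinner conversation", "hey assistant",
              "what movies are on tonight"]
    print(list(filter_stream(stream)))  # ['what movies are on tonight']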

C.  Location of Data Processing

A technical detail that amplifies privacy concerns is that models, especially speech models, are currently so resource-hungry that running them on edge devices such as phones, watches, smart speakers, etc, may not be realistic.

14 See, eg, X Jiao, Y Yin, L Shang, X Jiang, X Chen, L Li, F Wang and Q Liu, 'TinyBERT: Distilling BERT for Natural Language Understanding' (2020) arXiv:1909.10351 [cs.CL].
15 Y Liu, M Ott, N Goyal, J Du, M Joshi, D Chen, O Levy, M Lewis, L Zettlemoyer and V Stoyanov, 'RoBERTa: A Robustly Optimized BERT Pretraining Approach' (2019) arXiv:1907.11692 [cs.CL].

It is easier to send the speech signal to the service provider's server to be processed using more computational power, returning the complete response to the user device. The Guidelines on VVA also explain that, since data is transferred to remote servers, the chatbot service provider, as the data controller,16 needs to consider both the e-Privacy Directive17 and the General Data Protection Regulation. If future chatbots can provide services locally, the applicability of the e-Privacy Directive needs to be reassessed.18 The legal issues associated with such transmission and processing of data have been among the factors impeding the adoption of chatbots. This is part of the motivation to develop smaller and faster language models that can be run on less powerful user devices.

D.  Length of Memory

Considering that chatbots respond to user queries, they can be regarded as similar to search engines. Initially, the user query was the only piece of information that the search engine used for determining its response. The current default behaviour of mainstream search engines is different, however: data such as the user's location and search history are used for improving the relevance of results.19 Likewise, a chatbot may or may not store earlier utterances of the user with the aim of obtaining a more comprehensive view of the user and providing better reactions to future utterances. Such storing must itself have some legal basis (eg consent, legitimate interest), considering that the system has no way of ensuring that user utterances do not contain personal data or copyrighted works. In this context, it is essential to follow personal data processing principles such as transparency20 (the user is informed about the different processing activities), purpose limitation21 (data is processed for a specific purpose), data minimisation (processed data is limited to what is strictly necessary) and storage limitation (data is kept only as long as strictly needed), as provided by Article 5 of the General Data Protection Regulation.

16 Article 4(7) of the GDPR defines the controller as 'the natural or legal person, public authority, agency or other body which, alone or jointly with others, determines the purposes and means of the processing of personal data'.
17 Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications) [2002] OJ L201 as amended by Directive 2006/24/EC and Directive 2009/136/EC (e-Privacy Directive).
18 Guidelines on VVA, 2, 12.
19 See, eg, S De Conca, 'GC et al v CNIL: Balancing the Right to Be Forgotten with the Freedom of Information, the Duties of a Search Engine Operator (C-136/17 GC et al v CNIL)' (2019) 5/4 European Data Protection Law Review 561–567.
20 See Art 29 Working Party, Guidelines on transparency under Regulation 2016/679. Adopted on 29 November 2017, as last revised and adopted on 11 April 2018. ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=622227.
21 See Art 29 Working Party, Opinion 03/2013 on purpose limitation. Adopted on 2 April 2013. ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2013/wp203_en.pdf.

Considering the storage limitation principle, the Guidelines on VVA point out that some chatbot service providers keep personal data that requires deletion. This approach violates the principle of storage limitation: personal data should not be retained for longer than is necessary for the specific processing.22

Multi-user systems have the additional technical challenge of recognising which utterances originate from the same user. Four main approaches can currently be used to achieve this, each with its own technical, usability-related and legal drawbacks: explicit registration and authentication of the user, browser sessions, cookies stored on the user's device, or the IP address of the device (see the sketch at the end of this section). The inclusion of the latter among personal data23 has reduced but not completely obliterated its use. The Guidelines on VVA require chatbot designers to consider the use of technologies for filtering out unnecessary background noise (eg a third person's speech) to ensure that only the user's voice is recorded.24
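The four identification approaches can be pictured as a fallback chain (a Python sketch over a hypothetical request object; each branch carries the usability-related and legal trade-offs discussed above, and the IP address, itself personal data, is the last resort):

    def identify_user(request: dict) -> str:
        """Resolve which user an utterance belongs to, trying four approaches in turn."""
        if request.get("auth_token"):    # 1) explicit registration and authentication
            return f"account:{request['auth_token']}"
        if request.get("session_id"):    # 2) browser session
            return f"session:{request['session_id']}"
        if request.get("cookie_id"):     # 3) cookie stored on the user's device
            return f"cookie:{request['cookie_id']}"
        return f"ip:{request['ip']}"     # 4) IP address of the device

    print(identify_user({"cookie_id": "c-7f3a", "ip": "203.0.113.7"}))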

E.  Purpose of Use of Data

According to the Guidelines on VVA, the four most common purposes for which chatbots process personal data are: 1) executing requests; 2) improving the machine learning model; 3) biometric identification; and 4) profiling for personalised content or advertising. Of these four, if data is processed in order to execute the user's requests, ie as strictly necessary in order to provide a service requested by the user, data controllers are exempted from the requirement of prior consent under Article 5(3) of the e-Privacy Directive.25 Conversely, the consent required by Article 5(3) of the e-Privacy Directive would be necessary for the storing of, or the gaining of access to, information for any purpose other than executing the user's requests.26

22 Guidelines on VVA, 3.
23 According to EU case law (Case C-70/10 Scarlet Extended SA v SABAM [2011] ECR I-11959), IP addresses constitute personal data since they allow the users to be identified. It was later specified that 'a dynamic IP address registered by an online media services provider when a person accesses a website that the provider makes accessible to the public constitutes personal data' (Case C-582/14 Patrick Breyer v Bundesrepublik Deutschland [2016] Digital reports).
24 Guidelines on VVA, 3.
25 Article 5(3) of the e-Privacy Directive reads as follows: 'Member States shall ensure that the storing of information, or the gaining of access to information already stored, in the terminal equipment of a subscriber or user is only allowed on condition that the subscriber or user concerned has given his or her consent, having been provided with clear and comprehensive information, in accordance with Directive 95/46/EC, inter alia, about the purposes of the processing. This shall not prevent any technical storage or access for the sole purpose of carrying out the transmission of a communication over an electronic communications network, or as strictly necessary in order for the provider of an information society service explicitly requested by the subscriber or user to provide the service'.
26 Guidelines on VVA, 2–3.


III.  Legal Framework for Building Chatbots

Chatbots do not contain components different or separate from NLP, nor do they entail other legal issues. The authors have previously discussed legal problems in NLP.27 In this section, the authors address challenges relating to legal restrictions on the language data used for building chatbots, the lawful bases for using the data and the legal status of the resulting models.

A.  Data Used to Build a Chatbot

The creation of chatbots requires the use of training data (language data). Without doubt, it is preferable and more convenient to use data that is not subject to any legal restrictions. Anonymous data is preferable since it is outside the scope of the GDPR.28 It is also possible to anonymise personal data.29 However, it is often unavoidable to use data containing personal data or copyrighted content.

Article 4(1) of the GDPR defines personal data30 as 'any information relating to an identified or identifiable natural person ("data subject")'. It is a real challenge to draw a line between data relating to an indirectly identifiable natural person and anonymous data.31 The GDPR also regulates special categories of personal data, for which processing is even more restricted.32 The Guidelines on VVA acknowledge that data processed by chatbots could be sensitive since it 'may carry personal data both in its content (meaning of the spoken text) and its meta-information (sex or age of the speaker etc.)'.33

27 A Kelli, A Tavast, K Linden, R Birstonas, P Labropoulou, K Vider, I Kull, G Tavits, A Värv and V Mantrov, 'Impact of Legal Status of Data on Development of Data-Intensive Products: Example of Language Technologies', in Legal Science: Functions, Significance and Future in Legal Systems II (The University of Latvia Press, 2020) 383–400. doi.org/10.22364/iscflul.7.2.31; A Tavast, H Pisuke and A Kelli, 'Õiguslikud väljakutsed ja võimalikud lahendused keeleressursside arendamisel (Legal Challenges and Possible Solutions in Developing Language Resources)' (2013) 9 Eesti Rakenduslingvistika Ühingu aastaraamat 317–332. dx.doi.org/10.5128/ERYa9.20.
28 Recital 26 of the GDPR explains: 'The principles of data protection should therefore not apply to anonymous information, namely information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable'.
29 See WP29, Opinion 05/2014 on Anonymisation Techniques, adopted on 10 April 2014. ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2014/wp216_en.pdf.
30 For further explanation of the concept of personal data, see Art 29 Working Party (WP29), Opinion 4/2007 on the concept of personal data, adopted on 20 June 2007. ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2007/wp136_en.pdf.
31 See G Spindler and P Schmechel, 'Personal Data and Encryption in the European General Data Protection Regulation' (2016) 7 Journal of Intellectual Property, Information Technology and Electronic Commerce Law.
32 The GDPR Art 9(1) defines special categories of personal data as 'personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation'.

As a general rule, the GDPR protects the personal data of a living person. Recital 27 of the GDPR explains that the regulation does not apply to the personal data of deceased persons. However, EU Member States may regulate this issue. For instance, according to section 9(2) of the Estonian Personal Data Protection Act 2018, personal data is protected for ten years after the death of the data subject (20 years in the case of minors).

The definition of personal data is rather extensive, and it covers a variety of data potentially relevant for chatbots, such as a person's name,34 direct and indirect identifiers, and a person's voice. The human voice, samples of which are needed to train models for chatbots, is a particularly interesting legal phenomenon.35 Article 4(14) of the GDPR defines it as biometric data,36 which Article 9(1) lists among the special categories of personal data where it is processed 'for the purpose of uniquely identifying a natural person'. The question is whether the human voice as such is covered by the special categories of personal data. The situation is analogous to photographs of people, as a person's image also contains biometric data (physical characteristics) listed among the special categories of personal data (GDPR Article 9(1)). Recital 51 of the GDPR explains that:

The processing of photographs should not systematically be considered to be processing of special categories of personal data as they are covered by the definition of biometric data only when processed through a specific technical means allowing the unique identification or authentication of a natural person.

The authors are of the opinion that the same approach applies to voice as well. Unless the voice is used to identify an individual, it does not fall within the special categories of personal data, and its processing is not subject to the stricter requirements applicable to those categories.37

Although it is recommendable to use non-copyrighted content, the reality is that data used to build chatbots is often copyright protected. Drawing a line between copyrightable and non-copyrightable content is similarly challenging, as is the case with personal and non-personal data. Copyright law protects works. According to Article 2(1) of the Berne Convention, 'The expression "literary and artistic works" shall include every production in the literary, scientific and artistic domain, whatever may be the mode or form of its expression.'38

33 Guidelines on VVA, 12.
34 A person's name was considered personal data already in early EU case law. See Case C-101/01 Criminal proceedings against Bodil Lindqvist [2003] ECR I-12971.
35 See I Ilin and A Kelli, 'The Use of Human Voice and Speech for the Development of Language Technologies: the EU and Russian Data Protection Law Perspectives' (2020) 29 Juridica International 71–85. www.juridicainternational.eu/article_full.php?uri=2020_29_the_use_of_human_voice_and_speech_for_development_of_language_technologies_the_eu_and_russia.
36 According to the GDPR Art 4(14), '"biometric data" means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data'.
37 The Guidelines on VVA also support the position that unless voice is used to identify an individual, it is not considered to fall within the special categories of personal data.

The concept of copyright protection is addressed in EU case law as well. The European Court of Justice (ECJ) has held that copyright applies 'only in relation to a subject-matter which is original in the sense that it is its author's own intellectual creation'.39 According to the ECJ, originality exists 'if the author was able to express his creative abilities in the production of the work by making free and creative choices'.40 Intellectual creations cannot be classified as works when there is no room for creative freedom for the purposes of copyright.41

When it comes to the development of chatbots, our emphasis is on works that consist of words. A single word is not copyright protected since it involves no creativity. The intriguing question is how many words it takes to qualify as a copyright-protected work.42 The ECJ has also analysed this issue and put forward the following explanation: works consist of

words which, considered in isolation, are not as such an intellectual creation of the author who employs them. It is only through the choice, sequence and combination of those words that the author may express his creativity in an original manner and achieve a result which is an intellectual creation.

At the same time, the court suggested that 11 consecutive words are protected by copyright if that extract contains an element of the work which, as such, expresses the author's own intellectual creation.43

Where speech is used, there is a need to consider related rights44 to copyright as well.45 In this situation, it is important to distinguish between the author of the work and the performer of the work. While works are protected by copyright, performances are protected by related rights.46 A speech can be considered a performance.

38 European countries take a similar approach. For instance, s 4(2) of the Estonian Copyright Act 1992 defines works as 'any original results in the literary, artistic or scientific domain which are expressed in an objective form and can be perceived and reproduced in this form either directly or by means of technical devices. A work is original if it is the author's own intellectual creation'.
39 Case C-5/08 Infopaq International A/S v Danske Dagblades Forening [2009] ECR I-06569, para 37.
40 Case C-145/10 Eva-Maria Painer v Standard VerlagsGmbH and Others [2011] ECR I-12533, para 89.
41 Joined Cases C-403/08 and C-429/08 Football Association Premier League Ltd and Others v QC Leisure and Others [2011] ECR I-09083, para 98.
42 For further discussion, see P Kamocki, 'When Size Matters. Legal Perspective(s) on N-grams', in C Navarretta and M Eskevich (eds), Proceedings of CLARIN Annual Conference 2020, 05–07 October 2020, Virtual Edition (CLARIN 2020) 166–169. office.clarin.eu/v/CE-2020-1738-CLARIN2020_ConferenceProceedings.pdf.
43 Case C-5/08 Infopaq International A/S v Danske Dagblades Forening [2009] ECR I-06569, paras 45, 48.
44 Related rights are sometimes referred to as neighbouring rights. The nature of related rights is that they contribute to the dissemination of copyright-protected works (performers' rights, phonogram producers' rights) or protect investment (rights of makers of sui generis databases).
45 In legal practice, both types of rights are usually transferred to third parties (more often to legal entities), who are called rightholders.
46 Article 2(a) of the International Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organizations (Rome, 26 October 1961) defines performers as 'actors, singers, musicians, dancers, and other persons who act, sing, deliver, declaim, play in, or otherwise perform literary or artistic works'.

If the speech is pre-recorded, the phonogram producer's rights have to be respected as well.47

Copyrighted content is sometimes extracted from databases. A distinction can be drawn between copyright-protected databases and sui generis databases. According to Article 7 of the Database Directive,48 the maker of a database 'which shows that there has been qualitatively and/or quantitatively a substantial investment in either the obtaining, verification or presentation of the contents' acquires the right 'to prevent extraction and/or re-utilisation of the whole or of a substantial part, evaluated qualitatively and/or quantitatively, of the contents of that database'.

Sui generis database rights arise from the investment in the development of a database, not from the creation of the data in it. According to EU case law, investment refers to 'the resources used to seek out existing independent materials and collect them in the database. It does not cover the resources used for the creation of materials which make up the contents of a database'.49

It should be mentioned that the DSM Directive introduces a right potentially relevant for language research, affecting the creation of language models used for chatbots. According to Article 15 of the DSM Directive, publishers of press publications shall have the reproduction right and the making-available right for the online use of their press publications by information society service providers. The impact of this right remains to be seen.

The acknowledgement that the training data required to build a chatbot often contains personal data and content protected by copyright and related rights does not exclude its use. The use is possible, but it has to follow the requirements set forth by the copyright and data protection laws analysed in the following section.

B.  Legal Basis for Building a Chatbot

There are two different scenarios for building a chatbot. The distinction follows the chatbot's life cycle. In the first scenario (the initial creation), data containing personal data and copyrighted content is used from different sources to create a language model for a chatbot.50

47 Article 2(c) of the International Convention for the Protection of Performers, Producers of Phonograms and Broadcasting Organizations defines the producer of phonograms as 'the person who, or the legal entity which, first fixes the sounds of a performance or other sounds'.
48 Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases (Database Directive) OJ L77.
49 C-203/02 The British Horseracing Board Ltd and Others v William Hill Organization Ltd [2004] ECR I-10415. For further analysis, see A Kelli, A Tavast, K Lindén, K Vider, R Birštonas, P Labropoulou, I Kull, G Tavits, A Värv, P Stranák and J Hajic, 'The Impact of Copyright and Personal Data Laws on the Creation and Use of Models for Language Technologies', in K Simov and M Eskevich (eds), Selected Papers from the CLARIN Annual Conference 2019 (Linköping University Electronic Press, 2020) 53–65. ep.liu.se/en/conference-article.aspx?series=ecp&issue=172&Article_No=8.

The second scenario relates to the improvement of the model during the use of a chatbot. The Guidelines on VVA explain the need for improvement as follows:

[t]he accents and variations of human speech are vast. While all VVAs are functional once out of the box, their performance can improve by adjusting them to the specific characteristics of users' speech.

To improve machine learning methods, chatbot designers wish to have access to and process data (eg, voice snippets) relating to the use of the device. Article 5(3) of the e-Privacy Directive requires consent for the 'gaining of access to information for any purpose other than executing users' request'.51 The authors concentrate on the initial creation, since the improvement of an existing chatbot relies on the standard terms of use and the user's consent to the processing of his personal data.

From a legal point of view, the creation of models involves text and data mining (TDM). Article 2(2) of the DSM Directive defines text and data mining as 'any automated analytical technique aimed at analysing text and data in digital form in order to generate information which includes but is not limited to patterns, trends and correlations'. Text and data mining has a different meaning in the personal data protection and copyright contexts. From the personal data protection perspective, text and data mining constitutes processing of personal data that needs a legal basis. The reason is that the GDPR defines processing so extensively that any operation performed on personal data is processing.52 From the copyright perspective, text and data mining as such is not copyright relevant.53 It is comparable to reading a book. However, a legal basis is needed to copy the data containing copyrighted content for subsequent text and data mining.

Generally speaking, the use of personal data and copyrighted content can be based on permission or some other legal basis. According to Article 6(1) of the GDPR, consent is a legal basis for processing personal data. Article 4(11) of the GDPR defines the data subject's consent as:

any freely given, specific, informed and unambiguous indication of the data subject's wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.

50 Theoretically, two cases could be distinguished here: 1) the language model is developed for a specific chatbot; 2) the language model is acquired from a third party. However, in the second case, the creation of the acquired language model had to be based on some legal basis, which makes the case identical to the first. Therefore, the second case is not discussed.
51 Guidelines on VVA, 22, 10, 12.
52 Article 4(2) of the GDPR defines processing as 'any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction'.
53 This is also emphasised in Recital 9 of the DSM Directive, which states: 'Text and data mining can also be carried out in relation to mere facts or data that are not protected by copyright, and in such instances no authorisation is required under copyright law'.
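Since the legal analysis turns on this definition, a minimal illustration may help. The following sketch, which is not from the chapter, implements an 'automated analytical technique' in the sense of Article 2(2) of the DSM Directive: it extracts simple patterns (word and bigram frequencies) from a text. The corpus file name is hypothetical, and the copyright-relevant act of copying the corpus precedes this analysis step.

```python
# Minimal TDM sketch: extracting patterns (word and bigram frequencies)
# from text. Only the analysis is shown; the copy of the corpus that
# copyright law cares about is made before this step.
from collections import Counter
import re

def mine(text: str, top: int = 10):
    words = re.findall(r"\w+", text.lower())
    bigrams = list(zip(words, words[1:]))
    return Counter(words).most_common(top), Counter(bigrams).most_common(top)

with open("corpus.txt", encoding="utf-8") as f:  # hypothetical corpus file
    word_freq, bigram_freq = mine(f.read())
print(word_freq)
print(bigram_freq)
```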

Article 7 of the GDPR describes the conditions for consent, such as the burden of proof, language and distinguishability from other matters, withdrawal of consent and criteria for the assessment of consent.54 Although consent potentially guarantees a high level of protection of the rights and freedoms of the data subject, its acquisition is not always possible. For instance, the creation of language models could require the use of a high number of blog posts or video recordings, or language data collected many years ago (legacy data). Big data55 poses similar problems.56 In such cases, it is not realistic to acquire the data subject's consent due to an extremely high administrative burden or a lack of contact data. Therefore, some other legal ground is needed.

The GDPR provides two other legal grounds besides consent which could potentially be relied upon.57 The first possible legal ground is the performance of a task carried out in the public interest.58 Reliance on this legal ground is based on the logic that research organisations and universities conduct research in the public interest. This means that the language data has to be processed, and the language models developed, by the university itself. Private entities cannot use this legal ground. However, Recital 159 of the GDPR explains that the processing of personal data for scientific research purposes 'should be interpreted in a broad manner including, for example, technological development and demonstration, fundamental research, applied research and privately funded research'.

This means that a public-private partnership is possible and the private sector can develop language models in collaboration with research organisations.

Another legal ground for processing personal data is legitimate interest.59 There are no limitations as to potential controllers, as is the case with research organisations acting in the public interest. Any entity or individual can process personal data on the basis of legitimate interest. However, there is no clarity as to its exact scope, so the legitimate interest needs to be justified in each case.60

54 For an explanation of the concept of consent, see European Data Protection Board, Guidelines 05/2020 on consent under Regulation 2016/679, Version 1.1, adopted on 4 May 2020. edpb.europa.eu/sites/edpb/files/files/file1/edpb_guidelines_202005_consent_en.pdf.
55 For an explanation of the concept of big data, see R Kitchin and G McArdle, 'What makes Big Data, Big Data? Exploring the ontological characteristics of 26 datasets' (2016) Big Data & Society 1–10. journals.sagepub.com/doi/full/10.1177/2053951716631130.
56 It is pointed out in the legal literature that big data and personal data protection controversies reveal 'difficulties in finding a legitimate ground for processing or acquiring consent from the data subject': M Oostveen, 'Identifiability and the applicability of data protection to big data' (2016) 6(4) International Data Privacy Law 309.
57 For further discussion, see A Kelli, K Lindén, K Vider, P Kamocki, R Birštonas, S Calamai, P Labropoulou, M Gavriilidou and P Stranák, 'Processing personal data without the consent of the data subject for the development and use of language resources', in I Skadina and M Eskevich (eds), Selected papers from the CLARIN Annual Conference 2018, Pisa, 8–10 October 2018 (Linköping University Electronic Press, 2019) 72–82. ep.liu.se/ecp/article.asp?issue=159&article=008&volume=.
58 The GDPR Art 6(1)(e).
59 ibid Art 6(1)(f).
60 For a further explanation of the concept of legitimate interest, see WP29, Opinion 06/2014 on the notion of legitimate interests of the data controller under Article 7 of Directive 95/46/EC, adopted on 9 April 2014. ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2014/wp217_en.pdf.

The acquisition of permission from the rightholder to use copyrighted content faces challenges similar to those described above concerning consent to process personal data. Therefore, there is a need to rely on some other legal basis (ie copyright exceptions) to copy61 copyrighted content for TDM. Before the adoption of the DSM Directive, the InfoSoc Directive constituted the copyright framework for making copies of copyrighted content for text and data mining.62 The potential exceptions are the private use exception, the quotation right, the temporary reproduction right and the research exception.63

The private use exception is meant for natural persons for private use64 and has limited relevance to building a chatbot. Anyone can rely on the quotation right and the temporary reproduction right.65 Theoretically, the quotation right could be a suitable legal basis to use copyrighted content since it does not exclude commercial purposes. However, the scope of the quotation right is somewhat limited. According to Article 5(3)(d), quotations are allowed

for purposes such as criticism or review, provided that they relate to a work or other subject-matter which has already been lawfully made available to the public, that, unless this turns out to be impossible, the source, including the author's name, is indicated, and that their use is in accordance with fair practice, and to the extent required by the specific purpose.

The narrow scope has been reinforced by EU case law as well. The European Court of Justice has explained that 'the user of a protected work wishing to rely on the quotation exception must therefore have the intention of entering into "dialogue" with that work'.66 Since content protected by copyright and related rights is used as raw material to develop language models and there is no 'dialogue' as required by the EU acquis, the quotation right is not applicable.

The temporary reproduction right67 might be a legal basis to copy copyrighted content for TDM to create language models. Its scope covers temporary acts of reproduction which are an integral part of a technological process.68

61 It should be mentioned that TDM mainly concerns the reproduction right. Article 2 of the InfoSoc Directive defines the reproduction right as the 'exclusive right to authorise or prohibit direct or indirect, temporary or permanent reproduction by any means and in any form, in whole or in part'.
62 For a discussion of the general framework of TDM, see JP Triaille, J de Meeûs d'Argenteuil and A de Francquen, 'Study on the legal framework of text and data mining (TDM)' (March 2014). op.europa.eu/en/publication-detail/-/publication/074ddf78-01e9-4a1d-9895-65290705e2a5/language-en.
63 See A Kelli, A Tavast, K Linden, R Birstonas, P Labropoulou, K Vider, I Kull, G Tavits, A Värv and V Mantrov, 'Impact of Legal Status of Data on Development of Data-Intensive Products: Example of Language Technologies', in Legal Science: Functions, Significance and Future in Legal Systems II (The University of Latvia Press, 2020) 383–400. doi.org/10.22364/iscflul.7.2.31.
64 InfoSoc Directive Art 5(2)(b).
65 ibid Arts 5(3)(d) and 5(1).
66 Case C-476/17 Pelham GmbH, Moses Pelham, Martin Haas v Ralf Hütter, Florian Schneider-Esleben [2019] Digital reports, para 71.
67 InfoSoc Directive Art 5(1) reads as follows: 'Temporary acts of reproduction referred to in Article 2, which are transient or incidental [and] an integral and essential part of a technological process and whose sole purpose is to enable: (a) a transmission in a network between third parties by an intermediary, or (b) a lawful use of a work or other subject-matter to be made, and which have no independent economic significance, shall be exempted from the reproduction right provided for in Article 2'.

Recital 9 of the DSM Directive also suggests that the temporary reproduction right 'should continue to apply to text and data mining techniques that do not involve the making of copies beyond the scope of that exception'. However, some experts warn that 'a considerable level of uncertainty surrounds the applicability of the exception for temporary uses of Art. 5(1) InfoSoc and a proper analysis of each case should be performed before relying on it'.69

According to Article 5(3)(a) of the InfoSoc Directive, the EU Member States may provide for exceptions to the reproduction right for 'scientific research, as long as the source, including the author's name, is indicated, unless this turns out to be impossible and to the extent justified by the non-commercial purpose to be achieved'. Although the research exception excludes commercial purposes, it can be used to copy content for TDM. Legal commentators also emphasise that 'the exception is useful and relevant for TDM'.70

The DSM Directive creates a specific framework for text and data mining. It covers the TDM exception for research purposes, which is meant for research organisations and cultural heritage institutions, and the general TDM exception, which is meant for everyone.71 Both exceptions limit the reproduction right and the right to make extractions from a database. Reliance on the general TDM exception can be excluded by the rightholder.72 At the same time, any contractual provision limiting TDM for research purposes is unenforceable.73 According to Article 3(2) of the DSM Directive, the TDM exception for research purposes also allows retaining a copy of protected content 'for the purposes of scientific research, including for the verification of research results'. Article 4(2) of the DSM Directive, regulating the general TDM exception, allows keeping the copy of the content 'for as long as is necessary for the purposes of text and data mining'. Therefore, it can be concluded that the TDM exception for research purposes is more favourable than the general TDM exception.

The TDM exception for research purposes does not require that TDM to create language models be carried out by research organisations in isolation. Recital 11 of the DSM Directive encourages research organisations to collaborate with the private sector and carry out TDM in public-private partnerships.

68 The temporary reproduction right is also addressed in EU case law focusing on a similar technological process; see Case C-5/08 Infopaq International A/S v Danske Dagblades Forening [2009] ECR I-06569; Case C-302/10 Infopaq International A/S v Danske Dagblades Forening [2012] Digital reports.
69 RE de Castilho, G Dore, T Margoni, P Labropoulou and I Gurevych, 'A Legal Perspective on Training Models for Natural Language Processing', in N Calzolari et al (eds), Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018) 1273–1274. www.lrec-conf.org/proceedings/lrec2018/pdf/1006.pdf.
70 JP Triaille, J de Meeûs d'Argenteuil and A de Francquen, 'Study on the legal framework of text and data mining (TDM)' (March 2014) 61. op.europa.eu/en/publication-detail/-/publication/074ddf78-01e9-4a1d-9895-65290705e2a5/language-en.
71 The DSM Directive Arts 3 and 4.
72 ibid Art 4(3).
73 The DSM Directive Art 7(1).

The analysis reveals that, from a legal point of view, there are fewer restrictions on the use of content containing copyrighted works and personal data when the use is carried out by research organisations. At the same time, the legal framework favours public-private partnership.

C.  Legal Status of Language Models

The authors have previously analysed whether language models are subject to the same legal restrictions as the data used to create them and concluded that this is usually not the case.74 From the copyright perspective, the main issue is whether the language model contains copyrighted content. As a general rule, models do not include the copyrighted works used for their creation, since they rely on small snippets which are not original enough to be copyright protected. It has also been found in the legal literature that a model 'does not reproduce the original (corpora) nor reveal its individuality'.75 If models do contain copyrighted content, copyright requirements need to be followed when using the model. In practical terms, this means that the rightholder's permission has to be acquired.

The same question needs to be answered regarding personal data. If there is no personal data, the person who created the model is free to use it at his discretion (share for free, sell, etc). It should be borne in mind that the personal data rules do not apply to anonymous data.76 The success of anonymisation depends on the type of data.77 The Guidelines on VVA, referring to the scientific literature,78 warn that there are risks of re-identifying persons in some machine learning models. Therefore, mitigation measures need to be applied to reduce the re-identification risk to an acceptable threshold.79

74 A Kelli, A Tavast, K Lindén, K Vider, R Birštonas, P Labropoulou, I Kull, G Tavits, A Värv, P Stranák and J Hajic, 'The Impact of Copyright and Personal Data Laws on the Creation and Use of Models for Language Technologies', in K Simov and M Eskevich (eds), Selected Papers from the CLARIN Annual Conference 2019 (Linköping University Electronic Press, 2020) 53–65. ep.liu.se/en/conference-article.aspx?series=ecp&issue=172&Article_No=8.
75 RE de Castilho, G Dore, T Margoni, P Labropoulou and I Gurevych, 'A Legal Perspective on Training Models for Natural Language Processing', in N Calzolari et al (eds), Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018) 1273–1274. www.lrec-conf.org/proceedings/lrec2018/pdf/1006.pdf.
76 Recital 26 of the GDPR.
77 It is suggested that 'Anonymizing voice recordings is specially challenging, as it is possible to identify users through the content of the message itself and the characteristics of voice itself': the Guidelines on VVA, 26.
78 M Veale, R Binns and L Edwards, 'Algorithms that remember: model inversion attacks and data protection law' (2018) Philosophical Transactions of the Royal Society A. doi.org/10.1098/rsta.2018.0083; N Carlini et al, 'Extracting Training Data from Large Language Models' (2020). www.researchgate.net/publication/347125123_Extracting_Training_Data_from_Large_Language_Models.
79 The Guidelines on VVA, 25–26.
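As a purely hypothetical illustration of the kind of mitigation measure referred to above, the following sketch scrubs a few direct identifiers from training text before model creation. The patterns are assumptions for demonstration only and fall well short of genuine anonymisation: indirect identifiers and, in the case of audio, voice characteristics would survive such a filter.

```python
# Illustrative pre-training scrub of direct identifiers. Real
# anonymisation is much harder: combinations of indirect identifiers
# can still single out a person (cf Recital 26 GDPR).
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{6,}\d"),
    "ID":    re.compile(r"\b[1-6]\d{10}\b"),  # eg Estonian personal code shape
}

def scrub(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(scrub("Contact Mari at mari@example.ee or +372 5555 5555."))
# -> Contact Mari at <EMAIL> or <PHONE>.  (note: the name survives)
```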

If a model contains personal data, there must be a legal basis for using the model in a chatbot. Depending on the context, the data subject's consent or the legitimate interest of the user could be the legal basis.

An interesting legal issue arises where a language model has been created from language data without a legal basis. The injured party has the right to compensation, and administrative fines can be imposed on the violator in case of a GDPR violation.80 The Enforcement Directive,81 which establishes the general framework for intellectual property enforcement, entitles the injured party to claim damages as well. The injured party can request the termination of the illegal processing of personal data and of the unlawful use of copyrighted works. However, the question is what happens to a model which was created illegally but which itself does not contain any personal data or copyrighted content. OneZero has analysed a US case in which a company used private photos to train facial recognition algorithms; the Federal Trade Commission required the company to delete all photos and algorithms which were developed using the data.82 It remains to be seen whether the EU regulatory framework will be interpreted in a way that allows requiring the deletion of illegally built models. Theoretically, the deletion of a model could be demanded in cases where, whether intentionally or through a lack of legal competence, a model is created without a proper legal basis. It therefore becomes even more crucial to follow copyright and personal data protection rules when building models.

IV. Conclusion

Chatbots rely on language models. Models may be trained on language data containing copyrighted material and personal data. From a legal perspective, training involves text and data mining (TDM). TDM itself is not copyright relevant. This means that performing text and data mining is not regulated by copyright and does not require any legal basis. However, to copy copyrighted content for TDM, there has to be a legal basis. The potential legal basis could be the rightholder's permission or a copyright exception. The InfoSoc Directive provides the temporary reproduction right and the research exception as possible legal bases. The DSM Directive introduces two frameworks for text and data mining: 1) the TDM exception for research purposes; and 2) a general TDM exception.

The GDPR defines the processing of personal data so extensively that it covers all possible operations on the data. Therefore, there has to be a legal basis for the

80 The GDPR Arts 82 and 83.
81 Directive 2004/48/EC of the European Parliament and of the Council of 29 April 2004 on the enforcement of intellectual property rights (Enforcement Directive) [2004] OJ L157.
82 D Gershgorn, 'The FTC Forced a Misbehaving A.I. Company to Delete Its Algorithm' (2021) OneZero. onezero.medium.com/the-ftc-forced-a-misbehaving-a-i-company-to-delete-its-algorithm-124d9f7e0307.

collection and analysis (TDM) of personal data. The potential legal basis could be the data subject's consent, research in the public interest or legitimate interest.

Considering copyright and personal data protection in combination, it becomes apparent that the regulatory framework for the creation of language models is more favourable for research organisations. Both legal frameworks favour public-private collaboration, which means that private companies can cooperate with research organisations to build language models within the more favourable framework meant for research.

When it comes to models, model training can typically be performed so that it is impossible to re-create the training data from the model and the model does not contain original portions of the works included in the training data. If these conditions are met, the model is a new independent work, disconnected from the training material in terms of copyright. Models also contain no personal data if care is taken that potential identifiers such as names, social security numbers, addresses, etc are stored only in combinations that cannot identify a real person. If personal data remains in the models, there has to be a legal basis for its processing.

An intriguing issue relates to the case where a model is trained on language data containing copyrighted content and personal data without a proper legal basis. Even if the resulting model does not contain any personal data or copyrighted content, there is a risk that the model has to be deleted.


8
Legal Tech Solutions as Digital Services under the Digital Content Directive and E-Commerce Directive

KARIN SEIN*

I. Introduction

The new Digital Content Directive (DCD)1 lays down mandatory rules for consumer contracts on the supply of digital content and digital services which will be applicable across the EU from July 2021. The maximum harmonising Directive regulates such core areas of consumer contract law as the conformity criteria of digital content/digital services, the liability of the trader and the remedies of the consumer. It is, however, not clear whether legal tech solutions provided by traders to consumers (eg digital services helping consumers to enforce their air passenger rights, software for detecting unfair terms or smart contracts) fall within the scope of the new Directive and, if so, what legal consequences this has for traders and consumers. This chapter therefore deals with the following questions: a) when a legal tech service is a digital service covered by the mandatory rules of the new Digital Content Directive, ie whether and when it falls within the scope of the DCD; b) what the objective conformity criteria for legal tech services under the new Directive include; and c) whether and to what extent the national rules for legal professions can be considered part of the objective conformity criteria. Finally, the chapter also explores whether legal tech providers can resort to the country-of-origin principle of the E-Commerce Directive.2

* The work leading to this publication has been supported by the Estonian Research Council grant no PRG 124.
1 Directive (EU) 2019/770 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the supply of digital content and digital services [2019] OJ L136/1.
2 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market ('Directive on electronic commerce') [2000] OJ L178/1.


II.  Do Legal Tech Services Fall within the Scope of the Digital Content Directive?

It is not clear whether and under what conditions legal tech services fall within the scope of the newly adopted DCD, as its Article 3(5)(a) excludes from its scope 'services other than digital services, regardless of whether digital forms or means are used by the trader to produce the output of the service or to deliver or transmit it to the consumer'. Recital 27 of the DCD cites as an example of such services

legal services or other professional advice services, which are often performed personally by the trader, regardless of whether digital means are used by the trader in order to produce the output of the service or to deliver or transmit it to the consumer.

The legal literature also uses legal services as the main example of the 'personal services' exception of Article 3(5)(a) DCD.3 One can argue, though, that legal tech services, which have powerfully entered the legal services market,4 also offer 'legal services' as they provide much-used solutions for mass claims enforcement5 not only in the air transport sector6 but also in other areas such as tenancy contracts7 or challenging speeding/parking tickets.8 Moreover, legal tech is not limited to mass claims enforcement but also includes contract generators and reviewers, intermediary platforms for legal services providers, legal chatbots and digital legal knowledge management systems.9 The common denominator – and advantage – of most legal tech applications offered to consumers is providing a do-it-yourself (DIY) product or legal service so that consumers can find a solution to their legal problem without turning to a legal professional.10

3 D Staudenmayer, in R Schulze and D Staudenmayer (eds), EU Digital Law (CH Beck, 2020) 80; K Sein and G Spindler, 'The New Directive on Contracts for the Supply of Digital Content and Digital Services – Scope of Application and Trader's Obligation to Supply – Part 1' (2019) 15 European Review of Contract Law 265; S Navas, 'The Provision of Legal Services to Consumers Using LawTech Tools: From "Service" to "Legal Product"' (2019) 7 Open Journal of Social Sciences 79, 85; S Geiregat and R Steennot, 'Proposal for a directive on digital content – Scope of Application and Liability for a Lack of Conformity', in I Claeys and E Terryn (eds), Digital Content & Distance Sales. New Developments at EU Level (Intersentia, 2017) 113.
4 See, eg, J Cornelius-Winkler, 'Legal Tech – Herausforderung oder Ende der Anwaltschaft (wie wir sie kennen)?' (2020) SpV 7 et seq, with threatening forecasts for the traditional legal services market.
5 For more details, see P Rott, 'Claims Management Services – An Alternative to ADR?' [2016] European Review of Private Law 143.
6 Eg, Myflightright or Airhelp.
7 Eg, the German legal tech tool www.wenigermiete.de (accessed 8 April 2021), which is an automated calculator for rent reduction and for enforcing reimbursement claims of tenants against their landlords. See P Rott, 'Rechtsdurchsetzung durch Legal Tech-Inkasso am Beispiel der Mietpreisbremse – Nutzen oder Gefahr für Verbraucher?' (2018) VuR 443 et seq.
8 See, eg, the AI-based tool DoNotPay, donotpay.com/learn/parking-tickets/ (accessed 8 April 2021).
9 For an overview of different types of legal tech services, see ME Kauffman and MN Soares, 'AI in legal services: new trends in AI-enabled legal services' [2020] Service Oriented Computing and Applications 223–224.

This helps consumers to overcome the psychological, time and financial barriers usually associated with hiring an attorney or other legal professional.

Article 3(5)(a) DCD excludes legal services from its scope only if they are 'services other than digital services', and for describing such 'other' services recital 27 brings in the notion of 'services, which are often performed personally by the trader'. Thus, the pivotal point is the personal performance of the legal professional: if legal services are offered personally, they are outside the scope of the Directive, even if they are transmitted to the consumer by digital means or if the professional has used some software (eg, MS Word or a legal knowledge management system) when producing the output. The same applies if a legal professional as a human being11 offers legal advice online, be it through a virtual meeting, videoconferencing, an online forum or chat.

If, on the contrary, there is no personal (that is, human) performance by the trader and the overwhelming element is a digital service, we are within the scope of the DCD. This covers, among others, legal 'counselling' offered by chatbots, as it involves no human performance. The psychological barrier of consumers described above is also part of the argument: if a consumer wanted personal counselling, he would hire a professional and not use a digital contract-generating service. Legal tech services are characterised by the fact that they automate certain tasks that lawyers usually perform.12 Automation is not human, not connected to a person, but rather inevitably linked to technology. Therefore, if a legal service is based on automation and not on personal performance, ie if the element of human skill is not at the forefront of the service,13 it falls within the scope of the DCD. Consequently, legal document generators or smart contracts should be qualified as digital products subject to the Digital Content Directive, as they are just advancements of existing contract templates. The functioning of legal chatbots is also based on automation and, increasingly, on the use of AI.

The borderline between a digital and a non-digital legal service is, of course, a fluid one, depending on the amount of the human skill element in a particular legal tech business model. It is not possible to make a general statement on whether legal tech services as such fall within the scope of the DCD or not. Intermediation platforms matching consumers with lawyers, for instance, are clearly digital services within the meaning of Article 2(1) DCD, as they allow consumers to share data and interact with other users, ie with lawyers offering their services.14 The contract concluded between a consumer and a lawyer on such a platform, in turn, may very well be outside the scope of the DCD, as the lawyer is providing a personal (human) legal service and uses digital means only for transmitting the output.

10 M Ebers, 'Legal Tech and EU Consumer Law', in M Canarsa, M Durovic, F de Elizalde, L di Matteo, A Janssen and P Ortolani (eds), The Cambridge Handbook of Lawyering in the Digital Age (Cambridge University Press, 2021) 196.
11 Sein and Spindler (n 3) 265.
12 Ebers (n 10) 195.
13 Staudenmayer (n 3) 80.
14 In this sense also Navas (n 3) 88.

If, by contrast, a consumer fills out an online questionnaire and receives automatic legal advice, the contract is about the supply of digital content and hence covered by the DCD if the other requirements for its application are met.15 Finally, of course, software which helps attorneys or other legal professionals to provide their services does not fall within the scope of the DCD, as it is not a business-to-consumer (B2C) contract. Such contracts are subject to national contract law rules.

Legal tech services can fall within the scope of the new DCD only if the consumer either pays a price or provides personal data which are not necessary for the contract performance or legal compliance (Article 3(1) DCD).16 Whereas the legal situation is clear in the case of paid services, the question whether there is 'data as counter-performance' is a trickier one and depends upon the factual situation. Intermediary platforms for attorneys, among others, may use the data-payment model rather than a directly priced one. For example, an Estonian lawyers' intermediation service, Hugo Legal, processes clients' data also for the development of its services, financial and statistical analysis or, upon consent, direct marketing.17 All these purposes go beyond what is necessary for contract performance or compliance with legal requirements, and hence the digital intermediation service falls within the scope of the DCD.

Finally, there are legal tech models where automated (digital) services are combined with 'real' legal services performed by a 'real' legal professional. This is often the case with consumer mass claim enforcement services, prominent examples of which are the various passengers' rights claim management tools. Here, consumers either sell their claims or give a power-of-attorney to the claim management company via digital means, usually including filling out an online template. Afterwards, however, a 'real' lawyer representing the claim management company takes the claims to court and participates in the court proceedings if no out-of-court agreement can be reached. The latter services surely are 'other services' with human intervention within the meaning of Article 3(5)(a) DCD and hence outside the scope of the Digital Content Directive. Consequently, the contract between the consumer and the claim management company should be qualified as a mixed or bundle contract within the meaning of Article 3(6) DCD18 – with the consequence that the digital part of it falls under the DCD and the human part is subject to the national contract law regulating legal services contracts.19

15 Geiregat and Steennot (n 3) 113.
16 On data as counter-performance, see A Metzger, 'Data as Counter-Performance: What Rights and Duties Do Parties Have?' [2017] Journal of Intellectual Property, Information Technology and Electronic Commerce Law 2–5; A Metzger, 'Verträge über digitale Inhalte und digitale Dienstleistungen: Neuer BGB-Vertragstypus oder punktuelle Reform?' [2019] JZ 577, 579; D Staudenmayer, 'The Directives on Digital Contracts: First Steps Towards the Private Law of the Digital Economy' (2020) 2 European Review of Private Law 217, 224–226.
17 Privacy policy of Hugo Legal, available at hugo.legal/privaatsustingimused/ (accessed 8 April 2021).
18 Similarly Navas (n 3) 92. She also argues that the document automatically created by the system and digitally signed and downloaded by the consumer should be considered 'digital content' within the meaning of Art 2(1) DCD; ibid 89. I would rather argue for qualification as a digital service, because the consumer is primarily interested not in having the digital document but in getting a service which in the end leads to the settlement of his claim.

It may, yet again, be different if the digitally gathered claims are automatically submitted to the court via an electronic order-for-payment procedure: depending upon the structure of the digital order-for-payment procedure in a given country,20 it may well be that the human intervention (legal service offered by a human being) is practically non-existent and thus the whole contract could fall within the scope of the DCD.

Finally, I would argue that it is not possible to differentiate between the contract for digital content and the underlying contract for legal services, as suggested by Navas. She argues that in the case of a contract creation tool we should differentiate between the conformity criteria under the DCD applying to the technical tool and the incorrect or not updated 'content' of the 'digital content', which would constitute a lack of conformity of the underlying legal services contract.21 She describes situations where the inaccurate legal information provided by a legal tech tool was caused by incorrect instructions used for designing the algorithm, or where the system processes the data provided by the consumer incorrectly, and concludes that this is not a lack of conformity of the digital content or service as the defect is not of a technical nature.22 Similarly, the explanatory memorandum of the German draft transposition legislation states that, concerning the liability rules for eg contract generators and legal chatbots, one should differentiate between the content and results of the services on the one hand and the technical supply of the digital products on the other; only the latter is subject to the digital content rules.23

In my view, such differentiation would lead to unnecessary complexity, as it would always require differentiation between the contract for digital content/digital services and the underlying contract about the subject matter of the legal relationship. This differentiation would apply not only to legal tech services but to all types of digital services, creating a situation where we would always have two different contractual regimes – one for the 'technical part' and one for the 'subject matter' part – even if in fact there is just one contract concluded between the parties.

19 Recital 33 DCD. Of course, in addition to the national contract law rules, European consumer law directives other than the Digital Content Directive may be applicable here as well. The most important of them are surely the Unfair Terms Directive and the Consumer Rights Directive.
20 See, for instance, the consumer protection problems related to the automation of the Estonian digital order-for-payment procedure in P Kalamees and K Sein, 'Effective protection of consumers by the UCTD in order for payment procedure: the Estonian example' (2020) 1 International Comparative Jurisprudence 46–61.
21 Navas (n 3) 93.
22 ibid 93–94.
23 'Sofern die Regelungen jedoch auf den entsprechenden Dienstleistungen vorgelagerte oder diese ergänzende digitale Produkte Anwendung finden, zum Beispiel bei Legal-Tech-Angeboten wie Dokumentengeneratoren oder Legal-Chatbots, ist mit Blick auf das Gewährleistungsrecht zwischen den Inhalten und den Ergebnissen der Dienstleistung einerseits und der durch den Untertitel 1 geregelten Gewährleistung für die technische Bereitstellung des digitalen Produkts andererseits zu differenzieren.' ('However, insofar as the rules apply to digital products that precede or supplement the corresponding services, for example legal tech offerings such as document generators or legal chatbots, a distinction must be drawn with regard to warranty law between the content and results of the service on the one hand and the warranty, regulated by Subtitle 1, for the technical supply of the digital product on the other.') Referentenentwurf des Bundesministeriums der Justiz und für Verbraucherschutz, Entwurf eines Gesetzes zur Umsetzung der Richtlinie über bestimmte vertragsrechtliche Aspekte der Bereitstellung digitaler Inhalte und digitaler Dienstleistungen, 45. It is interesting, however, that the same explanatory memorandum treats tax advice software as falling entirely under the digital content rules, including the rules on the updating obligation; ibid 65.

Differentiating between the technical and substance parts would mean the application of the DCD to the technical part of financial services apps,24 gambling software or telemedicine, although Article 3(5) DCD clearly excludes these areas from the Directive's scope. The real problem with such a solution is that the two parts would be governed by two different legal regimes, the technical part being subject to the DCD and the 'subject matter' part to national contract law. This would surely lead to controversies, eg concerning the consumer's remedies and the liability periods, as the consumer would then have to use remedies for the digital content's technical fault under one legal regime but remedies for the defective subject matter under another. To give a simple example: if I receive a digital rental contract from a contract generator and the contract is a) infected with a virus and b) marred by legal defects, then concerning the virus I should use remedies under the DCD, but concerning the substantive defects I should resort to a different legal regime. This surely cannot be a reasonable outcome.

Differentiation between the technical and subject matter parts of the contract would also create proof problems for the consumer, who is as a rule not able to tell whether, for example, a loss of money in his bank account was caused by a defective banking service or a technical fault of the banking app. And finally, there is nothing in the recitals of the Directive indicating that such differentiation was the intention of the legislators. On the contrary, recital 26 DCD stresses that electronic files required in the context of 3D printing are within the scope of the Directive. These 3D printing files are similar to some of the legal tech tools, as they are also based upon human instructions.

III.  Objective Conformity Criteria for Legal Tech Services

A.  No one-size-fits-all Rule

Although legal tech services offer consumers easy-to-access and affordable legal help, they do not come without risks. First, there may be shortcomings as to the quality of the legal service: for example, the drafted contract may violate mandatory rules of the applicable law or neglect certain contractual risks, or a chatbot may suggest steps of legal action that are against the interests of the consumer. If a legal tech service is subject to the DCD, it must meet both the subjective and objective conformity requirements under Articles 7 and 8 DCD.

24 Indeed, Geiregat and Steennot suggest applying the DCD also to banking apps, as far as their conformity is concerned. Geiregat and Steennot (n 3) 113.

Whereas the subjective conformity criteria do not pose a particular problem, as they are shaped by the service provider itself, the objective conformity criteria aim at raising the EU-wide consumer protection standard, setting mandatory criteria that can be deviated from only via the 'express and separate' consent of the consumer.25 Ebers has observed that the lack of established quality standards leaves consumers unprotected from the poor quality of legal tech services.26 While it is true that currently no such standards exist, it is necessary to scrutinise whether such standards could be developed under the Directive, as the objective conformity criteria constitute an autonomous concept of EU law.27

The main element of the objective conformity requirement is the 'fit-for-purpose test' in Article 8(1)(a) DCD, setting forth that digital content or a digital service shall be

fit for the purposes for which digital content or digital services of the same type would normally be used, taking into account, where applicable, any existing Union and national law, technical standards or, in the absence of such technical standards, applicable sector-specific industry codes of conduct.28

In addition, the legal tech solution must have the quality and performance features which are normal for legal tech services of the same type and which a consumer may reasonably expect (Article 8(1)(b) DCD).

First, it is important to stress that there is no possibility to develop one-size-fits-all conformity criteria which would apply to all types of legal tech solutions. The conformity requirements are very much dependent upon the nature of the particular legal tech service. This is evident already from the wording 'normal for digital content or digital services of the same type' in Article 8(1)(b) DCD. It is self-evident that a contract-generating tool is not the same type of service as a mass claims enforcement application or an online platform for legal professionals.29

25 See Art 8(5) DCD.
26 Ebers (n 10) 205.
27 K Sein and G Spindler, 'The New Directive on Contracts for Supply of Digital Content and Digital Services – Conformity Criteria, Remedies and Modifications – Part 2' (2019) 15(4) European Review of Contract Law 365, 368.
28 On the objective conformity criteria see, eg, C Twigg-Flesner, 'Conformity of Goods and Digital Content/Digital Services' 24–27, papers.ssrn.com/sol3/papers.cfm?abstract_id=3526228 (accessed 8 April 2021); JM Carvalho, 'Sale of Goods and Supply of Digital Content and Digital Services – Overview of Directive 2019/770 and 2019/771' (2019) 8 European Competition Journal 198–199; Staudenmayer (n 16) 235–237.
29 The contractual relationship between a consumer and the platform provider falls within the scope of the DCD, but only concerning the digital services of the platform and not concerning the goods and services offered there. On the objective conformity criteria for online platforms under the DCD in general and the platform operator's obligation to check the quality of the market participant, see G Spindler, 'Role and Liability of Online Platforms Providing Digital Content and Digital Services – some preliminary thoughts, including impact of the Digital Content Directive' 10–15, papers.ssrn.com/sol3/papers.cfm?abstract_id=3550354 (accessed 8 April 2021). It is doubtful whether EU consumer law directives are applicable to the relationship between consumer and platform provider: Ebers (n 10) 207.

Therefore, even the question whether and for how long the service provider is subject to the updating obligation30 depends on the nature of a particular service. For example, in the case of a contract drafted by a contract generator, the service provider is not obliged to update the contract even if the legal rules applicable to the contract change afterwards. On the other hand, if I have a subscription to a contract management program as a continuous digital service over a period of time, then of course the trader must ensure its legal compliance throughout the whole contract period.31

Admittedly, there are certain objective conformity requirements that are common to all legal tech services as they derive from the ‘digital nature’ of the services. These requirements include, for instance, compatibility and accessibility,32 compliance with IT-security standards, making available (security) updates33 as well as compliance with privacy by design and by default principles.34 The details of the latter of course again depend upon the model of a particular legal tech service. Concerning security standards, one must admit that the DCD does not define what kind of technical security standards are to be expected in a contractual context. Currently there are no harmonised security standards at the EU level as the EU regulation of IT-security leaves setting the standards within the competence of the Member States.35

Non-compliance with Union and national legal rules may also constitute a lack of conformity under the DCD as Article 8(1)(a) links the fit-for-purpose test with ‘any existing Union and national law’ – but only where the Directive’s conditions for lack of conformity are met.36 Importantly, national mandatory contract law rules, eg for services contracts, should not be qualified as ‘national law’ within the meaning of Article 8(1)(a) DCD. The question whether national legal profession rules as public law rules may constitute such national law rules will be dealt with below in a separate section. It is, however, conceivable that the legal tech industry will develop pan-European industry codes of conduct which will then co-determine the objective conformity requirements.

In addition to the conformity criteria stipulated in the DCD, conformity requirements of a legal tech service may derive from pre-contractual information obligations set forth in the Consumer Rights Directive (CRD).37 For example, Article 6(1)(r) and (s) CRD oblige legal tech services to provide information

30 The updating obligation under the DCD has been called ‘a ground-breaking development’ by Staudenmayer (n 3) 133. 31 Art 8(4) DCD. 32 Art 8(1)(b) DCD. 33 See Art 8(2) DCD. 34 See recital 48 DCD. 35 Sein and Spindler (n 27) 369. 36 Staudenmayer (n 3) 137. 37 Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011 on consumer rights, amending Council Directive 93/13/EEC and Directive 1999/44/EC of the European Parliament and of the Council and repealing Council Directive 85/577/EEC and Directive 97/7/EC of the European Parliament and of the Council [2011] OJ L304/64.

about, for instance, the functionality of digital services, including applicable technical protection measures and any relevant compatibility and interoperability of the digital service with hardware and software that the trader is aware of or can reasonably be expected to have been aware of. As such pre-contractual information forms an integral part of the contract under Article 6(5) of the CRD, any breach of it entitles the consumer to remedies under the DCD.

B.  Contract Automation Tools

Let us explain the meaning of objective conformity criteria using the example of a contract drafting tool. Consumers may use such tools for creating rental contracts or contracts for the sale of a car, for example. First, it is obvious that the contract generator must perform what it has promised, be it based on the contract with the consumer or on what the consumer may reasonably expect, taking into account any public statement made by the trader.38 Hence, if the webpage of the service contains a statement that:

Your contracts will be error-free, regardless of length or complexity. All you have to do is to answer a few questions in a smart questionnaire, and the contract is created automatically with compliant terms,39

it surely creates a legitimate expectation that the contract will be generated in compliance with the applicable law.40 In other words, the legal tech provider must make sure that the law, including the rules on unfair terms, is correctly translated into code. It is in my view not important whether this is done using an attorney-at-law or not, as long as the outcome, ie the contract, is in conformity with the legal requirements existing at the time of the conclusion of the contract. This includes ‘updating’ the generator in order to incorporate all relevant changes in the applicable law.41 Moreover, the generated contracts should be jurisdiction-specific, considering, among other things, the different provisions on unfair terms in different Member States.

The same will be true in my opinion even if there are no such statements on the webpage of the legal tech service. If someone is offering to generate contracts for consumers, then an average consumer surely has a reasonable expectation that the contract is compliant with the applicable legal rules: this derives already from

38 Arts 7 and 8(1)(b) DCD. 39 See the webpage of the legal tech tool Precisely: www.precisely.se/contract-generator/?utm_medium=cpc&gclid=EAIaIQobChMI-6q2g4aP7QIVUEKRBR31hQ3jEAAYAiAAEgI_nPD_BwE&utm_source=google&utm_term=%2Bcontract%20%2Bgenerator (accessed 8 April 2021). 40 The same is true for the statement ‘Aktuell & sicher: Immer auf dem neuesten Stand von Recht und Gesetz’ (‘Up to date & secure: always reflecting the latest state of the law’) on the webpage of the German legal tech service SmartLaw. See www.smartlaw.de/ (accessed 8 April 2021). 41 Of course, the consumer may not reasonably expect that the contract is updated later when the law changes unless this is for some reason specifically agreed between the parties.

the nature of such a digital service.42 If the legal tech provider does not wish to take responsibility for the accuracy of its tool, then one option is to offer it for free, ie without demanding a price or personal data: in these cases the service falls outside the scope of the DCD and hence of the objective conformity criteria rules. The other option for the trader would be to derogate from the objective conformity requirements, but this can only be done via an ‘express and separate’ statement under Article 8(5) DCD.43 As describing a digital service by listing certain negative criteria does not amount to an ‘express and separate’ statement,44 it is surely not enough to just declare in the terms and conditions or in pre-contractual information that there might be errors or that the service provider does not guarantee compliance with the legal rules.45 The potential shortcomings of the generator should therefore be specifically brought to the attention of a consumer so that he can decide whether or not to run the risk of a defective legal product and, possibly, a later lawsuit. Of course, liability of the legal tech provider may be excluded in cases where the lack of conformity results from the consumer’s behaviour, eg incorrect data entry or non-disclosure of certain important information. The preconditions for such liability exclusion are not regulated in the Directive and are hence subject to national law.

IV.  Legal Profession Rules as Objective Conformity Criteria under DCD?

Most European countries subject legal professionals to some public law rules regulating who may offer legal services and under what conditions. The applicability of these rules depends upon whether a particular legal tech service can be qualified as a legal service under a particular national legal profession law.46 As we recall, under Article 8(1)(a) DCD the digital content or digital service must be fit for the purposes for which digital content or digital services of the same type would normally be used, taking into account, where applicable, any existing Union and

42 Similarly for German law see M Fries, ‘Staatsexamen für Roboteranwälte?’ [2018] ZRP 163–164. He points out that the times when all online services had a touch of experimentalism are over by now and that consumers expect to be able to rely upon the suggestions of legal tech providers without additionally hiring a lawyer. 43 Cf Art 8(5) DCD: ‘/ … / the consumer was specifically informed that a particular characteristic of the digital content or digital service was deviating from the objective requirements for conformity / … /’. 44 I Bach, ‘Neue Richtlinien zum Verbrauchsgüterkauf und zu Verbraucherverträgen über digitale Inhalte’ [2019] NJW 1705, 1708. 45 D Staudenmayer, ‘Auf dem Weg zum digitalen Privatrecht – Verträge über digitale Inhalte‘ [2019] NJW 2498; Sein and Spindler (n 27) 376; A Sattler, ‘Urheber- und datenschutzrechtliche Konflikte im neuen Vertragsrecht für digitale Produkte’ [2020] NJW 3623, 3626. 46 S Fina, I Ng and R Vogl, ‘Perspectives on the Growth of DIY Legal Services in the European Union’ [2018] Journal of European Consumer and Market Law 241–242.

national law. One can wonder whether national public law rules on legal professions could be considered ‘any existing national law’ which would affect the fitness test or the ‘reasonably-expectable-quality test’ and hence co-determine the objective conformity criteria. For example, should a legal tech service, in order to be compliant with the contractual standards under the DCD, have the necessary licence or authorisation to carry out legal services as potentially required under the applicable national law, follow certain ethical duties or have an indemnity insurance?

Licensing obligation. The first issues to arise are surely possible national restrictions on providing legal services. For example, in Germany the current debate about legal tech47 concentrates mostly on the question of whether or not their activity is prohibited under the Legal Services Act (Rechtsdienstleistungsgesetz; RDG).48 The courts as well as legal literature have been divided on that matter, especially as far as mass claim enforcement and contract generators are concerned.49 In its landmark decision of 27 November 2019, the German High Federal Court (BGH) allowed the activities of the legal tech provider LexFox, offering claims management services under section 10 RDG, but left a considerable amount of uncertainty as it emphasised the individual merits of the case.50 One may ask, therefore, whether non-compliance with such national legal professional rules and limitations would constitute a lack of conformity, ie a breach of objective conformity criteria under Article 8 DCD. More precisely, one could argue that a legal tech service which is not compliant with such national rules is not fit for the purposes for which legal tech services of the same type would normally be used.51 If one followed such an approach, then using legal tech services which do not possess a necessary licence would entitle consumers to contractual remedies, eg price reduction or contract termination.52

Such an interpretation should not be followed, for the following reasons. First, it would lead to conflicts with the legal consequences that the national legal system foresees for such breaches. Under German law, for example, assignment of a claim by a consumer to a claims management company that is not allowed to provide the requested service is, at least in principle, considered to be invalid under section 134 BGB.53 Hence, under German law we are not talking about a

47 See, eg, F Remmertz, ‘Automatisierte Rechtsdienstleistungen im RDG’ [2019] ZRP 139 et seq. 48 On the discussion see, eg, TA Degen and B Krahmer, ‘Legal Tech: Erbringt ein Generator für Vertragstexte eine Rechtsdienstleistung?’ [2016] GRUR-Prax 363 et seq; M Fries, ‘Rechtsberatung durch Inkassodienstleister: Totenglöcklein für das Anwaltsmonopol?’ [2020] NJW 193 et seq; F Skupin, ‘Gebündelte Geltendmachung abgetretener Forderungen aus sog. Lkw-Kartell verstößt gegen Rechtsdienstleistungsgesetz’ [2020] GRUR-Prax 116 et seq. 49 See K Sein and P Rott, ‘Obstacles to Legal Tech Services: Examples from Germany and Estonia’, forthcoming, with further references. 50 Decision of the German High Federal Court (BGH) of 27.11.2019 – VIII ZR 285/18; discussed, eg, by S Meul and N Morschhäuser, ‘Legal Tech-Unternehmen im Fahrwasser der Inkassolizenz – wird die Ausnahme zur Regel?’ [2020] CR 101–107. 51 On this principle, see Twigg-Flesner (n 28) 10–11, 13–14. 52 Art 14 DCD. 53 BGH (n 50).

non-conforming service by the legal tech company but instead about nullity of the assignment, with the consequence that the consumer remains the owner of his claim. Moreover, issues of validity and nullity of contracts are explicitly reserved to the national law under Article 3(10) DCD. Second, the consumers’ remedies regime under the DCD is not suited to deliver reasonable results for violations of legal profession law. As bringing the legal tech service into conformity with the contract54 is not possible, the consumer’s choice of remedies is restricted to price reduction and termination. Price reduction, in turn, would not be a suitable remedy as it would be impossible to determine the objective decrease in the value of the non-authorised legal tech service, and it is not available in cases where the consumer had not paid a price but had only provided his personal data.55 Finally, in case of termination we would be left dealing with the question whether lack of authorisation qualifies as a non-minor lack of conformity or not.56 In short, the DCD is in my view not applicable to breaches of national authorisation or legal professional rules, and the consequences of such breaches are to be dealt with under the national law.

Ethical duties under legal profession regulation. Further issues emerge regarding ethical standards and the fairness of contracts concluded with legal tech companies. Whereas lawyers are subject to legal ethics regulations including the duty of confidentiality and the obligation to prevent conflicts of interest, the same does not apply to services provided by legal tech companies.57 Again, one may ask whether such ethical duties could be considered objective conformity criteria under the DCD entitling consumers to remedies if not complied with.

One must bear in mind that confidentiality and loyalty are ethical concepts which are connected to a human being.58 Software cannot be loyal, nor can it keep secrets within the classical meaning. Of course, the legislator could opt for extending confidentiality obligations to legal tech companies as well. Under the current rules one can argue that consumers using a legal tech tool have a reasonable expectation that data given by them to the legal tech provider will not be made available to other persons.59 Concerning personal data, one can develop such an obligation via the privacy by design principle as part of the objective conformity criteria.60 Application of the same confidentiality standards as in force for ‘normal’ lawyers would mean that already the fact of seeking legal advice is confidential if not agreed otherwise. Roughly the same outcome could be achieved under data protection law as, for example, selling personal data of legal tech users to third

54 Art 14(1) DCD. 55 See Art 14(5) DCD. 56 Art 14(6) DCD. 57 Ebers (n 10) 210. 58 Cf Fina, Ng, Vogl (n 46) 244: ethical duties are not imposed on companies. 59 Art 8(1)(b) DCD obliges the digital content or digital service to possess the qualities normal for digital content or digital services of the same type and which the consumer may reasonably expect, given the nature of the digital content or digital service. 60 On the privacy by design principle as part of objective conformity criteria see recital 48 DCD and Sein and Spindler (n 27) 371–372.

parties such as advertising companies requires a free, clearly distinguishable and transparent consent.61 Similarly, if the confidentiality obligation were considered part of the objective conformity criteria, derogation from it would be possible only with express and separate consent under Article 8(5) DCD.

Concerning the conflict of interest rules preventing lawyers from representing both sides of the same individual case, the pivotal question is whether legal chatbots informing consumers about legal issues are, at the current level of technical development, capable of advising consumers in their specific cases taking into account their particular circumstances. This is arguably not yet the case.62 True, there are already tools on the market which can predict the success of a complaint better than classical attorneys.63 Predicting the outcome of an individual case is, however, not identical with giving legal advice on an individual matter, as understanding the specifics of an individual case and applying legal provisions to it cannot be replaced by the pattern or frequency analysis of software. This becomes especially clear where the moral values underlying norms are concerned.64 Legal bots are therefore not subject to the conflict of interest rules of the legal profession codes65 and in my view such a duty should not be derived from the objective conformity criteria under the DCD either. Their relationship with the advice-seeking consumer is not comparable with the attorney-client relationship as the bots do not deal with the particularities of each separate case; they do not go into much detail, assess the situation concerning evidence or gather evidence for their ‘client’.66 There are also no legal guarantees available for chatbots as there are for attorneys. Consequently, there is no need or justification to subject them to the duty of avoiding conflicts of interest. The situation may change with technical development, so that at some point such objective conformity criteria might be developed. The interests of consumers would, however, be better protected if the European legislator adopted respective public law rules for such legal tech providers.

Mandatory indemnity insurance. Typically, under national legal services regulations, practising lawyers are required to have professional indemnity insurance to cover potential damage risks caused by their services. If legal tech companies are not subject to the national legal services rules, they are also not obliged to conclude such an indemnity insurance contract.67 One may therefore wonder whether a consumer could normally expect from a legal tech service that such an insurance

61 Art 7 GDPR. 62 S Hähnchen, PT Schrader, F Weiler and T Wischmeyer, ‘Legal Tech. Rechtsanwendung durch Menschen als Auslaufmodell?’ [2020] JuS 626; cf Fina, Ng, Vogl (n 46) 243. 63 See, eg, www.case-crunch.com/ (accessed 8 April 2021) claiming that ‘the network had a better grasp of the importance of non-legal factors than lawyers’. 64 Hähnchen, Schrader, Weiler and Wischmeyer (n 62) 628–629. 65 Similarly, Fina, Ng, Vogl (n 46) 241 for contract review automation services. 66 Determining the facts of an individual case is one of the core functions of law application and is capable of automation only in clear-cut cases. Hähnchen, Schrader, Weiler and Wischmeyer (n 62) 628. 67 Ebers (n 10) 212.

exists. In other words, could the obligation to take out indemnity insurance be based on the objective conformity requirement under Article 8 DCD? Here the answer should be negative: implementing mandatory insurance via contractual standards would in my view go too far, especially from the legal certainty point of view. Mandatory indemnity insurance should rather be imposed by law, which should determine inter alia criteria such as the indemnity sum, exclusions of insured risks, etc. It is not possible to derive these principles from the general objective conformity rules of the DCD.

V.  Possibility to Exclude or Limit Liability for Legal Tech Services in Standard Terms

Finally, we must briefly analyse the possibilities to exclude or limit the legal tech provider’s liability under the DCD. As, for example, legal compliance of a generated contract is considered part of the objective conformity criteria, which can be derogated from only by an ‘express and separate consent’ of the consumer,68 the next question logically follows: are legal tech providers allowed to exclude or limit their liability in their general terms and conditions? Can they exclude price reduction, limit the amount of damages or make the use of remedies dependent on gross negligence?

The provisions of the DCD are mandatory under Article 22(1) and can be derogated from to the detriment of the consumer only after the lack of conformity is brought to the trader’s attention by the consumer. Due to Article 22 DCD, contract terms excluding or limiting consumers’ remedies – that is, the right to repair, the right to termination or price reduction under the DCD – are not binding on the consumer, and their validity is not to be assessed under the Unfair Contract Terms Directive (UCTD).69 However, as damages claims are outside the scope of the DCD, the question whether and under what conditions damages claims can be limited or excluded is subject to national law. The national law, however, must comply with the requirements of the UCTD70 and ban standard terms unfairly restricting or limiting consumers’ damages claims.71 Although, as is known, the annex of the Directive is not mandatory for the Member States72 and rather contains an ‘indicative and non-exhaustive list’ of unfair terms,73 we must recall that under point 1(b) of the

68 See Art 8(5) DCD and above at III.B. 69 Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts [1993] OJ L95/29. 70 Similarly, Navas (n 3) 94. 71 According to Ebers, it is currently unclear whether disclaimers such as ‘the service provider assumes no liability for any errors or omissions in the information contained in the Service and expressly disclaims any responsibility to update this information’ can be regarded as ‘unfair’ and ‘non-binding’ under the UCTD. Ebers (n 10) 211. Under Art 22(1) DCD such clauses are, in my view, non-binding as far as they exclude the consumer’s right to repair, right to termination or price reduction. 72 C-478/99 Commission v Sweden [2002], paras 20, 22. 73 Art 3(3) UCTD.

Legal Tech Solutions as Digital Services under the Digital Content Directive  149 annex standard terms inappropriately excluding or limiting the legal rights of the consumer vis-à-vis the other party in the event of total or partial non-performance or inadequate performance by the seller or supplier of any of the contractual obligations are considered unfair and thus not binding on the consumer.74 The Court of Justice has also clarified that while a contract term which corresponds to that of a term included in that annex does not in itself suffice to establish whether that term is unfair, it is nevertheless an essential element to take into account when deciding upon its unfairness.75 It cannot be excluded, though, that national law provisions regulating the possibility to limit the liability of the regulated legal service provider will also come into play – provided that the legal tech service falls within the scope of the legal profession rules. For example, section 47 of the Estonian Bar Association Act76 allows the attorney to limit its liability except for intent and gross negligence. Although the Court of Justice has clarified that the UCTD is applicable to the contract between an attorney and a client,77 a legal profession rule regulating the possibility to contractually exclude or limit the liability is dispositive law within the meaning of Article 1(2) UCTD with the consequence that the unfair terms provisions are not applicable. Hence, a passengers’ rights enforcement company owned by an attorney can make use of such limitation. One caveat, though, is that, again, one must differentiate between various types of legal tech services. Intermediation platforms, for example, are not generally liable for the quality of advice given by the lawyers hired over their platform unless there are good reasons for such liability (eg the platform provider is aware of the low quality of a particular lawyer but keeps offering its services on the platform). The platform provider can refer to the safe harbour privileges of Article 14 of the E-Commerce Directive, provided that it remains a ‘passive’ intermediary.78 Therefore, intermediation platforms are strictly speaking not contractually limiting their liability if they state in their terms and conditions that they are not liable for the services of the lawyers.

VI.  Legal Tech Services and the Country-of-origin Principle of the E-Commerce Directive

Another debated issue related to legal tech companies is whether their services could be subject to the country-of-origin principle of the E-Commerce Directive. In this context, it must be stressed that this Directive does not include a general exclusion for legal services; the only exclusion for legal professions in its Article 1(5)



74 Art 6(1) UCTD. 75 C-472/10 Invitel [2012], para 26. 76 RT I, 22.12.2020, 38. 77 C-537/13 Šiba [2015], paras 23–24. 78 Spindler (n 29) 6–7 with further references.

concerns the activities of notaries as well as the representation of a client and defence of his interests before the courts. These exclusions are not relevant for legal tech companies, with the consequence that in principle legal tech services as such are not excluded from the scope of the E-Commerce Directive.

As the E-Commerce Directive applies only to providers of information society services,79 we must first determine whether legal tech services could be qualified as information society services. In this context it is important to stress that whether legal tech services may be subject to a national authorisation requirement does not depend upon how a Member State defines legal services but rather upon how the E-Commerce Directive defines information society services. If a legal tech provider can be classified as an information society services provider within the meaning of the E-Commerce Directive, then it can rely upon the country-of-origin principle in its Article 3.

Information society services are defined in Article 1(b) of Directive 2015/153580 as any service normally provided for remuneration, at a distance, by electronic means and at the individual request of a recipient of services. An example of a service which is not provided ‘by electronic means’ and hence is not an information society service is given in point 2(e) of its annex I: ‘telephone/telefax consultation of a lawyer’.81 Legal tech solutions are, however, something different from just a telephone or online consultation of a lawyer – they involve the provision of smart contracts, contract generators or counselling by chatbots, for example.

The Court of Justice has given guidelines about the meaning of an information society service in several cases dealing with electronic intermediary platforms for accommodation rental and car sharing. In these cases the Court of Justice explained that despite being provided electronically, at a distance, at the individual request and against a remuneration,82 an electronic intermediation service should not be qualified as an information society service if such service ‘forms an integral part of an overall service whose main component is a service coming under another legal classification’.83 The service ‘coming under another legal classification’ would

79 Art 1(1) of the E-Commerce Directive. 80 Directive (EU) 2015/1535 of the European Parliament and of the Council of 9 September 2015 laying down a procedure for the provision of information in the field of technical regulations and of rules on Information Society services [2015] OJ L241/1. 81 Legal services provided offline and only transmitted online are not information society services within the meaning of the E-Commerce Directive and hence cannot profit from its country-of-origin principle. Such services are subject to the Services Directive and specific directives such as Directive 2005/36/EC on the recognition of professional qualifications as well as Directives 77/249/EEC and 98/5/EC. Under these Directives authorisation or licence requirements are in principle legitimate. 82 Whereby the CJEU has clarified that an information society service can also be paid for with income generated by advertisements. See C-291/13 Papasavvas [2014], para 30; C-484/14 McFadden [2016], paras 42, 43. 83 C-434/15 Asociación Profesional Elite Taxi [2017], para 40; C-390/18 AirBnB [2019], para 50; C-62/19 Star Taxi App [2020], para 49.
On the AirBnB case see L Van Acker, ‘C-390/18 – The CJEU Finally Clears the Air(bnb) Regarding Information Society Services’ [2020] Journal of European Consumer and Market Law 77 et seq. For a critical view on the Uber case: MY Schaub, ‘Why Uber is an information society service’ [2018] Journal of European Consumer and Market Law 109 et seq.

in the case of legal tech services surely be the provision of legal services, and the crucial question therefore is whether the main component or core of a legal tech service is a ‘classical’ legal service offered offline. In the end, this leads us into circular reasoning: in order to classify an information society service, we still need to answer what a legal service is. One must further admit that the criteria developed in the Court of Justice jurisprudence on intermediary platforms are ill-suited to legal tech services (except for intermediary platforms for lawyers) as there are no third parties such as Uber drivers or AirBnB guests involved. Therefore, the criterion of ‘decisive influence’ used by the Court of Justice to differentiate between intermediary and transport services84 is not really of use when trying to qualify legal tech services as information society services.

One must therefore try to apply the concept of a ‘mixed contract’ developed in the Ker-Optika case, where the CJEU decided that one should differentiate between two services, namely the online service (the sale of contact lenses) and the offline service (the delivery of the contact lenses). The CJEU concluded that the online service is an information society service while the offline service is not.85 However, an online-offline division is not possible in the case of legal tech services as there is no offline activity inherently linked to the legal tech service that could transform its activities into an overall service outside of the scope of information society services.86 One is therefore left with the question whether legal tech services form an integral part of legal services.

Here again one has to differentiate between different legal tech solutions. I would argue that intermediary platforms for lawyers as well as highly automated legal tech services (eg contract generators, smart contracts or counselling by chatbot) often do constitute an information society service within the meaning of the E-Commerce Directive as their activities by their very nature can be carried out at a distance and by electronic means, and they do not need human input or ‘physical examination’.87 Intermediary platforms are not legal services in the strict sense as they are just intermediaries and not legal services providers: their services do not form an ‘integral part of an overall service whose main component is a service coming under another legal classification’.88 Contract generators are usually based very much on the same principle as fill-in-yourself electronic contract templates –

84 See on that criterion in depth P Hacker, ‘UberPop, UberBlack, and the Regulation of Digital Platforms after the Asociación Profesional Elite Taxi Judgment of the CJEU’ (2018) 14(1) European Review of Contract Law 80–96. 85 C-108/09 Ker-Optika [2010], paras 29–40, 77. 86 This was the argument used by Hacker for social media providers and search engines, but it applies equally to legal tech services. Hacker (n 84) 90. 87 See recital 18 of the E-Commerce Directive, which excludes ‘activities which by their very nature cannot be carried out at a distance and by electronic means’ from the notion of information society services. The recital gives statutory auditing of company accounts and medical advice requiring the physical examination of a patient as examples of services which should not be considered information society services. 88 Elite Taxi (n 83) para 40; AirBnB (n 83) para 50.

and we do not qualify publishers of legal templates as legal services providers.89 Also, legal chatbots are – at least for now – just following template-based questionnaires and mostly give legal information, not legal advice in an individual case. Consequently, such legal tech services should be qualified as information society services and their providers should therefore be able to rely upon the country-of-origin principle.90 Specifically, no licence or authorisation may be required from them due to Article 4(1) of the Directive.91 If they do not constitute legal services in the strict sense, then they also cannot be subjected to general authorisation schemes ‘not specifically and exclusively targeted at information society services’ under Article 4(2), such as licensing obligations under national legal profession rules.92 Instead, the ‘taking up of the activity of an information society service, such as requirements concerning qualifications, authorisation or notification’ is determined by the law of their country of origin.93 The practical consequence of such an interpretation of course involves a certain ‘race to the bottom’ as legal tech service providers could then establish themselves, for example, in Estonia, where the legal services regulation is extremely liberal,94 and offer their services online also to the consumers of other Member States.

Such a liberal interpretation of the notion of an information society service is of course not a universally accepted view. It is argued that if regulated legal services are provided via an electronic medium, then national authorisation requirements are still allowed under Article 4(1) of the Directive.95 Indeed, the German High Federal Court has subjected all legal services offered from abroad to German clients – even if they also contain information society services – to the German legal profession rules,96 which seems to consider information society services as being an integral part of a legal service.

89 In Germany, the Cologne High Regional Court ruled that a contract generator does not offer legal services; the case is pending at the German High Federal Court. Decision of Cologne High Regional Court of 19 June 2020 – 6 U 263/19. 90 According to the annex of the E-Commerce Directive, the country-of-origin principle does not extend to ‘contractual obligations concerning consumer contracts’. Hence, the provisions, especially the objective conformity criteria, of the DCD remain untouched. 91 Art 4(1) states: ‘Member States shall ensure that the taking up and pursuit of the activity of an information society service provider may not be made subject to prior authorisation or any other requirement having equivalent effect.’ 92 It is unclear whether the reasoning of the CJEU in the Star Taxi App case could lead to a different conclusion. The CJEU ruled that a mere extension of the scope of pre-existing rules to taxi dispatching services using IT applications does not constitute an authorisation scheme which is specifically and exclusively targeted at such services within the meaning of Art 4(1) of the E-Commerce Directive. Star Taxi App (n 83) paras 80–82. 93 Art 2(i) of the E-Commerce Directive. Arguably this is the prevailing opinion also in Germany, with the consequence that the authorisation requirement of the German Rechtsdienstleistungsgesetz would not be applicable to online services established in another Member State but offering their legal services on the German market. See M Krenzler, Rechtsdienstleistungsgesetz, 2nd edn (CH Beck, 2017) RDG, s 1, para 98. But Krenzler himself is critical of such a view and justifies an authorisation obligation: ibid para 99. 94 Sein and Rott (n 49). 95 Ebers (n 10) 203. 96 Decision of the German High Federal Court of 5 October 2006 – I ZR 7/04.

Even if legal tech providers can resort to the country-of-origin principle, their activities can be restricted by domestic measures, but only under the conditions set forth in Article 3(4) of the E-Commerce Directive. Under this provision such restrictive measures must be necessary, for example for reasons of public policy or consumer protection; they must be proportionate; and they must also be notified to the European Commission as well as to the Member State where the company is established. The importance of the notification of such measures was highlighted in the AirBnB case, where the Court of Justice stressed that a Member State’s failure to notify a measure restricting the freedom to provide an information society service renders the measure unenforceable against individuals.97 If taken literally, any measure taken under a national legal profession regulation (eg fining the information society service) is without effect if the respective Member State has not notified it to the European Commission.

It is further important to stress that under Article 3(4) of the E-Commerce Directive such domestic measures must be ‘taken against a given information society service’, which has led to the discussion whether legal acts such as the German Rechtsdienstleistungsgesetz can also qualify as such measures98 or whether Article 3(4) of the Directive only allows a particular measure against a particular service provider.99 The latter approach is preferred by the European Commission, which has explained in the context of electronic financial services that ‘a Member State could not, on the basis of Article 3(4), decide that its entire legislation on, say, non-harmonised investment funds was applicable in a general and horizontal fashion to all services accessible to its residents’.100 In the Daniel B case, the CJEU, however, considered national legislation targeted at (online) pharmacies as such measures, as it analysed whether such legislation (and not an injunction or order executed against a particular company) is appropriate and necessary under Article 3(4) of the E-Commerce Directive.101 Concerning one prohibition, the CJEU concluded that it still requires an individual assessment whether it ‘does not result in the provider in question being prevented from carrying out any advertising outside his or her pharmacy’.102 The CJEU was not consistent in applying this individualised approach concerning other restrictive national rules and held that national legislation which requires pharmacies selling medicinal products to include a health questionnaire in the process of ordering medicinal products online does not violate the E-Commerce Directive.103

Hence, due to the country-of-origin principle, legal tech services classified as information society services cannot be subjected to national legal profession rules

97 AirBnB (n 83) paras 96, 100. 98 Krenzler (n 93) RDG, s 1, para 100. 99 In this sense see Nordmeier, in G Spindler and F Schuster (eds), Recht der elektronischen Medien, 4th edn (CH Beck, 2019) TMG, s 3, para 27; G Spindler, ‘Der Regierungsentwurf zum Netzwerkdurchsetzungsgesetz – europarechtswidrig?’ [2017] ZUM 473 (476–478). 100 Communication from the Commission to the Council, the European Parliament and the European Central Bank, COM(2003) 259 final, 5. 101 C-649/18 Daniel B [2020], paras 69–71, 86, 90, 100, 103, 114. 102 Daniel B (n 101) para 74. 103 ibid para 102.

(including a licensing obligation or any other authorisation scheme) in general, but certain restrictive measures based on consumer protection or other public policy purposes can be targeted against legal tech services under the preconditions of Article 3(4) of the E-Commerce Directive.

VII. Conclusion

Due to the diversity of the legal tech landscape, it is not possible to state generally whether legal tech services fall within the scope of the new DCD or not: this depends upon the nature of a particular service. If a legal tech service is based on automation and not on personal (human) performance – as is the case for legal chatbots, contract generators or intermediary platforms for lawyers, for example – it falls within the scope of the DCD. A mixed contract consisting of human and digital services parts (Article 3(6) DCD) is also possible and occurs in the case of mass claim enforcement services, where part of the service is automated but another part (representation in court) is performed personally by attorney(s). It is not, however, possible to divide one automated service (eg a contract generator) into online and offline parts (ie a digital services contract and the underlying legal services contract) and apply the DCD to the ‘technical part’ and national contract law to the ‘substance part’. Such a differentiation would lead to controversies and unreasonable outcomes, for example concerning consumers’ remedies and liability periods, as consumers would then have to pursue remedies for the digital content’s technical faults under one legal regime but remedies for the defective subject matter under another regime.

Indeed, the DCD can provide answers only to certain problems connected with the use of legal tech services104 – namely the quality standards and the mandatory set of consumer’s remedies if these standards are not met. It is, however, mostly not possible to treat requirements of legal profession law (eg a licensing obligation, ethical duties or mandatory indemnity insurance) as part of the objective conformity criteria. It is nevertheless conceivable that the legal tech industry will develop pan-European industry codes of conduct which will then co-determine the objective conformity requirements. Intermediary platforms as well as highly automated legal tech services (eg chatbots or contract generators) often constitute not legal services in the strict sense but rather information society services – with the consequence that they can profit from the country-of-origin principle of the E-Commerce Directive and may not be subjected to national authorisation obligations. However, certain national restrictive measures based on public policy purposes may still be targeted against them if these measures are necessary, proportionate and notified under Article 3(4) of the E-Commerce Directive.

104 Cf Ebers (n 10) 210.

9

Contracting in Code

MEGAN MA*

I. Introduction

Legal pedagogy is experiencing pressure to evolve with the currents of ‘progress’, namely at the hand of artificial intelligence technologies in law. Mireille Hildebrandt’s recent textbook is a prime example. Law for Computer Scientists and Other Folk, as she describes, is an endeavour to ‘bridge the disciplinary gaps’ and ‘present a reasonably coherent picture of the vocabulary and grammar of modern positivist law’.1 Moreover, law schools are beginning to offer technology-driven and dynamic courses including training in computer programming.2 Between law and computer science, there appear to be two opposing poles of thought: one of boundless enthusiasm for the two working in tandem; the other of rampant scepticism. This raises the question: do these two disciplines operate in the same language and, if not, could they?

This chapter seeks to unpack these questions through the case of smart contracts3 and, specifically, the programming languages of these contracts. Reflecting on the philosophies of logic, linguistics, and law, the project turns on the conundrum: what is the significance of the medium in contract drafting? Jacques Derrida questioned natural language and the medium of writing as the accepted form of communication. His argument strikes an interesting parallel to the use of written and descriptive language in law. Derrida considers how writing is perceived as the original form of technology; that ‘the history of writing will

* CodeX Fellow at Stanford Law School; PhD Candidate in Law and Lecturer at Sciences Po. 1 M Hildebrandt, Law for Computer Scientists and Other Folk (forthcoming OUP, 2020). A web version is currently accessible on the open source platform lawforcomputerscientists.pubpub.org/. 2 Law schools are beginning to offer courses in technical development, including computer programming. Moreover, classes that apply design-thinking to legal studies have been developed with the intention of acknowledging technology as a powerful driving force in law. Consider Harvard Law School and Georgetown Law School’s Computer Programming for Lawyers classes, or The Design Lab at Stanford Law School. See, eg, Harvard Law School, ‘Computer Programming for Lawyers’ hls.harvard.edu/academics/curriculum/catalog/default.aspx?o=75487 (last accessed February 2020). 3 When using the term ‘smart contracts’ for this chapter, I am not referring to the narrow scope of blockchain contracts, but to ‘computable contracts’ as discussed in H Surden, ‘Computable Contracts’ (2012) 46 University of California at Davis Law Review 629, 647.

conform to a law of mechanical economy’.4 Independent of structure or meaning, writing was a means to conserve time and space by way of ‘convenient abbreviation’.5 Is legal writing then not merely a method of notation? Would this suggest that the use of code advances the notion of convenience, communicating in a manner that further conserves time and space? Alternatively, Michel Foucault states that, rather than ‘an arbitrary system’, language forms and is interwoven with the world. It is an ‘enigma renewed in every interval … and offer[ed] … as things to be deciphered’.6 Equally, Geoffrey Samuel argues that the ‘true meaning of a legal text is hidden within the language employed’.7

The project is, therefore, a thought experiment on the translation of text to numbers, unpacking several formal languages used in computable contracts. The hypothesis is that, by analysing the components of both legal and programming language, we can develop a richer dialogue on the sociological implications of translating law to algorithmic form. Furthermore, it would be interesting to consider what contextual understanding may need to exist to ‘interpret’ contractual language.

The chapter8 will unfold as follows. Section II will open with a primer on translation, introducing broader notions of conceptual transfer, meaning, and understanding. Section III will reflect on histories of logic and formalism, and their return in light of new contract drafting technologies. Section IV embarks on a brief investigation of programming languages, analysing sample translations of contracts from natural language to computer code. Section V will gather observations and suggest implications. Section VI will consider next steps. Finally, I will conclude with a few remarks.

II.  A Primer on Translation Rules are pervasive in the law. In the context of computer engineering, a field that regards itself as fundamentally deterministic, the translation of legal text to algorithmic form is seemingly direct. In large part, law may be a ripe field for expert systems and machine learning. For engineers, existing law appears formulaic and logically reducible to ‘if, then’ statements. The underlying assumption is that the legal language is both self-referential and universal. Moreover, description is

4 J Derrida, Limited Inc. (Northwestern University Press, 1988) 4. 5 ibid. 6 M Foucault, The Order of Things: An Archaeology of the Human Sciences (Tavistock Publications, 1970) 35. 7 G Samuel, ‘Is Legal Reasoning like Medical Reasoning?’ (2015) 35 Legal Studies 323, 334. 8 Note that the original study was part of a collaboration between the MIT Computational Law Report and myself to examine formal programming languages. Therefore, for the goals of this chapter, it will proceed with a select analysis of a few languages used. Nevertheless, these suffice in illustrating the working hypothesis.

considered distinct from interpretation; that in describing the law, the language is seen as quantitative and objectifiable. The qualities described would then make computer programming languages ideal. Nevertheless, is descriptive natural language purely dissociative? In other words, does the medium matter in legal writing? From the logic machine of the 1970s to the modern fervour for artificial intelligence (AI), governance by numbers is making a persuasive return. Could translation be possible?
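To make the engineer’s intuition concrete, consider a minimal sketch of a legal rule rendered as an ‘if, then’ statement. The rule, figures and names below are invented for illustration only; they are not drawn from any statute, nor from the formal contract languages examined later in this chapter:

```python
# A hypothetical rule, reduced to a conditional: if payment is more than
# 30 days overdue, then interest accrues at 8% per annum on the principal.

def late_payment_interest(days_overdue: int, principal: float) -> float:
    if days_overdue > 30:
        return principal * 0.08 * (days_overdue / 365)
    return 0.0

print(late_payment_interest(45, 10_000.0))  # interest owed after 45 days
```

The computation is trivially deterministic; what the reduction omits, as the remainder of this section suggests, is everything that ‘overdue’ or ‘principal’ might mean in context.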

A. Semantics

In a recent article, Douglas Hofstadter commented on the ‘Shallowness of Google Translate’.9 He referred largely to the Chinese Room Argument:10 that machine translation, while comprehensive, lacked understanding. Perhaps he probed at a more important question: does translation require understanding? Hofstadter’s experiments seemed to suggest it does. He argued that the purpose of language was not about the processing of texts. Instead, translation required imagining and remembering; ‘a lifetime of experience and […] of using words in a meaningful way, to realise how devoid of content all the words thrown onto the screen by Google translate are’.11 Hofstadter describes the classic illusion, known as the ‘ELIZA effect’, of having the appearance of understanding language; instead, the software was merely ‘bypassing or circumventing’ the act.12

B. Context

For Yulia Frumer, translation not only requires adequately producing the language of foreign ideas, but also the ‘situating of those ideas in a different conceptual world’.13 In languages that belong to the same semantic field, the conceptual transfer involved in the translation process is assumed. In the event that languages do not share similar intellectual legacies, the meaning of words must be articulated through the conceptual world in which the language is seated.

From a historical perspective, Frumer uses the example of eighteenth-century Japanese translations of Dutch scientific texts. The process by which translation occurred involved analogising from western to Chinese natural philosophy; effectively reconfiguring the foreign to the local through experiential learning.

9 D Hofstadter, ‘The Shallowness of Google Translate’ The Atlantic (30 January 2018), www.theatlantic.com/technology/archive/2018/01/the-shallowness-of-google-translate/551570/. 10 A thought-experiment first published by John Searle in 1980 arguing that syntactic rule-following is not equivalent to understanding. 11 Hofstadter (n 9). 12 ibid. 13 Y Frumer, ‘Translating Worlds, Building Worlds: Meteorology in Japanese, Dutch, and Chinese’ (2018) 109 Isis 326.

This is particularly fascinating, given that scientific knowledge inherits the reputation of universality. Yet, Frumer notes, ‘… if we attach meanings to statements by abstracting previous experience, we must acquire new experiences in order to make space for new interpretations’.14

Interestingly, Duncan Kennedy tested the relationship between structure, or symbols, and meaning by deconstructing argument into a system of ‘argument-bites’. Argument-bites form the basic unit. Operations then performed on argument-bites constitute and build legal arguments. Such operations diagnose and assume the circumstances, or relationships, in which the argument-bite is to be manipulated and ‘deployed’.15 The import of structural linguistics conceptualises law and argument as systematically formulaic; ‘a product of the logic of operations’.16 Perhaps most interesting about Kennedy’s theory is his idea of ‘nesting’. Kennedy describes nesting as the act of ‘reproduction’ or the ‘reappearance of [argument-bites] when we have to resolve gaps, conflicts or ambiguities that emerge [from] … our initial solution to the doctrinal problem’.17 A conundrum therefore surfaces: language may be applied to law in a mechanical fashion, but the process of reducing legal argument to a system of operations raises questions about the act of labelling and the power of its performativity. That is – and as Kennedy rightfully notes – ‘language seems to be “speaking the subject,” rather than the reverse’.18

C. Code

Hildebrandt teases this premise by addressing the inherent challenge of translation in the computer ‘code-ification’ process. Pairing speech-act theory with the mathematical theory of information, she investigates the performativity of the law when applied to computing systems. In her analytical synthesis of these theories, she dwells on meaning. ‘Meaning’, she states,

depends on the curious entanglement of self-reflection, rational discourse and emotional awareness that hinges on the opacity of our dynamic and large inaccessible unconscious. Data, code … do not attribute meaning.19

The inability of computing systems to process meaning raises challenges for legal practitioners and scholars. Hildebrandt suggests that the shift to computation

14 ibid 327. 15 Kennedy describes relating argument-bites to one another by such operations as a means of confronting legal problems. See D Kennedy, ‘A Semiotics of Legal Argument’ (1994) 3 Collected Courses of the Academy of European Law 317, 351. 16 ibid 343. 17 ibid 346. 18 ibid 350. 19 M Hildebrandt, ‘Law as computation in the era of artificial intelligence: Speaking law to the power of statistics’ (2019) Draft for Special Issue University of Toronto Law Journal 10.

necessitates a shift from reason to statistics. Learning to ‘speak the language’ of statistics and machine-learning algorithms would become important in the reasoning and understanding of biases inherent in AI-driven legal technologies.20 More importantly, the migration from descriptive natural language to numerical representation runs the risk of slippage as ideas are (literally) ‘lost in translation’. Legal concepts must necessarily be reconceptualised for meaning to exist in the mathematical sense. The closest in semantic ancestry would be legal formalism. Legal formalists thrive on interpreting law as rationally determinate. Judgments are deduced from logical premises and syllogisms; meaning is assigned. While, arguably, the formalisation of law occurs ‘naturally’ – as cases with like factual circumstances often form rules, principles, and axioms for treatment – the act of conceptualising the law as binary and static is puzzling. Could the law behave like mathematics; and thereby the rule of law be understood as numeric?21

III.  Logical Ancestors and the Formalistic Return

To translate the rule of law in a mathematical sense then requires a reconfiguration of the legal concept. Interestingly, the use of statistics and the so-called ‘mathematisation’ of law is not novel. Oliver Wendell Holmes Jr most famously stated in The Path of the Law that ‘[f]or the rational study of the law, the blackletter man may be the man of the present, but the man of the future is the man of statistics and the master of economics’.22 Governance by numbers then realises the desire for determinacy; the optimisation of law to its final state of stability, predictability, and accuracy. The use of formal logic for governance has a rich ancestry. From Aristotle to Descartes to Leibniz, the common denominator was that mathematical precision should be applied across all disciplines.

A.  Early Signs

Since the twelfth century, logicians allegedly used logical paradoxes to spot ‘false’ arguments in courts of law.23 It was not, however, until the seventeenth century that Gottfried Leibniz proposed a mental alphabet;24 whereby thoughts could

20 Advances in natural language processing (NLP), for example, have opened the possibility of ‘performing’ calculations on words. This technology has been increasingly applied in the legal realm. See ibid 13. 21 What is being described here is not in reference to the World Justice Project (WJP) Rule of Law Index. I do, nevertheless, acknowledge that it is a prime example of a broadly qualitative assessment transformed into a ‘quantitative tool for measuring the rule of law in practice.’ See World Justice Project, World Justice Project Rule of Law Index (2019) 7. 22 OW Holmes Jr, ‘The Path of the Law’ (1897) 10 Harvard Law Review 457, 469. 23 K Devlin, Goodbye Descartes: The End of Logic and The Search for a New Cosmology of the Mind (John Wiley & Sons, 1997) 54. 24 ibid 62.

Building from Leibniz, George Boole’s famous treatise, The Laws of Thought, argued that algebra was a symbolic language capable of the expression and construction of argument.25 By the end of the twentieth century, mathematical equations were conceivably dialogic; a form of discourse. This was perceivably owed to Boole’s system; that complex thought could be reducible to the solution of equations. Nevertheless, the most fundamental contribution of Boole’s work was the capacity to isolate notation from meaning.26 That is, ‘complexities’ of the world would fall into the background as pure abstraction was brought to centre stage. Eventually, Boole’s work would form the basis of the modern-day algorithm and expression in formal language.

ASCII, the acronym for the American Standard Code for Information Interchange, is an exemplary case. Computers are only capable of understanding numbers. For a computer to interpret natural language, ASCII was developed to translate characters to numbers. Using a binary numeral system, ASCII assigns a numerical value to each character – the uppercase letter ‘A’, for instance, is 65. In brief, by performing the mathematical calculation, a binary code of 0s and 1s could be computed from a letter. Early conceptual computing devices, such as the Turing machine, were born into existence as a direct product of Boolean algebra.

Christopher Markou and Simon Deakin point to the breakthroughs in natural language processing (NLP) as contributors to the emergence of ‘Legal Technology (Legal Tech)’.27 They credit Noam Chomsky and early researchers of AI with designing ‘hard-coded rules for capturing human knowledge’.28 Chomsky’s work stirred new developments in NLP, eventually powering advances in machine translation and language mapping. Known as expert systems, NLP applications ‘relied upon symbolic rules and templates using various grammatical and ontological constructs’.29 These achievements were then further enabled by Deep Learning30 models, capable of abstracting and building representations of human language.

With the rise of artificial legal intelligence, computable contracts – and more broadly, computable law – are making a powerful return. Contracts may be represented as computer data with terms made ‘machine-readable’ through a process of conversion: from descriptive natural language to consonant computer instruction. Conditions of agreements are not explained but listed as structured data records.

25 G Boole, The Laws of Thought (1854) ch 1. 26 Devlin (n 23) 77. 27 C Markou and S Deakin, ‘Ex Machina Lex: The Limits of Legal Computability’ (2019) Working Paper, ssrn.com/abstract=3407856. 28 ibid 13. See also cited reference, E Brill and RJ Mooney, ‘Empirical Natural Language Processing’ (1997) 18 AI Magazine 4. 29 Markou and Deakin (n 27) 11–15. 30 Deep Learning is a subset of machine learning that involves artificial neural networks and the assigning of numerical weights on input variables. For further explanation, see ibid 10–12.

Despite the capacity to express contracts in an alternative computable form, there is no means for interpretation. Instead, interpretation is perceived as irrelevant. Should digital data inscription and processing be considered a form of legal writing? If so, would it change the character of law?
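What such a structured data record might look like can be sketched in Python. This is a hedged illustration only: the field names below are invented for this example and are not drawn from any particular contract schema.

# One natural-language delivery term re-expressed as a structured data record.
# All field names are illustrative.
delivery_term = {
    "type": "DeliveryObligation",     # the kind of obligation
    "seller": "Acme Ltd",             # hypothetical parties
    "buyer": "Beta GmbH",
    "goods": "industrial pumps",
    "quantity": 40,
    "deadline": "2022-06-30",         # an ISO date instead of 'by the end of June'
    "late_penalty_per_day": 100.00,   # a number instead of 'reasonable compensation'
}

The record can be queried and computed over, but nothing in it can be interpreted: a vague standard such as ‘reasonable time’ has no slot here.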

B. Modern Variations

Expert systems and machine learning technology used for the revision of contracts seek to reduce the risk of human error. Eventually, contract analysis would manage, record, and standardise provisions that are ‘proven favourable’;31 in effect, perfecting contractual boilerplate. Boilerplate contracts are often regarded as a trade-off between tailoring and portability; that with broad standardisation, the ‘burden’ of interpretation is lifted.32 Contractual boilerplate, therefore, relies heavily on formalistic drafting, whereby form presides over meaning. For computable contracts, the migration of mediums – from descriptive natural language to mathematical form – generates data that identifies and signals the specific version of contracts that should be used in future cases.

i. Market Uptake

By integrating AI in contract drafting, there is a push away from static mediums of writing. These include Microsoft Word and Adobe PDF; the original technological artefacts that evolved from pen and paper. In either case, the technology is never described as a replacement.33 The purpose of these inventions is merely assistive.

ii. New Environments

Interestingly, the legal community is beginning to explore the problems associated with the use of static platforms like Microsoft Word. In a recent paper, Michael Jeffrey interrogates the use of Microsoft Word as the dominant and default format for the editing of legal documents. He considers the inefficiencies of manual updating, drafting, and reviewing. Though interpreted as a static platform, Microsoft Word, in actuality, ‘can be controlled through code’.34

31 B Rich, ‘How AI is Changing Contracts’, Harvard Business Review (Web Article, February 2018), hbr.org/2018/02/how-ai-is-changing-contracts. See also white paper ‘How Professional Services Are Using Kira’, Kira Machine Learning Contract Analysis, cdn2.hubspot.net/hubfs/465399/04-resources/whitepapers/KiraSystemsWhitePaper-HowProfessionalServicesFirmsAreUsingKira.pdf (last accessed February 2019). 32 HE Smith, ‘Modularity in Contracts: Boilerplate and Information Flow’ (2006) 104 Michigan Law Review 1175, 1176. 33 Follows the existing literature that technology could only work complementary to the law. See F Pasquale, ‘A Rule of Persons, Not Machines: The Limits of Legal Automation’ (2019) 87 George Washington Law Review 2, 6. See also NM Richards and WD Smart, ‘How should the law think about robots?’, in R Calo et al (eds), Robot Law (Edward Elgar, 2016) 16–18. Their chapter argues how law hinges on social and political relationships and metaphors that require a latent understanding of temporal social constructs (emphasis added).

Included in its software are, in fact, a number of templates modelled specifically for drafting legal documents. These templates contain automatic text entry, macros, and special formatting.35 For long and complicated legal documents, Jeffrey argues that an integrated development environment (IDE) could ‘facilitate the authoring, compiling, and debugging’ of contracts.36 For programmers, the use of an IDE provides several key features that are amenable to legal drafting. Options include increased readability owed to colour-coded syntax highlighting, automatic error detection, and predictive auto-complete features that provide suggestions while drafting. These features, he claims, could improve the drafting process by reducing the risk of human error and increasing efficiency. Yet, the most interesting perspective he offers is the subtle equation of linguistic concepts with mathematical ones.37 Jeffrey draws on programming concepts and applies them specifically to elements of legal drafting. The syntax, he notes, is ‘designed for drafting and document generation’ and the process would be ‘quite natural’.38 The underlying assumption is that static platforms and IDEs have the same functional purpose. The differences lie in the added features for real-time edits. This speaks to a greater assertion: programming languages serve the same uses as natural language. Yet, the shift from pen and paper to Microsoft Word did not fundamentally change the use of natural language for legal drafting. The use of IDEs, on the other hand, would effectively alter not only the platform, but also the method of execution.

Juro, for example, is a Legal Tech start-up that promotes contract management on a dynamic platform.39 Based in London, Juro works to improve contract management by translating contracts drafted in Microsoft Word or Adobe PDF to machine-readable form. The start-up’s platform allows for contracts to be built in a text-based format that is language independent (ie JSON). The contracts, thereby, exist in code. Other start-ups such as OpenLaw apply a hybrid approach; an integration of machine-readable code with clauses drafted in natural language.40

34 M Jeffrey, ‘What Would an Integrated Development Environment for Law look like?’ (2020) 1.1 MIT Computational Law Report, law.mit.edu/pub/whatwouldanintegrateddevelopmentenvironmentforlawlooklike. 35 ‘MS Word for Lawyers: Document Templates’ (Tech for Lawyering Competencies: Research & Writing) law-hawaii.libguides.com/TLC_Research_Writing/WordTemplates (last accessed May 2020). 36 Jeffrey (n 34). 37 Jeffrey notes, ‘For legal drafting … the focus is linguistic – rather than mathematical – but the core concepts are the same.’ See ibid. 38 ibid. 39 See Juro’s White Paper, R Mabey and P Kovalevich, ‘Machine-readable contracts: a new paradigm for legal documentation’ (Juro Resources) info.juro.com/machine-learning?hsCtaTracking=60e75e06-22bb-4980-a584-186124e645b3%7C6a7d3770-289d-4c97-bcfb-c9f47afec77f (last accessed February 2020). 40 ‘Markup Language’ (OpenLaw) docs.openlaw.io/markup-language/#variables (last accessed April 2020).

OpenLaw runs on JavaScript41 and uses a mark-up language to ‘transform natural language agreements into machine-readable objects with relevant variables and logic defined within a given document’.42 These documents are then compiled together to act as contracts. Clauses are interpreted as ‘embedded template[s]’.43 The goal is to reduce drafting work by storing boilerplate clauses as data that may be added to contracts. The incorporation of code with natural language offers a dynamic interpretation of legal agreements. It mirrors the notion that select contractual elements are reproducible and calculable, while others require human intervention. The drafting process is left rather unchanged.

The aforementioned start-ups are only a few of the growing number of Legal Tech start-ups committed to the ‘betterment’ of contract drafting. These contracts are classified as more efficient, more precise; otherwise, ‘smarter’. Nevertheless, there is a dearth of literature on the use of formal languages for legal writing. Yet formal programming languages for contract drafting not only exist but have proliferated in the past few years. Interestingly, their ancestors sprang from logic programming in the 1970s.
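Before examining those languages, the hybrid approach can be approximated in Python with an ordinary string template; the clause text and variable names below are invented for illustration and do not reproduce OpenLaw’s actual markup:

from string import Template

# A natural-language clause with machine-readable 'holes'. The named
# placeholders play the role that template variables play in hybrid systems.
clause = Template(
    "The Licensee shall pay the Licensor a fee of $$${fee} "
    "no later than ${due_date}."
)

# The same clause exists twice: as prose for the parties, as data for the machine.
data = {"fee": "5,000", "due_date": "1 March 2022"}
print(clause.substitute(data))
# -> The Licensee shall pay the Licensor a fee of $5,000 no later than 1 March 2022.

The natural language carries the meaning; the variables carry only what is reproducible and calculable.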

IV. A Study of Code

Two of the most broadly used programming languages, Python and Prolog, use opposing methods of operation; the former is procedural, while the latter is declarative. Procedural programs specify how the problem is to be solved. That is, with procedural programs, there are clear instructions for the program to follow. Akin to baking, all terms are defined explicitly, and all rules must be laid out. Should a program written in a language such as Python find that it cannot proceed with the task, this is typically because the program is unable to recognise the syntax. Equally, Python is incredibly sensitive to changes in the code; even a misplaced comma or indent in the spacing could affect the overall outcome of the specified task. Procedural programs often include functions; self-contained modules of code capable of being manipulated and reused for innumerable tasks. In perhaps its most powerful operation, Python is able to examine and decide actions on the basis of conditions. Moreover, Python simplifies work by being able to loop through the same tasks in a given list. Rather than the manual repetition of a given task, Python is able to do so in a matter of seconds. On the other hand, declarative programs specify what the problem is and ask the system, instead, to solve it. Declarative languages are founded on the relationships either: (1) between objects; or (2) between objects and their properties.

41 Defined as a programming language with a code structure to build commands that perform actions. ‘Code Structure’ The JavaScript Language, javascript.info/ (last accessed April 2020). 42 ‘Markup Language’ (n 40). 43 ibid.

These relationships may be defined implicitly through rules or explicitly through facts. Facts describe relationships, while rules qualify them. The purpose of Prolog, therefore, is to form a fixed dataset that would derive answers to future queries about a relationship or set of relationships based on the inputted information. In contrast, the purpose of Python is to complete a particular task. While it can certainly account for prospective changes to the data, every step is explicitly expressed.44

Advancing forward several decades, Python and Prolog have become the basis for a new era of programming languages used for drafting computable contracts. Ergo, Lexon, and Blawx are among a few of the current languages being prototyped. Each language is built from a different model. Ergo is a programming language modelled on execution logic for legal writing. It belongs to the suite of resources offered by The Accord Project.45 Blawx and Lexon, on the other hand, are non-coding options, with the former developed on declarative logic and the latter derived from linguistic modelling.46

In order to understand how formal languages may be used to draft contracts, I refer to extracts of legal documents translated from natural language to code. These translations are originals of each programming language, unedited and taken directly from their technical documentation. They were included as demonstrations of how contracts may be drafted in the selected language. The translations are, therefore, presumed to be manually done by each language’s programmers; and thereby, implicitly representative of their respective design choices.
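Before turning to those extracts, the procedural–declarative contrast can be made concrete in Python alone; the names below are illustrative, and the second half only mimics Prolog’s style rather than reproducing it:

# Procedural: explicit instructions for HOW to compute the result.
def total_price(quantities, unit_price):
    total = 0
    for q in quantities:        # loop through the same task for each item
        total += q * unit_price
    return total

# Declarative (Prolog-like): state WHAT is true, then ask questions.
facts = {("party", "Alice"), ("party", "Bob"), ("owes", "Alice", "Bob")}

def is_creditor(x):
    # 'x is a creditor if some party owes x': a relationship, not a recipe.
    return any(f[0] == "owes" and f[2] == x for f in facts)

print(total_price([2, 3], 10.0))  # -> 50.0
print(is_creditor("Bob"))         # -> True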

A. Ergo

To begin, Ergo follows a more traditional form of procedural programming and is largely function-based. This means that its language is predicated on the performance of the contract. However, Ergo is unique. It cannot be divorced from the overarching contract implementation mechanism, known as Cicero. Cicero consists of three ‘layers’: (1) text; (2) model; and (3) logic. Ergo is the logic component.47 It is perhaps considered the ‘end’ process of a continuous flow of translation from human-readable to machine-executable.48

44 I acknowledge that Python is able to work in adaptive environments and does not have a fixed dataset. The comment is directed at the explicit expression of a given task. 45 The Accord Project also offers CiceroMark and Concerto. The former is a contract template generator that helps build agreements embedded with machine-executable components. The latter is a program that enables the data of computable contracts to be manipulated and modelled. The Accord Project also offers a trial template editor to build and test out smart agreements. For more information, see ‘Key Concepts’ (Accord Project) docs.accordproject.org/docs/accordproject-concepts.html (last accessed October 2020). 46 Lexon qualifies its model as designed with the intention of reasoning in natural language and uses formal linguistic structure. See H Diedrich, Lexon: Digital Contracts (Wildfire Publishing, 2020). 47 ‘Key Concepts’ (n 45). 48 ibid.

Contracting in Code  165 The Cicero architecture, therefore, is an interdependent network of resources that start with natural language text and end with compartmentalised data packages. That is, natural language contracts may be deconstructed into reproducible modules that can be interchangeably used between various types of contracts. How does this work? Contractual clauses are sorted and categorised into qualitative and quantitative components. Descriptive terms of the contract remain at the text layer.49 Variables that are quantifiable, on the other hand, are extracted from the natural language and captured in the model layer. These variables are notably bits of information that are reusable, iterative and computable. This layer bounds natural language to data, as variables map conditions and relationships of the contract. Arriving at the logic layer, what remains are functional requirements of these variables. In other words, what are the specific operations necessary in order for these variables to perform the demands and terms of the contract? Consequently, Ergo is intentionally limited with its expressiveness.50 Consider the following contractual clause translated from descriptive natural language to Ergo. The original provision, in prose, states: Additionally, the Equipment should have proper devices on it to record any shock during transportation as any instance of acceleration outside the bounds of -0.5g and 0.5g. Each shock shall reduce the Contract Price by $5.00.

The clause, in code, reads:

Figure A  Extracted from Ergo’s ‘Fragile Goods Logic’ (Template Studio, Accord Project) studio.accordproject.org/?template=ap%3A%2F%2Ffragile-goods%400.15.0%23hash (last accessed November 2021).
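Since the Ergo source in the figure cannot be reproduced here, the following Python sketch paraphrases what the logic computes, reusing the variable names discussed below; it is an approximation of the clause, not Ergo code:

# Hedged paraphrase: count each accelerometer reading outside [-0.5g, 0.5g]
# as a shock and deduct $5.00 per shock from the contract price.
accelerationMin, accelerationMax = -0.5, 0.5
shockPenalty = 5.00

def delivery_update(contract_price, accelerometerReadings):
    shocks = [r for r in accelerometerReadings
              if r < accelerationMin or r > accelerationMax]
    # the 'payment obligation' is simply the adjusted price
    return contract_price - shockPenalty * len(shocks)

print(delivery_update(1000.00, [0.2, -0.7, 0.6]))  # two shocks -> 990.0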

49 ibid. 50 The goal is for conditional and bounded iteration. This is presumably contributive to the reusability of contractual clauses. See ‘Ergo overview’ (Accord Project) docs.accordproject.org/docs/logic-ergo.html (last accessed February 2020).

At first glance, the translation is rather striking. There are evidently several omissions from the natural language text to the Ergo language. First, mention of the recording devices that determine the weight changes is excluded from the code. Moreover, fluctuations in the Contract Price are equally excluded. Instead, only variables remain, such as DeliveryUpdate, PaymentObligation, accelerometerReadings, accelerationMin, etc. Upon closer reading, it becomes clear that the contractual clause has undergone a decoupling process. That is, a conversion from the original unified contractual language to independent, actionable constituents has taken place. These variables are quantitative reconfigurations of the ‘performative’ elements of the contract. For example, the model layer reconstructs the weight changes and fluctuations in the Contract Price to:

Figure B  Extracted from ‘Fragile Goods’ (Accord Project) templates.accordproject.org/[email protected] (last accessed October 2020).
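In place of the figure, the model-layer record can be imagined as plain data along the following lines; the field names are hypothetical rather than taken from the Accord Project model:

# Hedged sketch of a model-layer record: the quantifiable parameters of the
# clause, detached from their prose.
fragile_goods_model = {
    "deliveryPrice": 1000.00,
    "accelerationMin": -0.5,            # lower bound, in g
    "accelerationMax": 0.5,             # upper bound, in g
    "accelerationBreachPenalty": 5.00,  # deduction per shock
}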

As noted, Ergo applies these variables and signals their operations. The Ergo language requests the acceleration readings from the recording devices and then, depending on the parameter changes, computes whether the Contract Price should alter. This method of distilling the quantifiable from the qualifiable suggests that contracts are necessarily unambiguous and, in effect, simply a matter of structuring.

B. Lexon

Alternatively, Lexon is an innovative peculiarity. Unlike other programming languages, Lexon is founded on linguistic structure and designed to reason in natural language. Lexon reduces vocabulary and grammar to rule sets. Lexon’s base vocabulary consists of definable ‘names’ used to designate objects and clauses. Just as one would draft sentences in natural language with a subject and predicate, Lexon operates in a similar fashion. There is, however, an important difference: articles are considered superfluous, ‘filler’ words.

Below is a sample contract drafted in Lexon:

Figure C  Example 2a on Lexon demo (Lexon) demo.lexon.tech/ (last accessed November 2021).

For an agreement at this level of simplicity, articles may not seem necessary to clarify the meaning of contractual terms. Nevertheless, party obligations do occasionally hinge on articles; potentially affecting the performance of the contract. It is not inconceivable that specifying a particular object as opposed to a general one matters, especially in certain procurement and sales contracts. Lexon argues that the primary role of articles is to improve text readability. Yet, Lexon concedes that articles can ‘fundamentally change the meaning of a contract’ and that this may be an area ripe for abuse.51 Further complicating the narrative, Lexon is not concerned with semantics at all. The start-up’s creator, Henning Diedrich, acknowledges the inherent ambiguity of natural language that renders interpretation challenging; but argues that the Lexon language is intended neither to clarify nor to create complete contracts. Instead, Lexon is bridging the gap between formal programming and natural languages. Like other formal languages, Lexon cannot understand the ‘meaning’ of its terms. Its structural design only accounts for functionality. Lexon uses Context Free Grammars (CFGs). First theorised by Chomsky, CFGs do not depend on context; rules operate independent of the objects in question.52 Chomsky had originally developed CFGs in an effort to formalise natural language. While this was largely unsuccessful in linguistics, it has since been popularised in computer science.

51 Lexon has noted that future tools would account for the possibility of such abuse. See Diedrich (n 46) 33. 52 The term ‘context-free grammars’ is what is most commonly used by linguists today. Chomsky’s original formulation was known as constituency grammars. Chomsky emphasised the scientific study of language in a rational manner, free of context and culture. See N Chomsky, Cartesian Linguistics: A Chapter in the History of Rational Thought 3rd edn (Cambridge University Press, 2009) and also, N Chomsky, ‘Remarks on Nominalization’, in RA Jacobs and PS Rosenbaum (eds), Readings in English Transformational Grammar (Ginn and Company, 1970) 184–221.

Consequently, Lexon applies the model to create a programming language that is both expressible in natural language and readable by machines. Diedrich contends that meaning could never be attained. Meaning is regarded as something that, though it cannot be extracted, could be pointed to or described.53 The Lexon language is structured in a manner reflective of these underlying assumptions. That is, rather than dwelling on the interpretation of a specific word or phrase in natural language, Lexon limits meaning to function. Diedrich states, ‘the actual functionality of the contract is the better description of … the list of the actual rights and obligations of that person without relying on the original meaning of the word’.54 By framing functionality as a proxy for party obligations, Lexon inadvertently reframes the basis of contract theory from party autonomy to contract performance.
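How a context-free grammar constrains a controlled contract language can be sketched in a few lines of Python; the toy grammar below is invented for illustration and is far smaller than Lexon’s actual rule set:

# A toy context-free grammar. Upper-case symbols are non-terminals.
grammar = {
    "SENTENCE": [["PARTY", "VERB", "OBJECT", "."]],
    "PARTY":    [["Payer"], ["Payee"]],
    "VERB":     [["pays"], ["receives"]],
    "OBJECT":   [["Amount"], ["Fee"]],
}

def derives(symbols, tokens):
    # True if the sequence of grammar symbols derives exactly the tokens.
    if not symbols:
        return not tokens
    head, rest = symbols[0], symbols[1:]
    if head in grammar:  # non-terminal: try each of its rules
        return any(derives(rule + rest, tokens) for rule in grammar[head])
    # terminal: must match the next token
    return bool(tokens) and tokens[0] == head and derives(rest, tokens[1:])

print(derives(["SENTENCE"], ["Payer", "pays", "Amount", "."]))  # True
print(derives(["SENTENCE"], ["the", "Payer", "pays", "."]))     # False: no articles

Whatever cannot be derived from the assigned vocabulary and grammar simply does not exist in the language game.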

C. Blawx55

Finally, Blawx is a programming language based on declarative logic. Perhaps the most interesting element of this language is its user interface. The code visually appears as puzzle pieces – or Lego blocks – searching for their missing piece. Blawx was inspired by the program Scratch, created at MIT as an educational assistant for children learning how to code. As the ‘blocks’ literally connect with one another, they visually capture the relationships between objects and their properties. Much like Prolog, Blawx operates on sets of facts and rules. Facts represent objects, or things, known to be true in the code. Rules are coded statements composed of both conditions and conclusions. Both elements are required in order for a rule to exist. Unlike other programming languages, Blawx works on the premise of declarative rules such that ‘conclusions are true if conditions are true’. This may seem no different from traditional ‘if, then’ statements. It is, surprisingly, not the same. In programming, the ‘if conditions then conclusions’ framework operates temporally. For machines, this means that conditions only apply to the specific task at hand and do not apply globally to the program.56 In the case of Blawx, rules are encoded in a declarative manner to help form the particular program’s ‘universe of knowledge’. Once the ‘universe’ of facts and rules has been established, the program will be able to answer queries. Queries are fact-based and binary.

Blawx aims to transform legal documents into queryable databases. In practice, this would suggest that contracts may be encoded using the aforementioned logic of the program. Ultimately, the goal is for parties to be able to reason by asking binary questions of the application. The encoding of facts and rules allows parties to move from legal reasoning to legal information extraction.

53 Diedrich (n 46) 107. 54 ibid 106. 55 It must be acknowledged that Blawx is currently in alpha version and at the early stages of a prototype. It has, however, been recognised for its potential as a legal reasoning and drafting tool. 56 This is described as ‘if right now the conditions are true, then next the computer should do conclusions’. See ‘Facts, Rules, and Queries’ (Blawx.com) www.blawx.com/2019/09/facts-rules-and-queries/#page-content (last accessed February 2020).

Interpretation, then, is no longer required since the solutions are presumed to be directly retrievable. Consider the sample translation of a legislative act from descriptive natural language to Blawx. The article states:

5(1): A personal directive must
(a) Be in writing,
(b) Be dated,
(c) Be signed at the end
 i. By the maker in the presence of a witness, or
 ii. If the maker is physically unable to sign the directive, by another person on behalf of the maker, at the maker’s direction and in the presence of both the maker and a witness, and
(d) Be signed by the witness referred to in clause (c) in the presence of the maker.

The provision, in Blawx, reads:

Figure D  Taken from ‘Example: Using Blawx for Rules as Code’ (Blawx.com) www.blawx.com/2020/01/example-using-blawx-for-rules-as-code/#page-content (last accessed February 2020).
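As the visual blocks cannot be reproduced here, a rough Python rendering of the same declarative encoding may help; the predicate names are hypothetical and simplified, with the alternative signing routes of clause (c) collapsed into a single fact:

# Hedged sketch: the personal-directive rule as facts plus a
# conclusion-if-conditions rule. Not Blawx code.
facts = {
    ("in_writing", "directive1"),
    ("dated", "directive1"),
    ("signed_at_end_by_maker_or_proxy", "directive1"),
    ("signed_by_witness", "directive1"),
}

def valid_personal_directive(doc):
    # the conclusion is true if all of the conditions are true
    conditions = ["in_writing", "dated",
                  "signed_at_end_by_maker_or_proxy",
                  "signed_by_witness"]
    return all((c, doc) in facts for c in conditions)

# Queries are fact-based and binary:
print(valid_personal_directive("directive1"))  # True
print(valid_personal_directive("directive2"))  # False: no other classification exists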


This translation is an especially difficult read. First, the ‘block’ appearance of the language may be troubling for those who are not tactile learners. The programming language forces the reader to focus instead on the conceptual components of the rules as opposed to the clause. The logic of the program necessitates a substantive breakdown of the legislation to its ontological elements. Simply put, it reduces the law to the relevant actors and their obligations. In this case, these elements are: (1) the roles (actors); and (2) the signatures (obligations). More importantly, the process of converting natural language to Blawx faced significant challenges with interpretation.57 Coding the legislation required reframing the meaning of ‘personal directive’58 into a binary; either as an object or an action. Fundamentally, it is a reconfiguration of the law to its function. Rather than, ‘what are the requirements of a personal directive’, the question becomes ‘what actions must be taken in order for the personal directive to have legal effect?’ The questions asked de facto bear the same meaning. The difference, while subtle, crucially points to an implicit recognition of the legal effect of the document in natural language. Notably, a personal directive could only exist should the requirements be met.

57 There is repeated commentary on the difficulty of interpretation when converting to a binary. ‘Example: Using Blawx for Rules as Code’ (Blawx.com) www.blawx.com/2020/01/example-using-blawx-for-rules-as-code/#page-content (last accessed February 2020). 58 Here, the personal directive is understood to be a ‘living will’.

Otherwise, it would simply be a piece of paper. This was raised as a note on the translation. Blawx introduced the concept of ‘validity’ as a new condition59 because there was no form of classification for a document that was not a personal directive. In the context of computable contracts, the Blawx language – like Ergo – would perhaps work best for contracts with clear objectives and unidirectional relationships.

Thus, through the study, the technology is observably limited. Namely, contracts drafted in these languages govern simple transactions. Nonetheless, they expose conflicting interpretations of contract theory between computer scientists and legal actors. More specifically, a commonality across all programming languages is the formulation of contract law as entirely predicated on performance. Consequently, programming languages alone are function-based. The principle of party autonomy, expressed often as details in contract terms, is only secondary to the actual completion of the transaction. Rather than what parties have agreed to and how the parties have fulfilled their obligations, it becomes solely dependent on whether the obligation has been completed. Zev Eigen states that contracts are a product of how drafters and signers interpret the law.60 Contracts that are negotiated represent a ‘meeting of the minds’. Standard boilerplate, on the other hand, is the product of only the drafters’ interpretation, not the signers’. In this case, programming languages run the risk of eliminating ‘the signers’ altogether; and ‘the drafters’ are the code itself.61 Consequently, this could reconfigure basic contracts doctrines; conflating principles of consideration with offer, and acceptance with obligation.

V. Observations and Implications

With the increasing normalisation of smart contracts, computer code could foreseeably become a vehicle through which contracts are drafted. The question remains: should programming languages be recognised as a form of legal language? The following section analyses the observations taken from the study against existing literature. As discussed, function becomes paramount to computable contracts. Formal programming languages reveal that, because natural language is indeterminate, a migration away from semantics to syntax could resolve the challenges relevant to interpretation.

59 Following the formula of a declarative rule, this would suggest ‘this is a personal directive (conclusion) if it is valid (condition)’. See Blawx (n 57). 60 ZJ Eigen, ‘Empirical Studies of Contract’ (2012) Faculty Working Paper 204, 7, scholarlycommons.law.northwestern.edu/cgi/viewcontent.cgi?article=1203&context=facultyworkingpapers. 61 Consider Lawrence Lessig and the conceptualisation of code as law. Lessig draws attention to code as a form of control; that ‘code writers are increasingly lawmakers’. See L Lessig, Code 2.0 2nd edn (Basic Books, 2006) 79.

In ‘Self-Driving Contracts’, Casey and Niblett consider the gaps in contract theory owed to the ambiguity of natural language. They argue that, currently, natural language as a medium of legal expression allows contracts to be both intentionally and unintentionally incomplete.62 Intentional incompleteness is interesting because it implies that general language circumvents the ex-ante costs of decision-making and creates a space for changes in conditions. This, however, often leads to issues of enforceability, such as disputes about the definitions of ‘reasonable’ and ‘material’.63 Consequently, ‘self-driving contracts’ would use machine learning algorithms and expert systems to remove questions of enforceability.

A. Ergo

Much like ‘self-driving’ contracts, the aforementioned programming languages help automate the processes of contract creation and interpretation. As observed in the study, interpretation is internalised by the technical bounds of the programming language as contractual clauses are constructed to reason purposively. For Ergo, the question remains whether contractual ambiguities are a mere consequence of improper structural representation. Notably, the migration from text to model layer implies the potential for mathematical precision from inception. Duncan Kennedy argues that, whether for Hart or Kelsen, determinacy is a matter of degree.64 Though legal drafting may be simplified through the act of sorting, assessing whether a clause is sufficiently amenable to reusability is a difficult ask. The underlying assumption of the Cicero architecture is that the simplification process will not eventually alter the method of drafting. Perhaps a better question: is there value to qualitative descriptive clauses in legal writing? That is, would the ‘text’ layer remain relevant going forward; and what is the significance of retaining the natural language component of contract drafting? As discussed by Casey and Niblett, contracts are deliberately incomplete. Again, this is because contracts are manifestations of party intent.65 In effect, how contracts are written frames the behaviour of parties, and thereby influences their performance. Contracts that are negotiated tend to be less specific and have more room for interpretation. Performance is less likely to be exact. Yet, performance is not compromised despite the ‘incompleteness’ of the contract. Instead, the contract’s incompleteness signals trust between parties.66

62 AJ Casey and A Niblett, ‘Self-Driving Contracts’ (2017) 43 Journal of Corporation Law 101, 112–117. 63 ibid 113. 64 D Kennedy, Legal Reasoning: Collected Essays (Davies Group Publishers, 2008) 154. 65 Eigen (n 60). 66 ibid 17. Eigen references the study by Chou, Halevy and Murnighan. See EY Chou et al, ‘The Relational Costs of Complete Contracts’ (2011) IACM 24th Annual Conference Paper, papers.ssrn.com/sol3/papers.cfm?abstract_id=1872569.

Contracts, then, call for ambiguity, and specifically semantic ambiguity. In isolation, programming languages like Ergo create the illusion that mutual assent is automatic and indisputable. Semantic ambiguities no longer exist, as contractual negotiations are limited to operations with little care for parties’ preferences. This could potentially invoke a behavioural change, since contracts would become primarily functional in nature. Equally, this could conceivably lead to a simplification of contracts and a convergence towards contractual boilerplate. But, just as Cicero operates through the trifecta of text-model-data, natural language is indispensable to contract drafting. The role of natural language becomes monumental, ensuring that the elements of trust and party autonomy are not compromised and, rather, remain at the heart of contracts doctrine.

B. Lexon

Lexon’s language poses a similar puzzle. Readable in natural language, Lexon’s verbs are coded such that they coincide with the performance of the transaction. Diedrich’s formulation of meaning finds parallels with Ludwig Wittgenstein’s writings. Wittgenstein argues that language, as used presently, extends beyond names and ‘dry dictionary entries with their definitions’.67 The actions derived from words are effectively married to their meanings. It is conceivable then that language could be no more than a list of orders and classifications. It follows that abiding by the rules of association is accepting the inherent authority of its practice. Meaning is found in the performance of the word, and not in the understanding of it. Lexon claims that it neither translates nor transforms thought.68 Instead, Lexon preserves the natural language construction of ‘meaning’ by placing a constraint on its rules. That is, Lexon uses a subset of natural language grammar as the programming language of the legal contract.69 This approach is known as ‘controlled natural language’. Rather than processing all of natural language, a machine need only process an assigned vocabulary and grammar. The assigned set becomes the operatives of the language game. Equally, Lexon wears the legacy of Chomskyan formal semantics, whereby the syntactic structure is both a projection and vessel of its function. Interpretation is again internalised by ‘mapping … symbols to a reference structure’.70 The difficulty lies in whether the constrained grammar could sufficiently manage more complex legal contracts. In the attempt to draft contracts within its specific grammar, party intentions may be lost in the language game.

67 S Jasanoff, Can Science Make Sense of Life? (Polity, 2019) 117. Wittgenstein considered language as a form of life; and thereby, linguistic expression is constructive of its being. See also L Wittgenstein, Philosophical Investigations 2nd edn (Macmillan, 1958) 19. 68 Diedrich (n 46) 104. 69 ibid. 70 G Baggio, Meaning in the Brain (MIT Press, 2018) 62.


C. Blawx

Blawx, alternatively, required defining in advance the actions of contractual parties. Again, the code internalises interpretation as a preliminary step. Using declarative logic, Blawx must first set the parameters of its dataset. On several occasions,71 the code required defining new categories and forming different classifications in order to be amenable to translation. This typically involved making explicit the relationship between legal objects and their properties. Interestingly, legal questions, particularly those assumed to be amenable to mathematical configuration, were found to be challenging in the Blawx language. For example, the determination of a personal directive could easily be structured as a binary question. Still, it was necessary to define the object that did not fulfil the requirements of a personal directive. This subsequently provoked a deeper question on the implicit recognition of legal documents. Simply put, Blawx exposed the tacit force of law.

Reflecting on H.L.A. Hart, the underlying assumption is that ‘power-conferring rules […] exist not in virtue of some further law-making act, but in virtue of a fundamental rule of recognition implicit in the practice of law-applying officials’.72 Similarly, J.L. Austin contemplated the performative effect of ‘utterances’. The act of marriage, for example, demonstrates how the utterance of a certain few words puts into effect its meaning.73 Austin suggests that legal and moral obligations are relative to public specification; that utterances necessarily correspond with particular procedures situated within social contexts. Their mis-performance leads to a nullification or voidance of the act.74 In the case of Blawx, the meaninglessness and inability to articulate the ‘inverse’ of a legal document (ie missing the signature of a witness but otherwise a personal directive) points to the implicit dimension of the law.75 The dividing line between a document having legal force, or not, speaks to the inherent authority of legal rules. Just as marriage could only be recognised within a specific circumstance, it was necessary for Blawx to acknowledge the deeper context; that is, ‘how is legal recognition being defined?’ Blawx then applied a purposive interpretation, classifying legal recognition as validity. While the translation is rather sound – and validity is often a proxy for determining legal effect – the questions asked are distinct.

71 Blawx had encountered difficulty with interpreting the natural language of the legislation. Blawx recognised that it took ‘creative liberties’ in converting the statute to Blawx language. See Blawx (n 57). 72 HLA Hart, The Concept of Law (Oxford University Press, 1961) chs 4, 6. 73 JL Austin, How to Do Things with Words 2nd edn (Harvard University Press, 1975) 7. 74 ibid 16. 75 GJ Postema, ‘Implicit Law’ (1994) 13 Law and Philosophy 361. There is an alternative argument that Blawx may not be the right choice in programming language for particular types of law (ie legislation). That is, procedural languages could perceivably be a better option. Python, a procedural language, could construct a personal directive on the basis that the requirements are fundamentally conditional. There may be merit to a deeper investigation as to whether certain programming languages are more conducive to specific types of contracts.

From ‘is it legal’ to ‘is it valid’ is a necessary distinction in contract law. A contract may be valid but legally unenforceable. Therefore, interpreting legal force as validity subverts existing contract theory and, again, narrows interpretation to seemingly functional equivalents. Casey and Niblett are correct in noting that there will be an attempt to ‘pigeonhole [computable contracts] into existing frameworks of thought’.76 For Blawx, its uptake would likely require changes to existing contracts doctrines.

The challenge of using programming languages centres on interpretation. Drafting contracts in formal programming languages highlights the ambiguity of the original source. The task of translating contracts from descriptive natural language to code brings to light underlying assumptions of legal authority and re-evaluates party autonomy in contract theory. In nearly all the cases, the interpretative exercise was done ex ante: the contract’s legal effect was established in direct parallel to performance.

VI. Emerging Frontiers

As mentioned, formal programming languages have the effect of unifying legal concepts such as mutual assent with performance; effectively reinvigorating arguments associated with contractual boilerplate.77 Equally, they raise an argument for increased granularity: breaking down and identifying the conceptual components of contracts as specific executable tasks programmable in the language. In either case, there is a definite reframing of contracts doctrines. Recalling Derrida: is the use of computer code for legal writing beyond ‘convenient abbreviation’? Hofstadter would argue that computer code cannot be devoid of meaning and would indeed imprint its effect on the system. Hofstadter states,

[w]hen a system of ‘meaningless’ symbols has patterns in it that accurately track, or mirror, various phenomena in the world, then that tracking, or mirroring imbues the symbols with some degree of meaning ….78

Structure cannot be divorced from meaning. Perhaps the question to ask is not whether programming languages should be a legal language, but how they could be made amenable to the demands of contract law. Are these demands to create more complete contracts, or to limit ambiguity and ensure contract enforcement? These concerns speak to whether the effort to complete contracts or disambiguate contractual terms could resolve inherent tensions of contract interpretation and enforceability.

76 Casey and Niblett (n 62). 77 Boilerplate contracts as lifting the burden of interpretation and ensuring enforcement. Computable law borrows and extends the characteristics of contractual boilerplate in the name of increased precision, efficiency, and certainty. See Smith (n 32). 78 D Hofstadter, Gödel, Escher, Bach 20th anniversary edn (Basic Books, 1999) preface-3.

176  Megan Ma However, using programming languages to draft contracts could pose challenges akin to incorporating contractual boilerplate to new contracts. As Richard Posner argues, clauses ‘transposed to a new context may make an imperfect fit with the other clauses in the contract …’.79 At the current stage, the aforementioned programming languages appear to limit interpretation to functionality. By doing so, there runs the risk of a conceptual mismatch with existing contract theory; potentially reframing the purpose of contract law altogether. With the use of programming languages to draft contracts, the forthcoming challenges would be to ensure that the interpretative exercise is not forgotten. Instead, interpretation should continue to be understood as a continuous effort, allowing for responsiveness to changing environments. For programming languages to act as a legal language, party autonomy cannot be compromised. While the intention of program languages is not presumably to place limitations on contract formation, ‘law has language at its core’.80 Consequently, the functional nature of most programming languages has an inadvertently transformative impact on legal writing and the character of contract law. Next steps would require untangling performance from mutual assent.

A. New Encasings

In the case of Blawx and Lexon, the question is more complex as rules, categories, and framing are intentionally reconfigured. Blawx and Lexon are predicated on a shift in the performance of the law, bringing to light the translation of legal concepts. The adage, ‘the medium is the message’, is particularly relevant for these languages. The emergence of no-code platforms, including advances in visual development like Blawx, has been heralded as an opportunity to better integrate programming languages in legal drafting. While the ‘high level of abstraction of visual development creates a tight coupling between programming components and the meaning that the components are intending to communicate’,81 there remain unresolved questions on meaning-making. Both Blawx and Lexon express their own conceptual framework, redefining and asserting the meaning of existing legal interpretations. This further speaks to the limits of the law82 and the difficulty with demarcating legal concepts.

79 RA Posner, ‘The Law and Economics of Contract Interpretation’ (2005) 83 Texas Law Review 1581, 1587. 80 Markou and Deakin (n 27) 3; and ‘… the central place of language in law’ described in F Pasquale, ‘The Substance of Poetic Procedure: Law & Humanity in the Work of Lawrence Joseph’ (2020) 32 Law & Literature 1, 31. See Pasquale’s references to the similarities between lawyers and poets found in D Kader and M Stanford, Poetry of the Law: From Chaucer to the Present (2010). See also FE Cooper, Effective Legal Writing (Bobbs-Merill, 1953) and his introduction with Law is Language. 81 H Shadab, ‘Software is Scholarship’ (2020) MIT Computational Law Report, law.mit.edu/pub/softwareisscholarship/release/1. 82 As described by Joseph Raz as the exercise of distinguishing the principles and standards that should be included or excluded from the legal system. See J Raz, ‘Legal Principles and the Limits of Law’ (1972) 81 Yale Law Journal 823.

Similarly, programming languages such as Blawx and Lexon seek to offer comparable promises of clarity and precision. Their current state, however, could risk undercutting contracts doctrine as clauses are forcibly fitted to what is permissible in the language as opposed to legal principles. For Blawx, the conflation of validity with enforceability is problematic. Lexon, on the other hand, constructs barriers for contracting parties by limiting the vocabulary and grammar available. Again, the language must be sufficiently agile to accommodate the possibility of unpredictable circumstances. Ultimately, contracts regulate the future through transactions.83 Contracts allow performance ‘to unfold over time without either party being at the mercy of the other […]’.84 By confining the operational space, the ‘medium’ inadvertently ties the hands of its parties.

Alternatively, a hybrid language raises the potential for parallel drafting. An initial assessment of clauses that may be ‘code-ified’ thereby becomes paramount to maintaining the integrity of the contract. This could foreseeably demand defining working guidelines on articles and provisions that are: (1) invariant to context; and (2) suitable for varying types of contracts. Smith’s ‘modular boilerplate’ could be an excellent start; specifically, the combined assessment of the remoteness of the audience and the risk of the transaction.85 Still, contracts must be ‘tailored to the parties’ needs’,86 and integrating standard ‘reusable’ code could occasionally lead to an improper fit. To have equal effect between natural language clauses and code, execution must mirror the qualitative description.

Ron Dolin reflects on particular elements of contracts that are already ‘tagged, labelled, identified, or otherwise “marked up” … [and] amenable to complex search and integration’.87 Existing tools, such as the Extensible Markup Language (XML), predefine rules for encoding documents to allow for both human and machine-readability. Even in cases where rules are not predefined, definition languages88 outline permissible tags with attributes that are readily usable. Dolin argues that the trade-offs of using XML are largely between increased accuracy and reduced ambiguity against significant ‘upfront costs’.89 He suggests then that the difficulty of integrating XML in legal documents is unpacking the ‘intimate relationship between information needed to be exchanged […] and the shared, controlled vocabulary used to express details’.90 He cites the example of medical informatics, which thrived on XML integration. Its success, Dolin suggests, is owed to a standardised method of information exchange and ‘well-defined descriptions’.91

83 G Samuel, ‘The Reality of Contract in English Law’ (2013) 13 Tulsa Law Journal 508, 523. 84 Posner (n 79) 1582. 85 Smith (n 32) 1209–1210. 86 ibid 1210. 87 R Dolin, ‘XML in Law: An Example of the Role of Standards in Legal Informatics’, in R Dolin et al. (eds), Legal Informatics (CUP, 2021) 2. 88 See, eg, Document Type Definition (DTD), XML Schema Definition (XSD) and Relax NG. 89 Dolin (n 87) 7. 90 ibid.
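What clause-level markup of this kind might look like can be sketched with Python’s standard xml module; the tag and attribute names below are invented and do not reproduce the eContracts vocabulary:

import xml.etree.ElementTree as ET

# A contract fragment marked up with invented tags, then queried mechanically.
# Humans and machines read the same text.
doc = ET.fromstring("""
<contract>
  <clause topic="indemnity" risk="high">
    The Supplier shall indemnify the Buyer against third-party claims.
  </clause>
  <clause topic="payment" risk="low">
    The Buyer shall pay within 30 days of invoice.
  </clause>
</contract>
""")

for clause in doc.findall("clause[@topic='indemnity']"):
    print(clause.get("risk"), "-", clause.text.strip())
# -> high - The Supplier shall indemnify the Buyer against third-party claims.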

The question becomes: are there well-defined descriptions and a shared, controlled vocabulary in contract law? Two examples are informative: (1) the OASIS LegalXML eContracts Schema; and (2) the Y Combinator Series A Term Sheet Template.

OASIS, the Organization for the Advancement of Structured Information Standards, is a non-profit consortium that works on the development of standards across a wide technical agenda.92 In 2007, a technical committee on contracts created an XML language to describe a generic structure for a wide range of contract documents. This became the OASIS LegalXML eContracts Schema (eContracts Schema). The intention of the eContracts Schema is to ‘facilitate the maintenance of precedent or template contract documents and contract terms by persons who wish to use them to create new contract documents with automated tools’.93 That is, the eContracts Schema focuses on reproducibility, reusability, and recursion. The most striking feature of the eContracts Schema is its metadata component. The model allows users to add metadata at the contract and clause level for specific legal subject matter or the categorisation of distinct content. In this way, the eContracts Schema provides an opportunity for clauses to cater to the specific requirements of contractual parties. This is reminiscent of clause libraries. Increasingly, contract drafting software94 applies a similar granular approach, suggesting template clauses that may be easily amended in a no-code form.

The Y Combinator Series A Term Sheet Template (Term Sheet)95 is a standard form of terms for seeking Series A funding.96 It was drafted by Y Combinator, a venture investor that supplies earliest-stage venture funding for start-ups,97 and was created to inform founders of start-ups of the terms most frequently negotiated, particularly when seeking funding for this next stage. The Term Sheet was drafted based on the experiences of venture investors. Not only does it provide a baseline for founders, but more importantly, it increases transparency about investors’ perceived risks.98

91 ibid 8. 92 ‘About Us’ (OASIS Open Standards. Open Source.) www.oasis-open.org/org (last accessed August 2020). 93 See Abstract section ‘eContracts version 1.0’ (OASIS) docs.oasis-open.org/legalxml-econtracts/CS01/legalxml-econtracts-specification-1.0.html (last accessed August 2020). 94 See WeAgree Wizard and the no-code inclusion of contract blocks from their verified clause library. ‘Clause library – prevent reinventing the wheel’ (WeAgree Accelerated Contract Drafting) weagree.com/contract-automation/#clause-library-integrated-contract-automation (last accessed April 2021). 95 ‘Series A Term Sheet Template’ (Y Combinator) www.ycombinator.com/series_a_term_sheet/ (last accessed August 2020). 96 Series A funding is defined as funding to further refine the product and monetise the business, once a start-up has established a user base with consistent performance. See N Reiff, ‘Series A, B, C, Funding: How It Works’ Investopedia (5 March 2020) www.investopedia.com/articles/personal-finance/102015/series-b-c-funding-what-it-all-means-and-how-it-works.asp. 97 ‘About Y Combinator’ (Y Combinator) www.ycombinator.com/about/ (last accessed August 2020). 98 ‘Series A Term Sheet Template’ (Y Combinator) www.ycombinator.com/series_a_term_sheet/ (last accessed August 2020).

Unlike the eContracts Schema, the Term Sheet is not ‘technologically-driven’. Nevertheless, it illustrates that well-defined descriptions and a shared, controlled vocabulary exist in contracts. To a large extent, the Term Sheet is no different from any existing contract template. Yet, its most unique characteristic is the tone of the contract. Unlike other templates, the intention is not to blindly assert ‘boilerplate’ contractual terms to drafters. Instead, the Term Sheet offers recommendations to support the positions of both contractual parties. A recent Legal Tech start-up, Lawgood, mirrors the exact notions of the Term Sheet: contract drafting based on verified expertise. Lawgood’s drafting tool, the Contract Workbench, heightens the quality of the drafting process by developing a precedent language tailored to the positions of the parties.99 Consider the sample indemnification clause drafted on Lawgood.

Figure E  Extracted from demo of Lawgood lawgood.io/ (last accessed November 2021).

There are a number of fascinating features100 to the software. Notably, Lawgood offers several drafting options depending on the needs of the contractual parties. The familiarity of MS Word is coupled with a toggle switch that highlights the most common positions negotiated when drafting indemnity clauses. Below the toggle, a ‘simplified’ version of the term explains the meaning of the various positions, distilling and translating the legalese to plain English.

99 Lawgood, lawgood.io/product (last accessed August 2020). 100 It should be noted that features of Lawgood extend beyond the toggle. There are also text buttons and embedded code. See ibid.

Unlike the examples of the programming languages studied in the paper, the translations are intended to be instructive rather than binding.

There are indubitably caveats to the software. The precedent language, created by Lawgood, draws primarily from the experiences of its developers. It gathers the collective legal knowledge of contractual precedents specific to the expertise of its founders. The product is, therefore, limited to the frameworks stipulated by its creators. Nonetheless, Lawgood illustrates that a marriage of the old and new is possible – in particular, the prospect of a shared lexicon in contract law.

All in all, hybrid languages represent the recurring theme that there are distinct metaphorical spaces between determinacy and indeterminacy. Legal drafting is simplified through the act of sorting: assessing whether a clause is sufficiently amenable to reusability. From XML to Lawgood, the open secret is that contractual language will always remain a dialogic process between its parties.

To conclude, the purpose of the study is not to suggest that programming languages are not a possibility for legal writing. In fact, formal languages could provoke a more transparent discussion of the obligations and expectations involved within the dynamics of contractual negotiation.101 Yet, the mechanics of current programming languages illuminate that there is still work required for code to become a legal language. Reflecting on programming languages as a medium for contract drafting has revealed that language indeed could alter the function of contract law. Further discussion is required on how programming languages could better navigate and shape the legal landscape. For now, perhaps it can be understood that the tool is an extension of the craft, and not simply a means for its effectuation.



101 Recall the discussion on modularity.

10

Summarising Multilingual Documents: The Unexpressed Potential of Deep Natural Language Processing

LUCA CAGLIERO

I. Preliminaries

In recent years, the digitalisation process has produced a huge volume of electronic documents such as, amongst others, news articles, financial reports and legal documents. These data represent invaluable resources for private and public bodies. On the one hand, they can be stored, managed and retrieved using traditional data management systems,1 which take advantage of Information and Communication Technologies in order to support ordinary business activities (eg, accounting, business plan writing, planning of marketing strategies). On the other hand, they can be deeply analysed using deep learning and machine learning techniques in order to extract new, hidden knowledge from them.2 Unlike traditional business intelligence platforms, whose aim is to support decision-making strategies based on the exploration of large volumes of historical business-oriented data, data mining and machine learning techniques rely on advanced algorithmic solutions to leverage historical, heterogeneous data in the extraction of hidden yet actionable knowledge. The key advantage behind the use of machine learning and deep learning is that end-users are not required to know in advance which patterns are most relevant to make appropriate decisions. Hence, the learning process executed on the raw data is able to automatically infer the most valuable correlations among the data in order to derive advanced data descriptions and accurate predictions of target variables.3

1 A Silberschatz, HF Korth and S Sudarshan, Database System Concepts, 7th edn (McGraw-Hill Book Company, 2020).
2 I Goodfellow, Y Bengio and A Courville, Deep Learning (MIT Press, 2016).
3 J Geller, 'Data Mining: Practical Machine Learning Tools and Techniques – Book Review' (2002) 31 SIGMOD Record 76–77.

Exploring textual data is particularly challenging because it requires interpreting natural language, which strongly depends on the application domain, the language, and the target audience. Advanced Natural Language Processing techniques are needed to understand both the syntax and the semantics behind the text. Specifically, Natural Language Processing techniques focus on studying the interaction between computers and natural languages, and adopt computer applications to process large natural language corpora. Within this scope, the diffusion and integration of Deep Learning architectures has produced significant advances towards a better understanding of multilingual document corpora. They encompass a wide range of pre-trained and fine-tuned models which are able to adapt to the domain under consideration, to translate text from one language to another, and to map semantically related concepts without the need for ad hoc, rule-based models. Typical applications include text summarisation (ie, generate concise summaries of large corpora), machine translation (ie, translate a text from one language to another), sentiment analysis (ie, extract subjective information from a text such as opinions, moods, rates, and feelings) and question answering (eg, design chatbot applications).

Legal documents are a particular kind of textual corpus that deserves increasing attention from the machine learning research community.4 Since the effects of digitalisation have become remarkable, the production rate of electronic legal documents has reached an unprecedented level. The availability of such a large volume of legal data in electronic form has enabled the use of smart, data-driven algorithms and tools whose application was previously hindered. Legal document corpora, such as contracts and dispositions, have peculiar characteristics (Jain 2021): the use of complex and multiple-level document structures with several nested sections, subsections, and paragraphs; the presence of a relatively high number of repetitions and redundant forms; the abundance of intra- and cross-document citations; and the peculiarity of the vocabulary used. To manage these contents, an automated way to pinpoint the most relevant information is desirable.

Automated text summarisation focuses on extracting the key information from large document collections. Typically, the raw input text is first partitioned into separate snippets consisting of single words, multi-word forms, phrases, sentences, or paragraphs. Then, a mining process is applied to the prepared documents in order either to shortlist the most significant content (eg, the most relevant sentences) or to rephrase part of the existing content. Content evaluation and processing must consider complementary aspects such as relevance, redundancy, timeliness and pertinence to the domain under analysis. The application of text summarisation techniques to multilingual documents is aimed at easing the process of document perusal by making the input content easily accessible and promptly usable. In the legal domain, it can also be applied to provide targeted summaries tailored to specific queries or topics.

4 HR Turtle, 'Text Retrieval in the Legal World' (1995) 3 Artificial Intelligence and Law 5–54.

The present chapter introduces the automated text summarisation task and discusses its applications to the legal domain. First, it overviews the fundamentals of Natural Language Processing pertinent to the text summarisation problem. Then, it highlights the main challenges in processing textual documents coming from the legal domain. Finally, it presents some practical applications of text summarisation that can be profitably exploited to tackle relevant challenges, such as the presence of multilingual documents, the lack of semantic annotations, and the need to tailor the knowledge discovery process to a specific domain and to combine multimodal information.

II. The State of the Art of Automated Text Summarisation

Automated text summarisation is a well-known Natural Language Processing task, which focuses on extracting concise representations of textual documents of various types and domains, ranging from scientific reports and legal contracts to news articles (Galgani 2015). The advantages of summarising relatively large collections of textual documents include, amongst others, the better accessibility and readability of the summarised content,5 the automatic annotation of the source content to ease content browsing and exploration,6 and the generation of summaries tailored to particular purposes.7

In the traditional summarisation pipeline the document content is first split into elementary units representing words, multi-word forms, phrases, or sentences. These textual units are exploited to build a unified document representation on top of which a summarisation algorithm is applied in order to identify a subset of relevant and non-redundant units.

The summarisation task can produce either a general-purpose summary or query-driven results. More specifically, general-purpose techniques focus on processing the raw text and extracting a summary that reflects the main document characteristics, without any further end-user requirement. Conversely, query-driven summarisation is aimed at generating an output summary that reflects the expectations of a given user or that is pertinent to a particular topic or sub-topic. In the latter case, the summarisation algorithm may produce different summaries of the same document according to the user-specified constraints. Examples of query-driven summarisation systems include content

5 V Gupta and GS Lehal, 'A survey of text summarisation extractive techniques' (2010) 2(3) Journal of Emerging Technologies in Web Intelligence 258–268.
6 N Nazari and MA Mahdavi, 'A survey on automatic text summarisation' (2019) 7(1) Journal of AI and Data Mining 121–135, doi.org/10.22044/jadm.2018.6139.1726.
7 A Nenkova and K McKeown, 'A survey of text summarisation techniques' in CC Aggarwal and C Zhai (eds), Mining Text Data (Springer, 2012) 43–76.

curation platforms,8 where end-users visualise summary content extracted from various news articles. Each summary is tailored to a specific topic, whose description is provided as input by the end-user.

Summarisation algorithms can be abstractive, when the original text is rephrased by means of automated text generation models, or extractive, when they perform a selection of the existing content. Abstractive models have already been applied in application contexts in which the output summary should consist of a relatively short text snippet reflecting a specific intent or a particular language form or inflection.9 For example, automatic headline generation is a typical scenario where readers would expect a synthetic yet meaningful rephrasing of the document content (San 2015). Similarly, chatbot systems commonly provide answers to targeted questions by not only selecting existing content but also adapting the answer to the target user and the relative context.10 Conversely, extractive methods are commonly used to generate abstracts of verbose documents (eg, legal documents), to annotate scientific reports, and to summarise news events described by redundant news flows. In the legal domain, they can be profitably used to highlight the key aspects relative to specific legal elements (Jain 2021).

Content selection is typically driven by two complementary evaluation metrics:11 relevance measures content significance within the analysed documents, whereas redundancy measures content novelty with respect to the previously selected content. Relevance is commonly quantified separately for each unit of text under consideration. For example, in sentence-based text summarisation the summary consists of a selection of document sentences, and content relevance can be measured both at the sentence and the word/phrase levels. Redundancy can be measured globally on the whole summary or locally for each candidate sentence. The idea behind it is to iteratively add to the summary only the sentences that provide sufficiently new information. The goal of the text summarisation task is to build a summary that consists of the textual units that maximise content relevance and minimise content redundancy.

The quality of automatically generated summaries is commonly assessed in an empirical way to verify the effectiveness and usability of the automated summarisation systems. Several methods to quantitatively and qualitatively evaluate

8 K Stanoevska-Slabeva, V Sacco and M Giardina, 'Content Curation: A New Form of Gatewatching for Social Media?' (2012), retrieved from pdfs.semanticscholar.org/89e2/06cdb4f36ff9b0b69b3244b3be4c883d1f4e.
9 Y Miao, 'Deep Generative Models for Natural Language Processing', unpublished doctoral dissertation (University of Oxford, 2017).
10 S Gupta and SK Gupta, 'Abstractive Summarisation: An Overview of the State of the Art' (2019) 121 Expert Systems with Applications 49–65, doi.org/10.1016/j.eswa.2018.12.011; V Gupta, N Bansal and A Sharma, 'Text summarisation for big data: a comprehensive survey', paper presented at the International Conference on Innovative Computing and Communications, Singapore (2019).
11 C Lin and E Hovy, 'Automatic evaluation of summaries using N-gram co-occurrence statistics', Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (2003) 1, 71–78.

the quality of the summarisation system have been proposed in the literature (Lin and Hovy 2004). They include both intrinsic and extrinsic evaluation metrics.12 Intrinsic evaluators assess the quality of a summary against a ground truth or based on the feedback provided by a pool of domain experts who manually assess the automatically generated summaries. Conversely, extrinsic evaluation metrics measure the effectiveness of a summarisation method in accomplishing a given task. The latter type of measure is commonly used in Information Retrieval.13 A description of the most popular intrinsic evaluation metrics is reported below.

The Rouge metric is the most popular intrinsic evaluation score (Lin and Hovy 2004). It measures the unit overlaps between the content of the reference summary, ie the ground truth, and that of the automatically generated summary. According to the type of textual unit considered in the evaluation, the following Rouge scores have been defined:
–– Rouge-N: measures the overlap between n-grams, ie sequences of n words in a row.
–– Rouge-L: measures the overlap based on the longest common subsequence of words.
–– Rouge-W: a variant of Rouge-L that gives priority to sub-series of overlapped sequences.
–– Rouge-S: exploits skip-bigrams as reference text units. Skip-grams are pairs of words in the sentence with arbitrary gaps.

The Rouge tool can be exploited to compare the output summaries against an arbitrary ground truth. Typically, a pool of domain experts is first involved in the generation of the reference summaries for a selection of document collections. Next, the summarisation methods are applied to the test collection in order to verify whether the developed solution is suitable for summarising documents within the domain under consideration.
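To make the mechanics concrete, the following minimal Python sketch computes Rouge-N recall as n-gram overlap between a reference and a candidate summary. The whitespace tokenisation and the example sentences are illustrative assumptions; in practice, an established implementation of the Rouge tool would be used.

```python
from collections import Counter

def ngrams(tokens, n):
    # Multiset of n-grams, ie sequences of n words in a row
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(reference, candidate, n=2):
    # Fraction of reference n-grams that also occur in the candidate summary
    ref = ngrams(reference.lower().split(), n)
    cand = ngrams(candidate.lower().split(), n)
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    total = sum(ref.values())
    return overlap / total if total else 0.0

reference = "the court dismissed the appeal on procedural grounds"
candidate = "the court dismissed the appeal and awarded costs"
print(rouge_n_recall(reference, candidate, n=2))  # 4 of 7 reference bigrams overlap
```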

A. Timeline and Update Summarisation

When the document collection varies over time, there is a need to capture the most salient trends in the analysed textual corpus in a timely manner. Timeline summarisation addresses the problem of tracing the temporal evolution of salient events described by a stream of news documents. It has found application in a variety of

12 JM Conroy, JD Schlesinger and DP O'Leary, 'Topic-Focused Multi-Document Summarisation Using an Approximate Oracle Score' (2006), ACL Anthology.
13 DC Blair, Language and Representation in Information Retrieval (Elsevier North-Holland, 1990).

contexts, such as news and email processing. In the legal domain, it can be profitably exploited to trace the main changes in ordinary laws, actions, and dispositions.

Timeline summarisation automatically creates a timeline of an event over a long-lasting time period. The timeline consists of a sequence of daily summaries for the most important dates. The timeline summarisation task is modelled as follows:
–– date selection: select a subset of the most important dates.
–– date summarisation: summarise the documents related to each of the selected dates separately.

In the former task, all the available dates are ranked by decreasing importance within the reference period. The key aspects that need to be considered during date selection include, amongst others, the number of published articles per date, the number of references from other dates to the selected date, and the content of the documents on the considered date. In the latter step, a summary per selected date is generated. The summary should reflect the most peculiar aspects relative to that particular date.

Beyond content relevance, the timeline summarisation process also considers an additional summary feature, namely the latency of the selected content. Since the source document content is a flow of news descriptions, it is likely that the same information is repeated multiple times. Latency indicates the time gap between the first occurrence of a piece of relevant content and any of its repetitions. Ideally, the text summariser should pick the salient content at the earliest time it occurs. Therefore, the main timeline summarisation goal is to maximise content relevance and to minimise redundancy and latency. Three main strategies have been proposed to address the timeline summarisation process (Ghalandari and Ifrim 2020):
1. Direct summarisation: the news document collection is treated as a single set of sentences from which a timeline is directly extracted.
2. Date-wise approach: it first selects the important dates and then builds a summary per date.
3. Event detection: it first detects and ranks the main events contained in the document collection; then, the most important ones are summarised separately.

Date-wise approaches have been shown to be the best performing. They take advantage of a two-stage process that jointly evaluates document content and citations to decide whether a specific date is particularly relevant. This approach has achieved high-quality results in contexts where the document citation network is quite dense. Hence, it is suitable for summarising timelines of legal documents, where intra- and inter-document citations have primary importance.

A related issue is to update the content of the summary of a document collection whenever significant changes occur. The latter task is commonly named the

update summarisation task.14 In order to accomplish the aforesaid task efficiently, the summarisation models can be incrementally updated without the need for recomputing them from scratch. This is particularly useful when the automated summarisation process is expected to generate an output (almost) in real time. Under this umbrella, the choice of the algorithmic solution is crucial for guaranteeing the timely generation of the output summary. Furthermore, the quality of the results is strongly affected by the effectiveness of a preliminary content filtering phase, whose main goal is to discard the non-pertinent content from the raw data stream as early as possible.
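As an illustration of the date-wise strategy, the following Python sketch ranks dates by publication counts and incoming date references, and then builds a short summary per selected date. The article dictionary layout and the naive sentence-length scoring hook are illustrative assumptions, not part of any published system.

```python
from collections import Counter, defaultdict

def datewise_timeline(articles, num_dates=5, sents_per_date=2, score=len):
    """Date-wise timeline sketch: (1) rank dates by publication counts and
    incoming date references; (2) summarise the documents of each selected date."""
    importance = Counter()
    sentences_by_date = defaultdict(list)
    for art in articles:  # assumed shape: {"date": ..., "cited_dates": [...], "sentences": [...]}
        importance[art["date"]] += 1            # articles published per date
        for d in art["cited_dates"]:            # references from other dates
            importance[d] += 1
        sentences_by_date[art["date"]].extend(art["sentences"])
    timeline = {}
    for date, _ in importance.most_common(num_dates):   # date selection
        ranked = sorted(sentences_by_date[date], key=score, reverse=True)
        timeline[date] = ranked[:sents_per_date]        # date summarisation
    return timeline
```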

B. Language-dependent Solutions to Text Summarisation

Handling textual documents written in different languages is crucial for creating a vast knowledge base that includes content generated in different geographical areas. In various application domains, such as news event description, contracting and scientometrics, the source documents often include texts in different languages. Hence, there is a need to study and develop algorithmic solutions able to handle documents written in different languages as well as to combine the information provided by cross-lingual content.

Multilingual text summarisation focuses on extracting summaries from documents all written in the same language. The peculiarity of these systems is their inherent portability to document collections written in different languages. Therefore, summarisation systems that are designed to handle English-written documents can be easily adapted and reused on documents written in other languages.

Cross-lingual text summarisation aims at combining texts written in different languages in order to produce a summary in a target language that is potentially different from that of the source documents. For example, given a collection of documents written in French, it may produce an output summary written in Italian. The cross-lingual context may require ad hoc translation steps when the source documents do not include any portion of text in the target language. This implies the need to combine extractive and abstractive summarisation methods. More specifically, the key steps of the cross-lingual summarisation task are enumerated below (a minimal sketch follows the list):
–– Traditional document summarisation: The input documents are summarised using traditional language-specific approaches. The intermediate summary generated at this stage is written in the language of the input documents.

14 D Wang and T Li, 'Document Update Summarisation Using Incremental Hierarchical Clustering', Proceedings of the 19th ACM International Conference on Information and Knowledge Management (2010) 279–288.

–– Machine translation: The intermediate summaries are automatically translated into the target language using ad hoc translation methods. This step does not perform any further processing on the text except for language translation.15
–– Content compression: The content of the translated summary is compressed to convey as much information as possible in the produced summary.16

The increasing availability of textual documents written in different languages has prompted the need to increase the effectiveness and efficiency of cross-lingual summarisation systems. For example, in the legal domain, managing contracts, rulings and civil and penal codes written in several languages would allow lawyers to enrich their domain knowledge and improve their awareness of local and foreign practices and rules.
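A minimal Python sketch of this three-step pipeline is given below, assuming French sources and an Italian target. The naive frequency-based extractor, the trivial compression step, and the OPUS-MT model name are illustrative assumptions; production systems would use far stronger components at each stage.

```python
from collections import Counter
from transformers import pipeline

def crosslingual_summary(french_sentences, num_sentences=3):
    # 1) Traditional (language-specific) summarisation: a naive frequency-based
    #    extractive step standing in for a proper French summariser.
    freq = Counter(w.lower() for s in french_sentences for w in s.split())
    def relevance(sentence):
        words = sentence.split()
        return sum(freq[w.lower()] for w in words) / max(len(words), 1)
    intermediate = sorted(french_sentences, key=relevance, reverse=True)[:num_sentences]

    # 2) Machine translation of the intermediate summary (French -> Italian);
    #    the model name assumes the Helsinki-NLP OPUS-MT family.
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-it")
    translated = [out["translation_text"] for out in translator(intermediate)]

    # 3) Content compression: kept trivial here (simple concatenation).
    return " ".join(translated)
```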

III. The Summarisation Pipeline

The summarisation pipeline aims at transforming raw data in textual form into actionable knowledge. It consists of the following steps:
–– Text pre-processing: the textual content is transformed into a more structured data representation (Gupta and Lehal 2010) using established linguistic features.
–– Summarisation: the prepared data are analysed by means of a text summarisation approach in order to extract the output summary.17
–– Post-processing: the summary can be visualised or integrated into a decision support system to enable further text processing and end-user exploitation.

Hereafter, each step of the summarisation pipeline will be thoroughly described.

15 X Wan, F Luo, X Sun, S Huan and JG Yao, 'Cross-Language Document Summarisation via Extraction and Ranking of Multiple Summaries' (2018) Knowledge and Information Systems 1–19.
16 X Wan, H Li and J Xiao, 'Cross-language document summarisation based on machine translation quality prediction', Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (2010) 917–926.
17 R Mihalcea, 'Graph-based ranking algorithms for sentence extraction applied to text summarisation', Proceedings of the ACL 2004 Interactive Poster and Demonstration Sessions, Barcelona, Spain (2004).

A. Text Pre-Processing

Text pre-processing techniques encompass a large variety of data transformation and cleaning methods. The most relevant ones are enumerated below; a minimal pipeline sketch follows the list.
–– Noise filtering: cleans the raw text of typos and special characters that are likely to be irrelevant for subsequent text analyses. For example, it can remove numbers, HTML tags, punctuation, etc. The definition of noise strongly depends on the domain in which the textual corpus was generated.
–– Lowercasing: consists in making the analytics process case insensitive by lowercasing the whole text.
–– Sentence segmentation: entails splitting the raw text into units of text that represent semantically meaningful pieces of information. The most common segmentation stages include sentence, clause, and phrase splitting. Sentences are commonly split according to punctuation. Clauses and phrases are portions of a full sentence that can be useful for describing a specific event aspect. Specifically, a clause is a group of words containing a subject and a verb, whereas a phrase is a group of words without a subject-verb component, used as a single part of speech.
–– Word tokenisation: aims at splitting the sentences into words, the smallest units of language that convey meaning. It usually takes the presence of multi-word forms into account (eg, the 'White House'). On top of word segmentation, the text can be further split into alternative, more meaningful representations such as terms and n-grams. A term is a sequence of one or more tokenised and lemmatised word forms. An n-gram is a sequence of n words in a row, which can represent more complex natural language forms.
–– Stemming: the process of reducing inflected words to a root form, where the root does not necessarily correspond to the morphological root of the word. Heuristic, rule-based models are commonly applied to map related words to the same stem.
–– Stopword elimination: filters out the words that convey limited informative content (eg, prepositions, conjunctions, articles).
–– Lemmatisation: the algorithmic process of determining the lemma of a word based on its intended meaning. It groups together the inflected forms of a word so that they can be treated as a single item identified by a common lemma.
–– Part-Of-Speech (POS) tagging: aims at annotating the words in a text with the corresponding part of speech (noun, verb, adjective). Annotation relies on both the underlying meaning of a word and its context of usage. It performs a syntactical analysis of the sentences according to a certain grammar theory. Constituency grammars describe the syntactical structure of the sentence in terms of recursively built phrases, ie sequences of syntactically grouped elements (eg, noun phrases, verb phrases, prepositional phrases, adjective phrases, and clauses). Dependency grammars analyse the dependencies between words.
–– Named Entity Recognition and Disambiguation (NERD): focuses on mapping nouns to semantically relevant entities in domain-specific ontologies (eg, Wikidata). Whenever an entity is recognised in the text, it can be profitably exploited to drive further analyses such as text categorisation, recommendation, or clustering. When a noun can be mapped to multiple entities depending on the relative context (eg, the word Cars can denote either a vehicle or a movie), an additional disambiguation step is necessary in order to rank the candidate entities and pick the most likely one according to the preceding and subsequent words.
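The following Python sketch chains several of these steps using spaCy. The model name en_core_web_sm and the example sentence are assumptions made for illustration; a real legal pipeline would add domain-specific tokenisation and entity linking.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # small general-purpose English pipeline

def preprocess(raw_text):
    """Sentence segmentation, noise/stopword filtering, lowercasing,
    lemmatisation, POS tagging and named entity recognition."""
    doc = nlp(raw_text)
    sentences = []
    for sent in doc.sents:                                        # sentence segmentation
        tokens = [
            (tok.lemma_.lower(), tok.pos_)                        # lemma + lowercasing + POS
            for tok in sent
            if not (tok.is_stop or tok.is_punct or tok.like_num)  # noise/stopword filtering
        ]
        sentences.append(tokens)
    entities = [(ent.text, ent.label_) for ent in doc.ents]       # entity recognition
    return sentences, entities

sents, ents = preprocess("The Court of Justice dismissed the appeal on 3 May 2021.")
```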

i. Occurrence-based Text Representations

Since data mining algorithms are unable to directly process textual data in their original form, documents must be transformed into a more manageable representation. Traditional document representations rely on an occurrence-based model, where for each document the number of occurrences of each textual unit in a dictionary is stored. These models consider two main syntactical properties:
–– Term frequency: the frequency of occurrence of a term in the analysed documents.
–– Document frequency: the number of documents in which a given term occurs.

The aforesaid indicators can be conveniently combined in order to compute appropriate term relevance scores (eg, term frequency-inverse document frequency, TF-IDF), as sketched below.
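The brief Python sketch below builds TF-IDF vectors with scikit-learn and uses them for a greedy sentence selection that trades relevance against redundancy, in the spirit of the content-selection metrics discussed in section II. The toy sentences and the lambda weighting are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extractive_summary(sentences, k=2, lam=0.7):
    """Greedy selection maximising relevance (similarity to the document
    centroid) while penalising redundancy (similarity to already selected text)."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    centroid = np.asarray(tfidf.mean(axis=0))         # crude document-level signal
    relevance = cosine_similarity(tfidf, centroid).ravel()
    pairwise = cosine_similarity(tfidf)
    selected = []
    while len(selected) < min(k, len(sentences)):
        best, best_score = None, float("-inf")
        for i in range(len(sentences)):
            if i in selected:
                continue
            redundancy = max((pairwise[i][j] for j in selected), default=0.0)
            score = lam * relevance[i] - (1 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return [sentences[i] for i in sorted(selected)]   # keep original order

print(extractive_summary([
    "The buyer shall pay the purchase price within thirty days.",
    "The purchase price shall be paid by the buyer within thirty days.",
    "Either party may terminate the agreement upon material breach.",
]))
```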

ii. Distributed Representations of Words

The advent of Deep Learning models has recently enabled the use of distributed representations of words, also called word embeddings, to represent document content at the word level. Each word in an input vocabulary is mapped to a vector of real numbers in an ad hoc vector space in which most of the semantic relationships between words are preserved. Summarisation systems can exploit document-level similarities between word embeddings to capture content relevance and redundancy.

Word2Vec18 is the most popular method to learn word embeddings. Developed in 2013 at Google by Tomas Mikolov, it is based on a deep neural network architecture trained to reconstruct the linguistic context of a word, which is represented by the immediately preceding and subsequent words. More specifically, given an input word, the SkipGram model gives the probability for each word in the vocabulary of being a nearby context word. Instead, in the Continuous Bag of Words (CBOW) model the network tries to predict a term using its context as input. For example, given the words w(t-1), w(t-2), w(t+1), w(t+2), the model will output w(t). The context can be a single word or a set of words.

18 T Mikolov, K Chen, G Corrado and J Dean, 'Efficient Estimation of Word Representations in Vector Space', arXiv preprint arXiv:1301.3781 (2013).

The number of words in each context is determined by a parameter called 'window size', which indicates how many words before and after a given term will be included in the context. The Word2Vec model can be efficiently queried in order to detect relationships between words by exploiting the nearest-neighbour functionality. Specifically, given a word in the latent vector space, its nearest words are likely to be the most semantically related ones.

FastText19 is another popular word embedding model, developed by Facebook Inc. in 2016. The idea behind it is to use sub-word and character-level information, and not just words, to get word embeddings. The advantage is that the embeddings of words that are not present in the original dictionary can be derived by a combination of lower-level embeddings. For example, the word 'person' can be represented by a combination of character n-grams as follows: < pe, per, ers, rso, son, on >.

GloVe,20 an abbreviation of Global Vectors, is another unsupervised learning algorithm, proposed by Stanford researchers in 2014. GloVe exploits global co-occurrence-based word statistics to obtain word vectors. Unlike Word2Vec, which exclusively relies on window-based occurrences, GloVe also considers the fact that some context words appear more frequently than others in the analysed document collections. The key idea is to derive linguistic or semantic similarity based on the co-occurrences between word pairs (ie, a co-occurrence matrix).
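The sketch below trains a tiny Word2Vec model with gensim and runs a nearest-neighbour query. The three-sentence corpus is purely illustrative, since meaningful embeddings require thousands of documents.

```python
from gensim.models import Word2Vec

sentences = [  # toy corpus of pre-tokenised sentences
    ["the", "court", "dismissed", "the", "appeal"],
    ["the", "tribunal", "rejected", "the", "appeal"],
    ["the", "buyer", "breached", "the", "contract"],
]
model = Word2Vec(
    sentences,
    vector_size=50,  # dimensionality of the embedding space
    window=2,        # the 'window size' parameter discussed above
    min_count=1,
    sg=1,            # 1 = SkipGram, 0 = CBOW
)
# nearest-neighbour query in the latent vector space
print(model.wv.most_similar("court", topn=3))
```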

B. Text Summarisation Algorithms

Several techniques have been exploited to extract significant content from the original documents. The most popular ones are enumerated and briefly described below.
–– Statistical-based approaches: they pick the most important sentences and/or words from the source documents based on statistical analyses performed on a representative subset of document descriptors, which are typically occurrence-based statistics (Gupta and Lehal 2010).
–– Clustering-based approaches: these methods focus on automatically grouping the most similar sentences within the same cluster21 and then identifying a representative sentence per cluster, namely the centroid.22

19 P Bojanowski, E Grave, A Joulin and T Mikolov, 'Enriching Word Vectors with Subword Information', arXiv preprint arXiv:1607.04606 (2016).
20 J Pennington, R Socher and CD Manning, 'GloVe: Global Vectors for Word Representation', Proceedings of Empirical Methods in Natural Language Processing (EMNLP 2014) 1532–1543.
21 RM Alguliyev, NR Isazade, A Abdi and N Idris, 'COSUM: Text Summarisation Based on Clustering and Optimisation' (2019) 36(1) Expert Systems e12340, doi.org/10.1111/exsy.12340.
22 D Wang, S Zhu, T Li, Y Chi and Y Gong, 'Integrating Document Clustering and Multi-document Summarisation' (2011) 5(3) ACM Transactions on Knowledge Discovery from Data 14.

–– Graph-based approaches: these approaches model pairwise sentence similarity in a graph, which can be analysed in order to find the most authoritative sentences.23 A minimal sketch of this strategy follows the list.
–– Itemset-based approaches: these methods focus on extracting and analysing the most recurrent co-occurrences among multiple words.24 These word combinations are representative of the underlying topic trends in the analysed document collections. Thus, they are exploited to weigh the relative importance of the corresponding sentences.
–– Concept-based approaches: they first perform NERD and then retrieve the most relevant sentences based on the presence of relevant concepts and their mutual relationships.25
–– Topic-based approaches: they identify the most salient document topics and then cover each topic with a sufficiently large subset of representative sentences (Nenkova and McKeown 2012). For example, Latent Semantic Analysis (LSA) is a popular approach to detecting latent topics in large document corpora, where the analysis of text semantics is based on the observed co-occurrence of words (Steinberger and Jezek 2004).
–– Optimisation-based approaches: these methods reformulate the summarisation task as an optimisation problem and solve it using ad hoc solvers such as Integer Linear Programming models.26 The objective function typically combines the maximisation of content relevance with the minimisation of content redundancy.
–– Machine Learning-based approaches: these methods transform the summarisation problem into a supervised classification problem (Silva 2017). The goal here is to classify each sentence of the test document as either worth including in the summary or not, using a training set of documents, ie a collection of documents and their respective human-generated summaries (Moratanch and Chitrakala 2017). The main drawback is that, in many real-life contexts, a sufficiently large collection of annotated documents is often not available.
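The following Python sketch illustrates the graph-based strategy: sentences become nodes, edges are weighted by word overlap, and PageRank centrality (via networkx) picks the most authoritative sentences. The word-overlap similarity is a deliberately crude assumption; embedding-based similarities, discussed below, are the stronger modern choice.

```python
import itertools
import networkx as nx

def graph_based_summary(sentences, top_k=2):
    """LexRank/TextRank-style extractive sketch: rank sentences by
    PageRank centrality in a word-overlap similarity graph."""
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    words = [set(s.lower().split()) for s in sentences]
    for i, j in itertools.combinations(range(len(sentences)), 2):
        overlap = len(words[i] & words[j])        # crude pairwise similarity
        if overlap:
            graph.add_edge(i, j, weight=overlap)
    scores = nx.pagerank(graph, weight="weight")  # authority of each sentence
    best = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return [sentences[i] for i in sorted(best)]   # preserve document order
```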

23 G Erkan and DR Radev, 'LexRank: Graph-based lexical centrality as salience in text summarisation' (2004) 22 Journal of Artificial Intelligence Research 457–479.
24 E Baralis, L Cagliero, A Fiori and P Garza, 'MWI-Sum: A Multilingual Summariser Based on Frequent Weighted Itemsets' (2015) 34(1) ACM Transactions on Information Systems 5.
25 JM Conroy, JD Schlesinger, J Goldstein and DP O'Leary, 'Left-Brain/Right-Brain Multi-Document Summarisation', Proceedings of the Document Understanding Conference (2004).
26 H Takamura and M Okumura, 'Text Summarisation Model Based on Maximum Coverage Problem and Its Variant', Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (2009a) 781–789; H Takamura and M Okumura, 'Text Summarisation Model Based on the Budgeted Median Problem', Proceedings of the 18th ACM Conference on Information and Knowledge Management (2009b) 1589–1592.

i. Summarisation Based on Deep Learning Models

A remarkable class of summarisation algorithms relies strongly on Deep Learning and word embedding models. The key idea is to exploit domain-specific word

embedding models in order to produce a rich word- and sentence-level document representation. To achieve the aforesaid goal, a large set of documents is required to train the word embedding model (typically, thousands of documents are needed). For example, in graph-based summarisation algorithms (Erkan and Radev 2004) the distance between pairs of sentences can be estimated by computing the distance among the corresponding vectors in the distributed representation of words. An alternative way to compute distances is to obtain the representation of a sentence as the average of all the word vectors composing it.
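A compact sketch of the averaging strategy is given below, reusing a trained gensim model such as the one above; the small smoothing constant in the cosine is a numerical safeguard added for illustration.

```python
import numpy as np

def sentence_vector(sentence, wv):
    """Represent a sentence as the average of its word vectors
    (wv is a gensim KeyedVectors object, eg model.wv)."""
    vecs = [wv[w] for w in sentence.lower().split() if w in wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(wv.vector_size)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# eg: cosine(sentence_vector("the court dismissed the appeal", model.wv),
#            sentence_vector("the tribunal rejected the appeal", model.wv))
```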

ii. Abstractive Models

Various deep learning architectures have been proposed to build abstractive summaries of large document collections.27 The most common strategies are based on the Encoder-Decoder model, where the input documents are first encoded into a fixed-length representation, which in turn is used by the decoder to reconstruct the original text snippets. Alternative solutions rely on the concept of pointers,28 which help identify the most representative words in the source text, on reinforcement learning models, and on hierarchical models that build hierarchical structures on top of the document sentences.
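In practice, pre-trained encoder-decoder models can be invoked in a few lines. The sketch below uses the Hugging Face transformers pipeline; the choice of facebook/bart-large-cnn (a news-trained model) and the sample passage are illustrative assumptions rather than recommendations for legal text.

```python
from transformers import pipeline

# Encoder-decoder abstractive summariser (BART fine-tuned on news data)
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

passage = (
    "The appellant challenged the lower court's ruling on the enforceability "
    "of the arbitration clause. The court held that the clause was validly "
    "incorporated into the contract and dismissed the appeal, awarding costs "
    "to the respondent."
)
result = summarizer(passage, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```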

C. Text Postprocessing

The generation of document summaries is instrumental for a number of practical applications. First, the exploration of a concise and easy-to-read summary could save time and enhance the quality of the knowledge retrieval process. For example, in e-learning activities the automatic generation of summaries of teaching materials has been shown to improve the engagement of students in higher-level courses.29 In the legal domain, similar applications could be devised to support the exploration and reading of long, complex documents.

The output summary could be tailored to specific end-user needs. For instance, each summary could reflect a different facet of the main document topic. This is particularly evident while summarising real-world events which have had a significant impact on several real-life aspects (eg, the effect of the Covid-19 pandemic on healthcare systems, the economy, psychology, etc). While summarising

27 J Zhang, Y Zhou and C Zong, 'Abstractive Cross-Language Summarisation via Translation Model Enhanced Predicate Argument Structure Fusing' (2016) 24(10) IEEE/ACM Transactions on Audio, Speech, and Language Processing 1842–1853.
28 A See, PJ Liu and CD Manning, 'Get To The Point: Summarisation with Pointer-Generator Networks' in R Barzilay and M-Y Kan (eds), Association for Computational Linguistics (2017) 1073–1083.
29 E Baralis and L Cagliero, 'Learning From Summaries: Supporting e-Learning Activities by Means of Document Summarisation' (2016) 4 IEEE Transactions on Emerging Topics in Computing 416–428.

legal documents, lawyers could pay attention to specific sub-topics or events that are deemed primarily relevant to their study.30 Notably, the feedback provided by the end-user could in turn be exploited to refine the summarisation process for the same document collection or for similar collections ranging over the same topic. The idea behind this is to collect quantitative feedback scores on the quality of the generated summary (separately for each unit of text) and then back-propagate such relevance feedback to the summarisation algorithm in order to enhance the quality of the newly generated models.31

Text summarisation could also support the accessibility of textual resources when the device used to access the electronic documents is not suitable for exploring the raw documents, the bandwidth used to transmit the documents is particularly limited, or the mobile or Web interface does not allow users to manage long text snippets. Producing a concise summary of a verbose document could enable the exploration of contents whose accessibility would otherwise be particularly limited without ad hoc algorithmic support (Wang 2009).

IV. Legal Document Summarisation

This section presents the main research contributions related to the summarisation of documents from the legal domain. Specifically, section IV.A briefly describes the main challenges relative to the transformation and analysis of legal documents. Section IV.B enumerates the main summarisation approaches tailored to the legal domain. Finally, section IV.C presents the tools that have been proposed to support lawyers in their professional activities.

A. Main Challenges

Summarising legal documents entails addressing various challenges related to their peculiar characteristics:32
–– Document size: legal documents (eg, judgement documents, acts, contracts) are, on average, longer than those analysed in other domains. This makes the problem of identifying the most relevant content particularly appealing yet challenging.

30 FL Wang, R Kwan and SL Hung, 'Multi-document Summarisation for E-Learning' in FL Wang, J Fong, L Zhang and VSK Lee (eds), ICHL (Springer, 2009) 353–364.
31 L Cagliero, P Garza and E Baralis, 'ELSA: A Multilingual Document Summarisation Algorithm Based on Frequent Itemsets and Latent Semantic Analysis' (2019) 37(2) ACM Transactions on Information Systems 21.
32 D Jain, MD Borah and A Biswas, 'Summarisation of legal documents: Where are we now and the way forward' (2021) 40 Computer Science Review 100388, doi.org/10.1016/j.cosrev.2021.100388.

–– Document structure: legal content is often organised into complex hierarchical structures, with many subsections and paragraphs. Luckily, within the same legal subdomain, the internal structure of the documents is often repetitive.33
–– Vocabulary: legal terms are peculiar and quite different from those used in most other domains. This calls for ad hoc word representations that are able to capture the inherent semantics behind the text.
–– Context dependence: the content of legal documents must be tailored to the context in which it is produced, because different legal institutions may have peculiar writing styles and use particular notations.
–– Citations: intra- and inter-document links and bibliographic references are very important and need to be considered in order to properly summarise the content of legal documents.

B. Domain-specific Summarisation Approaches

The summarisation methods tailored to the legal document type can be classified into the following categories:
–– Citation-based approaches: this class of summarisation algorithms entails generating catchphrases that convey most of the information provided by a set of cited sentences in the source documents. They mainly rely on the concept of citation, where a citing sentence, called a citance, makes implicit or explicit reference to a cited sentence within the same document or in another (pertinent) document. Citances are properly combined with the cited sentences in order to form a summary.34 The key idea is to use both incoming and outgoing citation information to summarise legal content, because references explicitly mention the most salient content. This kind of approach has proved to be effective for legal professionals looking for relevant precedents of a preceding case.
–– Rhetorical role-based approaches: this class of summarisers aims at understanding the rhetorical role associated with each sentence.35 Rhetorical roles help in aligning the sentences that are associated with the same rhetorical roles in the final summary generation, thus making the final summary more readable

33 C Grover, B Hachey, I Hughson and C Korycinski, 'Automatic Summarisation of Legal Documents' in J Zeleznikow and G Sartor (eds), ICAIL (ACM, 2003) 243–251.
34 F Galgani, P Compton and A Hoffmann, 'Citation Based Summarisation of Legal Texts', Pacific Rim International Conference on Artificial Intelligence (Springer, 2012) 40–52.
35 M Saravanan, B Ravindran and S Raman, 'Improving legal document summarisation using graphical models' (2006) 152 Frontiers in Artificial Intelligence and Applications 51.

and coherent.36 Rhetorical roles can be used both to identify the main document theme37 and to extract the sentences that are relevant to each theme.38
–– Ripple-down rules-based approaches: this class of methods is mainly based on a set of hand-crafted rules, which is initially created by domain experts with the help of computer scientists.39 Then, a refined version of the initial rule-based model is produced via incremental updates. Rules can be easily adapted and replaced with new ones whenever a failure occurs. The rule generation task is obviously time-consuming. However, the process does not require any further data annotation step.

C. Applications of Legal Document Summarisation

Various applications and tools based on text summarisation have been proposed to address specific issues related to legal data management. This section overviews the main examples of applications and tools presented in the literature.
–– Summarisation of court debate dialogues: this is a relevant application of text summarisation techniques, and it is particularly challenging due to the inherent characteristics of dialogue. Specifically, court debates are typically lengthy and highly repetitive. Therefore, an overall summary could speed up the review of past debates, thus improving knowledge retrieval and exploitation. For example, one line of work modelled sentence relationships within each document as directed graphs, which can be exploited to compress the salient content based on graph node connectivity and similarity.40 Conversely, the works presented in Hachey41 and Kanapala42 proposed to adopt a supervised machine learning approach, where the goal is to predict whether a sentence is worth including in the summary or not based on the content of previously annotated documents. Inspired by the advances in Deep Learning architectures, the studies recently

36 A Farzindar and G Lapalme, 'LetSum: An Automatic Legal Text Summarising System', Legal Knowledge and Information Systems, JURIX (2004) 11–18.
37 S Teufel and M Moens, 'Summarising scientific articles: experiments with relevance and rhetorical status' (2002) 28(4) Computational Linguistics 409–445.
38 B Hachey and C Grover, 'A Rhetorical Status Classifier for Legal Text Summarisation' in Text Summarisation Branches Out (2004) 35–42.
39 SB Pham and A Hoffmann, 'Incremental Knowledge Acquisition for Building Sophisticated Information Extraction Systems with KAFTIE', International Conference on Practical Aspects of Knowledge Management (Springer, 2004) 292–306.
40 MY Kim, Y Xu and R Goebel, 'Summarisation of Legal Texts with High Cohesion and Automatic Compression Rate' in Y Motomura, A Butler and D Bekki (eds), JSAI-isAI Workshops (Springer, 2012) 190–204.
41 B Hachey and C Grover, 'Extractive summarisation of legal texts' (2006) 14 Artificial Intelligence and Law 305–345.
42 A Kanapala, S Jannu and R Pamula, 'Summarisation of legal judgments using gravitational search algorithm' (2019) 31 Neural Computing and Applications 8631–8639.

proposed in Tran43 and (Zhong 2019) focused on studying and developing ad hoc multi-layer neural network models to capture the key correlations within court debate texts.
–– Judgement theme extraction: judgements are multi-faceted legal documents characterised by a fairly complex structure. In (Farzindar and Lapalme 2004) and in Merchant and Pande44 the authors investigated how to automatically determine the main themes of a judgement. The summary consists of text fragments that mainly characterise each document theme.
–– Legal entity recognition: the recognition of relevant entities within the text is a challenging task because an entity describes a concept pertinent to a specific domain. In the legal domain, many entities have a peculiar meaning, which significantly differs from their meaning in other domains. Some attempts to support domain-specific entity recognition and disambiguation have been made.45 These solutions are intended for legal experts who are quite familiar with natural language processing tools. Their effectiveness was tested on benchmark datasets and assessed against expert-generated outputs.

V. Conclusions and Future Research Directions

The unexpressed potential of Deep Learning and Natural Language Processing techniques has emerged in most application domains in which the digitalisation process has produced its strongest impact. To gain insights into electronic documents, traditional approaches based on linguistic models have now become obsolete thanks to the massive use of distributed text representations, the introduction of sophisticated data cleaning and preprocessing steps, the study and development of supervised and unsupervised learning algorithms, and the evolution of the multimedia interfaces useful for data retrieval and exploration. In the abovementioned scenario, the role of Artificial Intelligence in supporting expert decisions has become primary. An appealing task is to convey the most relevant information hidden in the analysed documents into a synthetic yet informative description. Automated text summarisation algorithms address the above issue by analysing the syntax and the semantics behind the text. This chapter has overviewed the main research efforts devoted to electronic document summarisation. Furthermore, it has deepened the analysis of the applications of summarisation techniques to the legal domain.

43 VD Tran, ML Nguyen and K Satoh, 'Automatic Catchphrase Extraction from Legal Case Documents via Scoring using Deep Neural Networks' (2018) CoRR, abs/1809.05219.
44 K Merchant and Y Pande, 'NLP Based Latent Semantic Analysis for Legal Text Summarisation', ICACCI (IEEE, 2018) 1803–1807.
45 P Bhattacharya, K Hiware, S Rajgaria, N Pochhi, K Ghosh and S Ghosh, 'A Comparative Study of Summarisation Algorithms Applied to Legal Case Judgments' (2019).

Within this context, the peculiar characteristics of the input documents make the summarisation process more and more challenging. Specifically, legal documents are lengthy, their content is highly repetitive, the document structure is fairly complex, and the citation network is dense and potentially hard to explore. To overcome the above issues, in the last decade the Deep Learning and Natural Language Processing communities have made a joint effort to tailor the summarisation process to these peculiar document characteristics. Under this umbrella, previous studies have been devoted to deeply analysing the links between citing and cited sentences, to recognising the main document themes, and to categorising the analysed text using predefined labels.

Despite the remarkable progress in the development and use of machine learning-based solutions, several research directions deserve further investigation and future development. The most relevant ones are summarised below:
–– Since acts, laws and contracts are collected worldwide, there is an increasing need to combine the information provided by legal documents written in different languages.
–– Judgements, laws and contracts are subject to periodic updates. To keep track of the main variations in document content, the use of timeline and update summarisation techniques is particularly appealing.
–– The exploitation of the knowledge extracted from legal documents needs ad hoc visualisation and exploration tools focused on supporting domain experts in the knowledge discovery process.
–– The summarisation process can focus not only on textual documents, but also on sets of multimodal resources including videos, images, speeches, and social tags.
–– The recent progress in abstractive summarisation leaves room for significant extensions of existing conversational AI agents, whose main goal is to generate content in natural language and to mimic the interaction between humans by replying to specific queries.
–– The involvement of the domain expert community in the digitalisation process could enable the manual annotation of a significant portion of legal documents, which in turn may be used to train more effective deep learning models.

PART III: (NON-)PERFORMANCE, REMEDIES AND DISPUTE RESOLUTION


11 Remedies for Artificial Intelligence
CRISTINA PONCIBÒ

I. Introduction

The development of emerging technologies has led to the increasingly widespread use of artificial intelligence systems (AI systems) endowed with the aptitude to replace humans in those activities in which, in the past, human intervention was considered indispensable, precisely because they involved the use of sensory, cognitive, and intellectual faculties.1 It is not surprising, therefore, that the application of AI systems is becoming more relevant with respect to contract law.2 In particular, this development proceeds from a liberal conception which suggests that humans are too slow, too costly, too fallible, too unproductive, not always rational, nor even always reasonable. In contract law, AI conveys values of profitability, cost reduction, process optimisation and disintermediation. However, we should be aware of these values and limit this logic to specific cases where hyper-automation may be acceptable and useful (eg basic and repetitive contracts). Indeed, the activity involved in concluding contracts (and in executing them) appears to us, more than others, as typical of humans, aimed as it is at implementing choices that respond to essentially human needs and interests, and made to achieve economic purposes. Such activity establishes relationships between humans, which do not cease to be such even when the parties are other than physical persons.

The chapter aims at exploring some of the gaps and uncertainties surrounding the adaptation of traditional contract law to this new reality, such as the idea of contracts without consent.3 Precisely, the chapter deals with the law of remedies in cases where AI systems are used by humans to contract. Specifically, our analysis follows a functional approach to contract law and, therefore, favours the aspect of performance over the classic investigation of contract formation.4

1 HGM Eidenmueller, 'The Rise of Robots and the Law of Humans' (26 March 2017), Oxford Legal Studies Research Paper No 27/2017, at dx.doi.org/10.2139/ssrn.2941001.
2 See chs two and three in this volume.
3 O Ben-Shahar, 'Contracts without Consent: Exploring a New Basis for Contractual Liability' (2004) 152 University of Pennsylvania Law Review 1829; N Irti, 'Scambi senza accordo' [1998] Rivista Trimestrale di diritto e procedura civile 347.

The reason is that there is a gap in the literature: the vast majority of institutions and legal scholars are mainly focusing on issues of AI systems and extra-contractual liability,5 while neglecting the case of contract law, with a few recent exceptions.6 In particular, the chapter addresses and criticises AI's ideologically driven promise of putting remedies to an end.7

Additionally, the intervention of AI in the domain of contracts will become even more significant in the future, when connected refrigerators will order the groceries for the upcoming week by analysing their owner's consumption data.8 There is thus, most certainly, a form of progressive automation of contract law with the development of computer science. More interestingly, when the machine is not simply an automaton, but becomes capable of making more or less complex decisions on its own (as already happens in algorithmic contracts, especially those used in financial markets for high-frequency trading activities), we are no longer faced with simple automation, but with something that is already suited to the term autonomy. In the last stage, AI systems will become an autonomous contractual party – autonomous in the literal sense of auto-nomos, 'governed by its own laws'. Obviously, to be party to a contract, one needs the capacity to contract, in other words, a legal personality.9

Interestingly, this is why the smart contract on the blockchain does not fit within the meaning of advanced AI systems. Blockchain is not artificial intelligence in itself, but it can assist in the storage and secure transmission of authenticated information. Moreover, smart contracts carry out the automated execution of a right or a contractual obligation whose conditions and effects have been predetermined by a human. However, the smart contract may rely extensively on artificial intelligence in the future and become, in this way, an example of the adoption of AI systems for contracting.10

From the perspective of a comparative lawyer, it should also be noted that such technology is problematic for both the common law and the civil law of contracts. Thus, in a comparative perspective, AI raises concerns on both sides. Indeed, it seems that the use of AI systems in contracting has a particularly strong and negative

4 A Schwartz and D Markovits, 'Function and Form in Contract Law' in AS Gold, JCP Goldberg, DB Kelly, E Sherwin and HE Smith (eds), The Oxford Handbook of the New Private Law (Oxford, 2020).
5 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)), at www.europarl.europa.eu/doceo/document/TA-9-2020-0276_EN.html.
6 F Martin-Bariteau and M Pavlovic, 'AI and Contract Law' in FM Bariteau and T Scassa (eds), Artificial Intelligence and the Law in Canada (LexisNexis Canada, 2021).
7 E Severino, The Destiny of the Technique (Il destino della tecnica) (BUR Milan, 2019) 8–9.
8 G Howells and C Twigg-Flesner, 'Interconnectivity and Liability: AI and the Internet of Things' in L Di Matteo, M Cannarsa and C Poncibò (eds), Artificial Intelligence: Global Perspectives on Law & Ethics (forthcoming), at ssrn.com/abstract=3843134.
9 The discussion about AI personality is well developed in legal scholarship; see B Solum, 'Legal Personhood for Artificial Intelligences' (1992) 70 North Carolina Law Review 1231.
10 IR MacNeil, 'The Many Futures of Contracts' [1973] Southern California Law Review 691.

impact on the civil law doctrine of specific performance (section V). However, AI promises to offer a new paradigm – based on the paramount importance of efficiency – for both common and civil lawyers. It seems that our comparison now has to shift from contract law to a global technology that practically offers a new paradigm to manage transactions, especially in a cross-border dimension.11

On such a basis, the chapter gives an overview of the problems raised by the use of AI and, in section I, discusses the adaptability of the remedial apparatus by developing a preliminary analysis of AI systems' impact on defects in consent, force majeure, specific performance and damages. In section II, the chapter considers the need to conceptualise 'new' solutions, tailored to the peculiarities of AI systems, to prevent and/or address contractual pathologies (ie coding legal constructs into AI algorithms and the quest for explainability).

II. Overview of the Problems Raised by the Use of AI

The presence of AI systems – favouring hyper-automation, efficiency (self-execution) and autonomy – poses serious challenges to our understanding of contract law and also represents a novelty with respect to previous studies concerning smart contracts and contract automation.12 Additionally, case law has distinguished the two scenarios: specifically, in B2C2 Ltd v Quoine Pte Ltd, the court tried to draw a precise distinction between deterministic computers, on the one hand, and artificial intelligence on the other, noting that the latter could 'be said to have a mind of its own'.13

First, technological advancement urges contract law scholars to discuss contractual activity that is, above all, hyper-automated14 – dispensing, one would say, even with the intermediation of the law itself. Due to their characteristics, these contractual relationships seem to have no need for the law to produce their results, because they self-execute (section V), and therefore pose themselves as events that are not placed under the aegis of the legal system, but are independent of it. So much so that they are likely to give rise to business that self-executes ineluctably, even against the law. Moreover, other aspects can be added to the extreme automation of the contractual affair: advances in technologies related to artificial intelligence suggest that the latter may be able (and to a certain extent already is able) to replace the intervention of humans, even in those moments or aspects of the contractual

11 The author has used the expression Lex Mercatoria ex Machina to describe the reliance on global technologies (such as blockchain) to manage cross-border commercial transactions in C Poncibò, 'Blockchain in Comparative Law' in B Cappiello and G Carullo (eds), Blockchain, Law and Governance (Springer International Publishing, 2020) 137–155.
12 L Di Matteo, M Cannarsa and C Poncibò, The Cambridge Handbook of Smart Contracts, Blockchain Technology and Digital Platforms (Cambridge University Press, 2019).
13 B2C2 Ltd v Quoine Pte Ltd [2019] SGHC(I) 3 (Quoine), para 206.
14 See ch 2 in this volume about hyper-autonomous contracting systems.

affair that require the use of not only cognitive, but also evaluative faculties, and which consist in the implementation of choices (obviously oriented by reference criteria, but still integrating a decision-making activity).15
Second, in AI systems that promise to generate contracts that are 'perfect' and self-executing, the machine should ensure, among other things, efficient contractual enforcement. According to AI enthusiasts, this circumstance reduces both the risk that the service is not (materially) performed and the expenditure of resources on its execution, but it does not necessarily guarantee correct and complete fulfilment. AI systems are expected to ensure the efficient execution of the contractual obligation and thus avoid the breach of contractual obligations and, finally, the need to find a remedy. In fact, the main promise of AI enthusiasts consists in putting an end to the entire set of remedies aimed at compelling promisors to perform – and, surely, specific performance among them. In practice, an AI system, by successive comparisons and by identifying correlations, creates emerging standards to ensure the perfect execution of the contract at issue. This is why the law tends to boil down to a set of data: the time has come for 'data rights'. This law hardly resembles the law we have known and that – certainly for good reasons – we would like to keep for a long time to come. The 'law of AI' shares only a few things – ie only its normative effect – with the law produced by states following constitutional procedures.16 The latter has a general and impersonal scope which ensures the absence of any discrimination and, for the sake of impartiality, it takes very little account of individual situations. There exists, on the other hand, hardly any equality in the context of AI. AI, on the contrary, impels a movement to individualise the rules of law, including the rules governing contracts.17
Third, it should be underlined that AI systems show an increasing level of autonomy from their human masters (programmers, users, owners, manufacturers). In the fields of artificial intelligence and robotics, the term 'autonomy' is generally used to mean the capacity of an artificial agent to operate independently of human guidance. It is thereby assumed that the agent has a fixed goal or 'utility function' with respect to which the appropriateness of its actions will be evaluated.18 In fact, AI systems can be programmed to make decisions relevant to contract law on their own, without human intervention.19 Thus, a machine can decide to enter into a contract where predetermined conditions are met. This is the case with algorithmic trading, which is said to represent a large part of the transactions
15 U Pagallo, The Laws of Robots: Crimes, Contracts, and Torts (Springer, 2015) 48–60.
16 M Ebers and S Navas, Algorithms and Law (Cambridge University Press, 2020).
17 C Busch and A De Franceschi, Algorithmic Regulation and Personalized Law. A Handbook (Hart, 2020).
18 W Totschnig, 'Fully Autonomous AI' (2020) 26 Science and Engineering Ethics 2473–2485. The author underlines that the concept of autonomy for AI is different from our understanding of autonomy for humans.
19 DG Johnson and M Verdicchio, 'AI, agency and responsibility: the VW fraud case and beyond' (2019) 34 AI & Society 639–647.

made by financial institutions around the world. The computer system decides alone, depending on hundreds of parameters, to buy or sell a given financial product at a given price, volume, moment and set of market conditions. AI systems can also decide to perform a contract that has already been concluded. Accordingly, for one author, it is precisely this autonomy that marks the characteristic trait of AI systems in concluding and executing contracts and makes it possible to distinguish them from other types of computer programs in the contracting process, such as in the much-debated case of smart contracts and blockchain-based contracts.20 To clarify the idea, advanced AI designates a machine capable of making an autonomous, distinct choice independent of the person who designed it or who uses it.21 AI systems therefore assume that the system performs data analysis to make the best choice according to the will of the programmer, but a choice that the latter could not make when programming the intelligent software.22 In this respect, legal scholars have also coined the curious expression 'self-driving' contracts to describe cases where 'Parties agree to an outcome they want to achieve and rely on machine-assisted analytics to direct them toward that outcome.'23
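To make the automation–autonomy distinction concrete, a minimal sketch in code may help. It is purely illustrative – the thresholds, features and market data are invented for the example and do not reproduce the systems at issue in Quoine – but it shows why the 'mindset of the programmer' is directly traceable in the first agent and far less so in the second:

```python
# Illustrative contrast between a deterministic trading rule (automation)
# and a policy induced from data (a step towards autonomy).
# All thresholds, features and data are invented for this example.

from sklearn.linear_model import LogisticRegression

def deterministic_agent(price):
    """Automation: the rule was fixed by the programmer in advance,
    so every output is traceable to the coder's original 'mindset'."""
    return "BUY" if price < 100.0 else "SELL"

# Learning agent: the programmer supplies only a goal (classify profitable
# trades) and historical observations; the decision rule itself is induced.
history = [[95.0, 0.2], [101.5, -0.1], [98.2, 0.4], [103.0, -0.3]]  # [price, momentum]
outcomes = [1, 0, 1, 0]  # 1 = buying turned out profitable, 0 = it did not

model = LogisticRegression().fit(history, outcomes)

def learned_agent(price, momentum):
    """The mapping from market state to decision emerges from data,
    not from a rule the programmer wrote down."""
    return "BUY" if model.predict([[price, momentum]])[0] == 1 else "SELL"

print(deterministic_agent(97.0))   # traceable to a coded rule
print(learned_agent(97.0, 0.3))    # traceable only to data and training
```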

III. Defects in Consent

Automatically executed contracts seriously undermine the concepts and regimes of traditional contract law, such as consent. They also contradict some of the more usual procedural requirements: notice, formal reminders, reasonable time, handwritten mentions, notification, motivation, etc. From the perspective considered in this section, we underline that legal remedies can be activated only if, and within the limits in which, the malfunction of the AI system has determined a defect in the contract, evaluated in terms of the purposes, intentions and expectations of the human party to the contract. These parameters cannot be found in the 'psychology' of the machine, since the machine is not the contractor: it is only the author of the cognitive and volitional processes to which the human contractor entrusts the conclusion of the contract. The real problem, therefore, is that of mutual understanding between AI systems and the human contractual party, so that the expectations of the latter correspond to the results of the contractual activity conducted by the machine. Legal remedies, then, can be spoken of only in relation to the divergence between human expectations and the contractual activity conducted by the machine itself.
20 J Linarelli, 'Advanced Artificial Intelligence and Contract' (2019) 24 Uniform Law Review 330.
21 About the difference between automation and autonomy see also D Nersessian and R Mancha, 'From Automation to Autonomy: Legal and Ethical Responsibility Gaps in Artificial Intelligence Innovation' (2020) 27 Michigan Technology Law Review 55 at repository.law.umich.edu/mtlr/vol27/iss1/3.
22 European Securities and Markets Authority (ESMA), Final Report, 19 December 2014 (ESMA/2014/1569).
23 A Casey and A Niblett, 'Self-Driving Contracts' (2017) 43:1 Journal of Corporation Law 6.

Here our point consists in noting that contract law is also known to be very psychological: vulnerable to defects in consent (eg mistakes, biases) and to the cognitive limitations of the parties,24 and open to certain moral considerations, with the requirement of good faith innervating many concrete solutions in civil law jurisdictions. The paramount criterion of AI systems is efficiency – not good faith. Thus, the machine will be able to demonstrate to a human party that it is advantageous not to perform the contract.
First, it is clear that the lack of human psychology in AI contracting makes it difficult to apply the traditional remedies for defects in consent, such as mistake, fraud, threats and unfair exploitation.25 Having said this, the real problem is to adapt to the use of AI systems in concluding a contract all those remedies that contract law provides for cases in which there is a divergence between the will of the contracting parties and the results that the contract has produced. Surely, the practical ends, intentions and expectations that are relevant are exclusively those of the human party. Indeed, the real problem is, if anything, that of the mutual understanding between AI systems and the human party that uses them, that is, the actual correspondence of what the machine communicates to the outside (or the message it receives) to the expectations of that human party. However, this is a profile entirely internal to the sphere of the individual, which does not affect the validity or effectiveness of the contract concluded with the counterparty. The principles of self-responsibility and reliance prevent any divergence between the subjective expectations of the individual using AI and his declaration, as conveyed and processed by the artificial intelligence, from having any relevance.26 To clarify, the advantage (if any) of AI contracts is that the use of artificial machines can sometimes allow a sort of objective verifiability of the processes and standards leading to the conclusion of a contract (section VII).
Second, when an autonomous artificial intelligence talks with a similar AI, or with systems or programs that 'speak' its own language, many of the questions that arise in human bargaining – hidden dissent, the misunderstanding of others' declarations, the identification of the party, the conformity between proposal and acceptance (all issues that give rise to various pathologies of consent) – can be addressed through much more secure and objective parameters of linguistic adequacy and congruence in the communication between the parties (ie solving more easily the problems that we usually entrust to the interpretation of the contract). If, as has been said, the contract itself is a machine, entrusting its operation to AI will sooner or later (if the expression is legitimate) lead to an

24 MA Eisenberg, 'The Limits of Cognition and the Limits of Contract' (1995) 47:2 Stanford Law Review 211–259.
25 J Cartwright and M Schmidt-Kessel, 'Defects in Consent: Mistake, Fraud, Threats, Unfair Exploitation', in G Dannemann and S Vogenauer (eds), The Common European Sales Law in Context: Interactions with English and German Law (OUP, 2013).
26 B2C2 Ltd v Quoine Pte Ltd [2019] SGHC(I) 3; Chwee Kin Keong and Others v Digilandmall.com Pte Ltd [2005] 2 LRC 281.

instrumental control of this operation and will allow the application of preordained remedies to protect the parties, on the basis of verifying compliance with the said parameters.27
In consideration of the above, to what extent the doctrine of defects in consent could be relevant for more advanced forms of AI and, most importantly, how far it could adapt to such an innovation is an open question. It should also be noted that AI systems promise to '(…) eliminate errors by automating key aspects of the contract review process'.28 Human errors, we stress – only to add to, and/or substitute, human cognitive biases and limitations with new AI errors and biases.29 To provide an example, AI systems show a risky tendency to discriminate against humans in fixing contractual prices (personalisation of prices).30
In the practice of the courts, in B2C2 Ltd v Quoine Pte Ltd, the court decided that when the law is faced with a contention that a contract made by and between two computer systems, acting as programmed but otherwise without human intervention, is void or voidable for mistake, it is necessary to have regard to the mindset of the programmer when the relevant programs were written, not at the later time when the contracts were entered into. Apart from Quoine and a few other cases, the case law concerning the use of AI systems in contracting remains limited and subject to debate.31 In this respect, it should also be underlined that the approach of Quoine applies to automated systems – not to fully autonomous AI, as specified before, nor to the probabilistic systems of the near future.32 Thus, we could expect that a court called upon to render a judgment on a breach of contract in the presence of an AI system – in both common law (as in Quoine) and civil law – will look to the state of mind of the programmer or, depending on the circumstances, will look to the (typically) opaque subroutines of the algorithm during subsequent system operation to determine knowledge, and attribute that to the relevant party.
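The suggestion made in this section – that machine-to-machine bargaining can replace interpretive questions with objective parameters of congruence between the parties' messages – can be sketched as follows. The message schema is invented for illustration; real systems would use richer protocols, but the point that conformity between proposal and acceptance becomes a mechanical, loggable check carries over:

```python
# Minimal sketch of machine-to-machine offer and acceptance.
# When both programs 'speak' the same structured language, conformity
# of the acceptance to the proposal is objectively verifiable.
# The message schema is invented for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class Offer:
    item: str
    quantity: int
    unit_price: float

def conforms(offer, acceptance):
    """Mirror-image check: the acceptance must match the offer exactly.
    Between humans this raises questions of interpretation; between
    two programs the comparison is exact and can be logged."""
    return offer == acceptance

offer = Offer(item="widget", quantity=100, unit_price=2.50)
acceptance = Offer(item="widget", quantity=100, unit_price=2.50)
counter = Offer(item="widget", quantity=100, unit_price=2.40)

print(conforms(offer, acceptance))  # True: identical terms, contract formed
print(conforms(offer, counter))     # False: a counter-offer, objectively detected
```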

IV. Force Majeure and Frustration

It is also important to note that the use of AI systems in the contracting process results in a sort of necessary 'rigidification' of the contract that denies any possible change,
27 M Loos, 'Machine-to-Machine Contracting in the Age of the Internet of Things', in R Schulze, D Staudenmayer and S Lohsse (eds), Contracts for the Supply of Digital Content: Regulatory Challenges and Gaps (Bloomsbury, 2017) 59–82.
28 Icertis at www.icertis.com/contract-management/ai-applications.
29 P Hacker, 'Manipulation by Algorithms. Exploring the Triangle of Unfair Commercial Practice, Data Protection, and Privacy Law' [2021] European Law Journal.
30 See ch 12 in this volume. P Hacker, 'Teaching fairness to artificial intelligence: Existing and novel strategies against algorithmic discrimination under EU law' (2018) 55:4 Common Market Law Review 1143–1185.
31 Lord Sales, UK Supreme Court, 'Law and the Digital World', Cour de Cassation seminar: Being a Supreme Court Justice in 2030, 16 April 2021.
32 B2C2 Ltd v Quoine Pte Ltd [2019] SGHC(I) 3 (Quoine), paras 208–211.

modification, adaptation or renegotiation. Consequently, it seems that only very basic transactions can be effectively coded and instructed through the machine. In other words, the decisions of the parties during the performance of the contract may need that flexibility which a contract based on hyper-automation loses.
The use of AI systems in contract law has the consequence of making it practically impossible for the parties to adapt a contract to a change in circumstances and, thus, to rely on the well-established doctrines of frustration and force majeure.33 In fact, the characteristic of humans is that they are flexible, adaptable, and permeable to influences and discussion – characteristics that are more difficult for a computer program to comprehend. However, the decisions of the parties during the execution of the contract also need that flexibility which automation loses. It is therefore very difficult to reverse a contract automatically executed, due to the nature of AI systems. Of course, as already stressed with respect to smart contracts, the parties may decide to include a force majeure clause in programming the machine in cases where AI systems are acting as agents of the parties. However, the major disadvantage of this approach is that it effectively denies most of the benefits that hyper-automation would have, in particular by breaking the automatic execution of the contract. An AI system is not capable of reaching human adaptability, limiting itself to performing the operations the programmer has defined for it, so all assumptions must be considered when the contract is initially drafted. Thus, the future is entirely written from the start and there is no such thing as randomness and unpredictability. To put it differently, the force majeure regime or the frustration doctrine can hardly be applied to a contract operating on the basis of hyper-automation. Here, accordingly, the steps must be taken by the parties in advance, before they can be aware of the situation that would justify them.
It seems that hyper-automation in executing contracts does reveal a change in contractual paradigm, permeable to an ultra-liberal ideology where algorithmic efficiency prevails over human reason and adaptability. Paradoxically, the use of an AI system, and thus automation, deprives the unilateral decisions of the parties to a contract of part of their usefulness. In fact, these unilateral powers in contractual matters have developed essentially because it is common sense that not everything can be decided and ruled upon at the conclusion of the contract. Thus, it is necessary to be able to make choices adapted to the circumstances, to introduce more flexibility into the contractual relationship. Such an argument is particularly strong in consideration of the recent times that have forced parties to adapt the fulfilment of their contractual obligations to the changing circumstances of the pandemic. However, automation forces parties to define, from the conclusion of the contract, the terms triggering their unilateral powers – to 'freeze' their decisions on
33 Similar issues are discussed for the case of smart contracts by E Tjong Tjin Tai, 'Force Majeure and Excuses in Smart Contracts' (2018) 6 European Review of Private Law 787–804.

Remedies for Artificial Intelligence  209 the day when the contract is binding and therefore leads to them depriving themselves of part of their decision-making powers with respect to contract adaptation and modification.

V. Specific Performance

It is well known that the law of contractual remedies includes both remedies aimed at compelling promisors to perform the very thing they have undertaken to do (namely, specific performance) – to deliver goods, to convey real property, to complete a construction project, to refrain from competing with the other party, and so forth – and substitutionary monetary remedies, primarily damages. This is also a doctrinal and comparative topic: whereas the primary remedy in common law systems is damages, in civil law systems it is specific performance. However, it is unclear to what extent the differing points of departure actually result in differences in practice. A recent empirical study may offer elements of reflection in this regard.34
In particular, AI promises to facilitate, verify and enforce the performance of contracts. Put differently, AI-generated contracts are self-executing: the machine ensures, among other things, their enforcement. According to AI enthusiasts, this circumstance reduces both the risk that the service is not – materially – performed and the expenditure of resources on its execution, but it does not necessarily guarantee correct and complete fulfilment.35 In this sense, we disagree with the powerful narratives according to which AI contracts promise to create '(…) a world where specific performance of contracts is no longer a cause of action because the contracts themselves automatically execute the agreement of the parties.' Some say that a further consequence of this is that, '[s]omeday, these programs may replace lawyers and banks for handling certain common financial transactions.'36

In our view, this is mere narrative. Yet even the most performant of these systems remains imperfect – much like the human decision makers they seek to emulate. Accepting imperfection also means accepting the possibility that AI systems will

34 L Yehuda Anidjar, O Katz and E Zamir, 'Enforced Performance in Common-Law Versus Civil Law Systems: An Empirical Study of a Legal Transformation' (2020) 68:1 American Journal of Comparative Law 1–54.
35 In civil law, the remedies for breach are ranked in a different order of importance from those in the common law (damages, first). Traditionally, civil law has placed specific performance and termination on a higher footing than damages, in that it permits a creditor to claim either specific performance, where this is still possible, or termination, in all circumstances. The reflections of this section may also be relevant in cases of declaratory and injunctive relief, rectification and rescission.
36 A Hinkes, 'Blockchains, Smart Contracts, and the Death of Specific Performance', 21 May 2019 at www.law.com/insidecounsel/2019/05/21/blockchains-smart-contracts-and-the-death-of-specific-performance/?slreturn=20210406051321.

sometimes breach contracts or cause harm. Additionally, the remedy of specific performance has been claimed in a case of algorithmic trading.37
It is true that the self-execution of a contract has been studied by legal scholars with respect to smart contracts. In our view, AI systems have the capability to enhance the level of automation of the various steps of the contractual process, from entering into the agreement to executing it, and they reach an extreme level of automation.38 Far from being contracts, smart contracts are 'self-executing' computer programs that are developed as applications on a blockchain: once a certain condition foreseen by the programmer occurs, and the information reaches the program via the blockchain, the pre-coded consequence executes. Smart contracts are thus digital automata, computer duplications of a real legal contract, called the 'fiat' contract. Smart contracts can thus automatically code, for example, the execution of the contract, such as an insurance contract that indemnifies a traveller if his plane is late. But just as it can program the automated execution of a contract, the smart contract can schedule non-execution in reaction to a failure to perform the expected service, and so automate the exception of non-performance. Use of the smart contract could also automatically trigger the payment of a penalty clause, or reduce the price. We can likewise consider using the smart contract to automatically block access to an apartment or rental car (thanks to a connected lock) if the rent has not been paid.
Technically speaking, smart contracts are therefore not artificial intelligence, because the conditions, like the effects, are predetermined by humans. But they can lean on artificial intelligence which, for example, would analyse the data required to trigger the smart contract. Correspondingly, the blockchain, used as a data register, is a valuable tool in the service of artificial intelligence, which can rely on it to base its analyses on data designed to be reliable and secure. This is why, moreover, the smart contract on the blockchain does not generally fit within the meaning of advanced AI. Put differently, automation does not necessarily mean autonomy. Generally, smart contracts consist in the execution of an automated right or a contractual obligation whose conditions and effects have been predetermined by a human. However, the smart contract may rely extensively on AI in the future and become, in this way, an example of AI contracting.39
The hyper-automation of contracts that characterises both smart contracts and AI contracts presents some advantages, for example speed and economy, since these contracts free the parties from the time and effort required to process
37 B2C2 Ltd v Quoine Pte Ltd [2019] SGHC(I) 3. In B2C2 Ltd v Quoine Pte Ltd the primary relief sought was specific performance (ie re-execution of the algorithmic trades at prevailing rates). Quoine objected that this would have led B2C2 to an even more advantageous position, given further fluctuations in the market. The court agreed. Considering the volatilities of the market, the court found that the proper relief lay only in damages: 'in these circumstances granting specific performance would cause substantial hardship to Quoine which any potential difficulty in assessing damages does not outweigh'.
38 S Wilkinson and J Guiffre, 'Six Levels of Contract Automation: The Evolution to Smart Legal Contracts – Further Analysis', 30 March 2021.
39 MacNeil (n 10).

cases and the execution of contracts. They then make it possible to comply simply and safely with the formal requirements that the law often imposes for the implementation of the decisions of unilateral contractors: automatic formal notice, or compliance within a time limit before the triggering of a sanction, are all easily codable elements in a smart contract. However, human will is characterised by the fact that it is mobile, adaptable and permeable to influences and discussion – characteristics that are more difficult for a computer program to process. The decisions of a party during the performance of the contract also need that flexibility which automation loses. It is therefore very difficult to reverse a sanction automatically executed by a smart contract, due to the immutable nature of registrations on the blockchain. The game is to provide, in advance, a smart contract that encodes the automatic termination of the contract in the event of non-performance, thereby foreclosing the choice of another sanction for non-performance, such as price reduction. Nevertheless, this still means depriving oneself of any decision: the decision whether to sanction or not. Here the sanction decision is fixed, taken in advance, even before knowledge of the situation that justifies it. Some fear that this automation of sanctions for contractual breaches results from an ultra-liberal ideology of contract law where algorithmic decisions prevail over human reason.40
To clarify, if the decision of a creditor has been taken through artificial intelligence, how can one verify that he has acted in good faith in unilaterally fixing the price or terminating the contract? Good faith, after all, presupposes listening to the explanations and difficulties of the party who did not perform. Perhaps, then, it may be necessary to reintroduce a human decision where the smart contract tended rather to eliminate it: that is to say, offering the possibility to contact a human representative of the contracting party. Thus, the hyper-automation of contractual decisions is not always a source of simplification. Worse, it can even be a source of additional difficulty, because it implies a sort of 'petrification' of contractual relations.
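The mechanics described in this section – the pre-coded indemnity, the automated exception of non-performance, the sanction fixed before the facts are known – can be sketched in a few lines. This is ordinary Python, not actual blockchain code, and every figure and threshold is invented; the point is that all the flexibility discussed above must be resolved once and for all at coding time:

```python
# Sketch of a self-executing clause in the style of a smart contract.
# Conditions and consequences are pre-coded by a human; the program
# merely executes them. All figures and thresholds are invented.

def settle_flight_insurance(delay_minutes, premium_paid):
    """Pre-coded consequence: indemnify the traveller if the plane is late.
    Once triggered, execution is automatic: there is no room left to
    renegotiate, excuse non-performance or assess good faith ex post."""
    if not premium_paid:
        return 0.0        # exception of non-performance, automated
    if delay_minutes >= 180:
        return 400.0      # full indemnity, fixed at coding time
    if delay_minutes >= 60:
        return 100.0      # partial indemnity (a pre-coded price reduction)
    return 0.0            # condition not met: nothing executes

# An oracle (possibly AI-assisted) would feed in the delay; here it is hard-coded.
print(settle_flight_insurance(delay_minutes=200, premium_paid=True))   # 400.0
print(settle_flight_insurance(delay_minutes=45, premium_paid=True))    # 0.0
print(settle_flight_insurance(delay_minutes=200, premium_paid=False))  # 0.0
```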

VI. Damages

Preliminarily, we deem it important to argue that contractual liability plays a role when AI systems are engaged in contracts. In these cases, contractual liability may primarily be established and quantified by considering and, according to the circumstances, moving responsibility up the chain of command from an AI system to its human or corporate masters. In particular, we stress that the liability may concern the owners or users who deploy AI systems, the designers of the (AI) software, namely the programmers, or the manufacturers of the AI-embedded hardware device (ie product liability).41
40 B2C2 Ltd (n 37).
41 AI-based systems can be purely software-based, acting in the virtual world (eg voice assistants, image analysis software, search engines, speech and face recognition systems), or AI can be embedded in hardware devices (eg advanced robots, autonomous cars, drones or Internet of Things applications), or a combination of both.

From a contractual perspective, and depending on the circumstances of the case at issue, liability may arise from the breach of contractual obligations by one or more parties along the aforementioned chain of command. Additionally, product liability, which is grounded in both contractual and non-contractual liability, may also be relevant, although it presents a set of shortcomings that both the EU Commission42 and legal scholars43 are trying to identify and resolve with respect to the AI systems involved. Primarily, there are uncertainties concerning our understanding of the notion of 'product', ie whether it may be extended to cover AI software and not only devices (hardware) in which AI is embedded. Also questionable is the definition of the damage, which was conceived in the context of an EU Directive passed in 1985.44
On this basis, we note that the boundary between contractual and non-contractual liability is often blurred because of the presence of AI, which promotes a sort of convergence of tort and contract law. Needless to say, such a convergence challenges the traditional approaches to liability.
With respect to contractual liability, the proposed approaches consisting of moving along the chain of command are easier said than done when we examine the problem by considering two fundamental issues in assessing and calculating contractual damages to compensate for the loss suffered: the issue of causation and the subsequent issue of damage quantification and foreseeability.
From the first perspective (ie causation), AI decision-making is increasingly likely to be based on algorithms of staggering complexity and obscurity. The developers – and certainly the users – of those algorithms would not necessarily be able to deterministically control the outputs of their AI systems.45 Put differently, the use of AI systems in contracting seems capable of disrupting the chain of causation, and it also poses serious obstacles to the quantification and liquidation of damages. Specifically, AI's designers do not program all the possible scenarios in advance, nor give specific instructions for each of them; rather, they set a goal for the machine and let the AI process the data input, learn from it and decide the best course of action to reach its goal. This leads to the scenario where the AI's programmers may not have an exact understanding of how it reached that goal or what the stages leading to success were; in other words, they cannot explain the
42 EU Commission, Staff Working Document, Liability for emerging digital technologies, Accompanying the document Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions, Artificial intelligence for Europe, SWD/2018/137 final at eur-lex.europa.eu/legal-content/en/ALL/?uri=CELEX%3A52018SC0137.
43 C Twigg-Flesner, Guiding Principles for Updating the Product Liability Directive for the Digital Age (Pilot ELI Innovation Paper), European Law Institute (January 2021) at ssrn.com/abstract=3770796.
44 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, [1985] OJ L210/29–33 at eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A31985L0374.
45 The chapter does not concern consumer contracts, where the liability issue in contracts entered into and executed by AI may assume specific characteristics because of the existence of many mandatory rules in European contract law. See ch 12 in this volume.

AI's 'thought process' leading to the final result. Such an issue will become more complex with the advent of probabilistic computing, that is, computing which is, say, neither deterministic nor autonomous, but based on a probability that something is the correct answer. The same is true for AI's failures, which cannot always be explained or understood by humans. Precisely, the so-called 'black box' nature of AI creates challenges of interpretability and eventually affects causation and the allocation of liability. Indeed, identifying the cause of an AI system's failure to perform is the key element for establishing the material breach and resulting damages, or a link between defect and damage in product liability claims. In the context of judicial proceedings, if a plaintiff cannot go back along the chain of data processing and recreate the circumstances of the AI's reasoning process to understand what led to a specific breach, its action may very well be doomed, as the plaintiff will not be able to fulfil the basic evidentiary requirements regarding negligence and/or causation.
Additionally, from the second perspective (ie damage assessment and calculation), assuming that it were possible to establish causation where AI systems are used in contract law, the calculation of damages before the courts may encounter serious obstacles. It is well known that the purpose of the principle of foreseeability in contract law is to ensure that the damages to be paid by the non-performing party are linked to the contract, do not fall outside the scope of that contract, and are not therefore totally unexpected by the other party. However, as said, the more advanced the AI is, the less predictable it becomes. This is because many forms of modern AI function on the basis of unsupervised learning, as opposed to supervised learning. In cases of supervised learning, the AI's designers (and potentially users, if they participate in the process) have considerable control over the results of an operation, as they provide the basis for the AI's decisions; they can therefore foresee, at least up to a certain point, how the AI will react to new data (eg smartphones that can identify someone in a photo).46 However, in cases of unsupervised learning (such as AI relying on deep learning mechanisms), the algorithms are only given input data without corresponding output values, and are left free to function 'as they please' in order to learn more about the data and present interesting findings. This lack of predictability or foreseeability challenges contractual liability principles: a defendant will only be found liable if it could reasonably anticipate and prevent the potential results of an action.
Moreover, when trying to apply contract law principles to AI's actions, and assuming causation – or better, probability – may be proven, the technology may cause problems in calculating damages. On the side of fault, because AI's actions are unpredictable, it is difficult for a person operating or interacting with it to anticipate the probability that it will eventually inflict harm on others. Thus, the difficulty of determining the optimal precautions that should be put in place by its programmers or operators

compromises the establishment of the ex ante safety measures that should be taken by potential victims engaging with it to deal with the potentially new and unpredictable types of harm that it may inflict. In this context, we can hardly expect human stakeholders to be able to take measures to prevent harm caused by AI. Similarly, when AI acts in an unexpected way after having learned from its own experiences and reinforced itself, it will be difficult to establish a breach of contract on the part of the manufacturers or programmers if they can demonstrate that the AI was properly developed and tested before release, that their employees and auxiliaries were well trained and supervised, and that they implemented proper quality control mechanisms. In fact, AI's lack of foreseeability poses similar problems under product liability principles. In many jurisdictions, the law specifically states that manufacturers are only liable for defects or inadequate instructions when there was a foreseeable risk of harm posed by the product. Once again, because AI-related risks are unforeseeable by nature, they simply cannot be covered by the product/design defect or the duty of warning and instruction doctrine.
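The contrast between supervised and unsupervised learning, on which this foreseeability argument rests, can be illustrated with a minimal sketch. The dataset and features are invented; the only point is that the supervised model is anchored to labels the designer chose, while the unsupervised one finds structure the designer never specified:

```python
# Supervised vs unsupervised learning, as relevant to foreseeability.
# Tiny invented dataset: [transaction_amount, customer_tenure_years].

from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X = [[100, 1], [120, 2], [5000, 1], [90, 10], [4800, 2], [110, 8]]

# Supervised: the designer supplies the 'right answers' (labels) and can
# therefore anticipate, at least roughly, how the model will treat new data.
labels = [0, 0, 1, 0, 1, 0]  # 1 = flagged as risky by the human designer
clf = DecisionTreeClassifier().fit(X, labels)
print(clf.predict([[95, 3]]))  # behaviour anchored to human-chosen labels

# Unsupervised: no labels at all; the algorithm groups the data 'as it
# pleases', and the designer cannot fully foresee the resulting clusters.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)              # structure found by the machine, not the designer
```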

VII. The Quest for Explainability

To deal with the problems mentioned above, legal scholars are also discussing the role to be played by the principle of explainability of automated decision-making. It may be relevant in both an ex ante and, interestingly, an ex post perspective. The concept sounds extremely vague. 'Explainability' also constitutes one of the pillars of decision-making processes that are based solely on automated data processing, ie the cases in which decisions taken without human intervention have legal or, in any case, significant effects on the relevant data subjects, as is the case with AI systems (for instance, an algorithm decides whether or not a loan is to be granted after performing a number of sophisticated processing operations on the borrower's personal data, matching them with other data): to the extent that the law permits such a decision to be made, the controller shall provide to the relevant data subject 'significant information on the logics used' and explain 'the importance and the expected consequences of such processing for the relevant data subject' (Articles 13(2)(f) and 14(2)(g) of the GDPR).47
Some authors have argued that, in order to make AI more explainable and to remedy this important shortcoming, its designers should be legally required

47 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), [2016] OJ L119/1–88 at eur-lex.europa.eu/eli/reg/2016/679/oj.

to disclose their algorithms' code and to implement a way to record all aspects of their functioning, which would allow one to reconstruct and understand the causes of their behaviour and facilitate liability assessments. However, this suggestion is not always feasible with modern AI and also raises important issues with regard to trade secrets and competition law.
More importantly, it may be helpful to rely on the work of authors who, while discussing remedies under the GDPR, have examined the possibility of a right to explanation of specific automated decisions. Precisely, they note that:
However, there are several reasons to doubt both the legal existence and the feasibility of such a right (the right to an explanation). In contrast to the right to explanation of specific automated decisions claimed elsewhere, the GDPR only mandates that data subjects receive meaningful, but properly limited, information (Articles 13–15) about the logic involved, as well as the significance and the envisaged consequences of automated decision-making systems, what we term a 'right to be informed'.48

The ‘explanation’ of automated decision-making concerns the functioning of the whole system ie, the logic, significance, envisaged consequences and general functionality of an automated decision-making system, eg, the system’s requirements specification, decision trees, pre-defined models, criteria and classification structures. Additionally, it may regard specific decisions, ie, the rationale, reasons, and individual circumstances of a specific automated decision, eg, the weighting of features, machine-defined case-specific decision rules, information about reference or profile groups. Put in this way, it sounds promising with respect to the contracting machine’s breach of contract to clarify, ex-post, the decision-making process of the AI and possible errors.49 As anticipated, it also has implications in an ex-ante perspective in machine design: explainability is often understood as a characteristic of the AI system. As a consequence of the above, the chapter concludes that explainable AI systems should be preferred to Blackbox AI, primarily because the latter present serious and unmanageable legal risks with respect to data protection and liability, just to mention two examples.50 Anyway, the quest for explainability represents an attempt to preserve, in some form, the human control over AI systems. This could be done primarily by helping humans (judges, lawyers, parities) to understand ex post what has wrongly occurred in the mind of the machine. In other words, in claiming for such a right, we – humans – reaffirm our capability of understanding and, more important, controlling the contracting process. Unfortunately, the machine does not perfectly 48 See (n 37) above. 49 P Hacker, R Krestel, S Grundmann and F Naumann, ‘Explainable AI under contract and tort law: legal incentives and technical challenges’ (2020) 28 Artificial Intelligence and Law 415–439. 50 A similar critique has been advanced with respect to the right to an explanation under the GDPR, see L Edwards and M Veale, ‘Slave to the Algorithm? Why a ‘Right to an Explanation’ Is Probably Not the Remedy You Are Looking For’ (2017) 16 Duke Law & Technology Review at ssrn.com/ abstract=2972855.

think like a human, at least to date. However, it is honest to note that the very possibility of truly explaining AI decision-making processes, especially black box AI, remains a challenge for AI scientists to date – even more so, one can imagine, for lawyers. This chapter stresses that explainability is an illusion that helps to reassure humans and promises to confirm our vanishing dominion over contracting machines. In reality, the choice of entrusting contracts to AI – which is grounded in models that are unknown to us and that do not include the most elementary rules of human common sense – entails a loss of predictability and control over those processes. These costs remain quite neglected in mainstream discourses, which tend to praise the emergence of automation in every aspect of life.51
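To indicate what even a modest ex post 'explanation' might look like, consider the following sketch of the loan example used earlier in this section. The model, features and data are all invented, and the sketch deliberately uses an interpretable model: the black box systems discussed above are precisely those that do not expose their logic this directly:

```python
# Sketch of an interpretable loan-decision model with a rudimentary
# per-decision explanation (feature contributions). Data are invented;
# a genuine black box model would not expose its logic this way.

from sklearn.linear_model import LogisticRegression

features = ["income_k", "debt_k", "years_employed"]
X = [[50, 10, 5], [20, 15, 1], [80, 5, 10], [30, 20, 2], [60, 8, 7], [25, 18, 1]]
y = [1, 0, 1, 0, 1, 0]  # 1 = loan granted in the (invented) training data

model = LogisticRegression().fit(X, y)

def explain(applicant):
    """A crude per-decision readout: the weight of each feature in this
    decision, echoing the GDPR idea of 'meaningful information about the
    logic involved' rather than a full causal account of the system."""
    return {name: round(float(coef) * value, 2)
            for name, coef, value in zip(features, model.coef_[0], applicant)}

applicant = [35.0, 12.0, 3.0]
print("granted" if model.predict([applicant])[0] == 1 else "refused")
print(explain(applicant))  # per-feature contribution to the decision score
```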

VIII. Coding Law and Ethics into AI Algorithms

In light of the difficulties mentioned above, it is necessary to examine new solutions concerning remedies, dealing with both an ex ante and an ex post perspective on contractual activities. In this chapter we advance the idea that the ex post remedies of contract law run into difficulties when the party in breach of the agreement in question has used an AI system. These questions are not simply academic. They lie at the core of a stakeholder's rights and remedies – in fact, they go to the fundamental question of whether there are any rights or remedies at all. Thus, in this chapter we note that the presence of AI implies a necessary shift from an ex post enforcement of the contract to an ex ante regulation of the instrument. In the end, responsibility still lies with humans. By pointing this out, our effort aims at keeping responsibility where it will do the most good: to encourage the humans who design and deploy AI technology to anticipate the role of the technology in producing states of affairs.
Considering the current state, it is evident that remedies have been designed for the old world and, thus, cannot effectively be applied to tomorrow's world of contracting. Paradoxically, the diffusion of 'do it yourself' contracts, allowed by the use of artificial intelligence or platforms such as distributed ledgers and blockchain, could have the side effect of increasing the 'administrative' interference of public authorities in contractual activity: it will not be enough to regulate the contractual relationship, but it will be necessary to regulate the very digital tools of bargaining, and to regulate the digital environment in which bargaining takes place, to make it transparent and controllable. It is not an exercise in blind optimism to predict that all of this is destined to happen.
Precisely with regard to the described setting of the problem, it should be noted that the application of intelligent machines or artificial intelligence to contractual



51 Pagallo (n 15).

activity can allow the objective verifiability of the processes of which it is composed (or, better, into which the contractual affair is broken down, or can be broken down, as it unfolds: negotiations, pre-contractual information, representation of alternatives, etc). This seems an important point, precisely and above all in view of the solution of the most delicate problem: that of the legal treatment to be reserved for the pathologies of the contract deriving from malfunctions of the intelligent machine. Now, it is evident that a system of remedies will have to be designed that operates at an earlier time than that at which traditional remedies operate. This chapter confirms the shortcomings of creating legal fictions in the face of the disruptive potential of AI systems in contract law, and specifically in contractual remedies. In the end, other solutions are needed to bring humans back to the centre of the scene. To be clear, it will not be enough to regulate the contractual relationship; it will be inevitable to regulate ex ante the AI systems that are used in the contracting process and the entire ecosystem in which it occurs. In practice, this means preventing contractual pathologies by coding legal constructs into AI algorithms. The question is whether such a new and promising approach is really feasible.
In fact, the EU seems to be willing to take the envisaged path by promoting a regulatory regime, through ex ante regulatory procedures and industry self-regulation – more precisely, standardisation processes – before AI technologies are deployed.52 That said, one may question what the implications of the draft Regulation, if approved, will be in terms of contractual liability in cases where AI systems are involved in contracting; the preliminary answer goes in the direction of verifying the standards at issue and their fulfilment by the contracting parties. There is no need to stress that, in the field of AI and emerging technologies, the processes of technical standardisation often rely on professionals and start-up companies (rarely on established MNEs), and thus these innovators – according to the proposed new EU rules – will have to bear the burden of the costs of innovation and the risks of liability for standards development with respect to technologies that are not yet mature and thus still pose challenges to scientists and AI experts as well. It is also true that the effects of standardisation on liability are still unclear under general contract law, while scholars struggle to clarify this issue.53
52 Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (COM/2021/206 final). See also EU Commission, White Paper on Artificial Intelligence: a European approach to excellence and trust, 19 February 2020 at ec.europa.eu/info/files/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en.
53 P Verbruggen, 'Tort Liability for Standards Development in the United States and the European Union', in J Contreras (ed), Cambridge Handbook of Technical Standardization Law (Cambridge University Press, 2019) Vol II, and Tilburg Private Law Working Paper Series No 12/2018, at ssrn.com/abstract=3254864.

Now, AI systems are at the forefront of technological innovation, and this is thus a new area characterised by high complexity and the need for specific expertise. It is difficult to say whether EU guidance on this issue will definitely help private actors in reducing the burden of standardising AI and innovation.
Additionally, it seems that the GDPR model appears in the provisions of the draft AI Act. Similarities come in two forms, a structural and a case-specific one. From a structural point of view, the GDPR effect is best viewable in the suggested AI supervision and enforcement model: in essence, the draft AI Act replicates the GDPR model of establishing supervisory authorities at national level (Articles 3(42) and 59 of the draft AI Act) that are to be coordinated at EU level by a Board (in this case, the European AI Board, see Article 56 of the draft AI Act). In addition, as with the GDPR, the principle of accountability underlies the draft AI Act (see, for example, Article 23 or Article 26(5)).54 The chapter notes that, from a case-specific perspective, provisions in the draft AI Act that are visibly affected by the GDPR (not including, of course, those provisions that directly refer to it, as is for example the case in Article 10(5))55 include, indicatively, the household exemption of Article 3(4), the certification mechanisms (declarations of conformity and codes of conduct in Articles 48 and 69), the AI registration system (see Articles 51 and 60), and the mandatory appointment of representatives in the EU for any non-EU AI actors (see Article 25 of the AI Act).56
While there is certainly nothing wrong with the EU Commission trying to export a successful regulatory model such as that of the GDPR into other fields of law, difficulties are bound to emerge when attempting to do so while regulating AI, because the two cases are fundamentally different. Personal data protection aims to protect a fundamental human right and regulates only a single, easily identifiable human activity, that of personal data processing.57 AI regulation is much wider in scope. It presumably aims both to protect individuals and to help the development of AI. It does not refer to any specific activity or field but is rather all-encompassing, aiming at any and all AI instances in human life. Any attempt to catalogue AI in its entirety so as to bring it under the supervision of a single state authority reveals a highly structuralist, bureaucratic approach that the authors believe will prove ineffective, if for nothing else, then only due to the sheer volume of the work anticipated for such an overambitious task.
The EU proposal ends up communicating to the reader a sense of distrust of new AI technologies and a desire to insert government control in an effort to stem imagined and yet-to-be imagined dangers. Concern that the Commission's approach is overly prescriptive and too generalised, and that AI has too many

54 See (n 37) above.
55 ibid.
56 See (n 43) above.
57 P Pałka, 'Data Management Law for the 2020s: The Lost Origins and the New Needs' (2020) 68 Buffalo Law Review 559.

applications and forms for a one-size-fits-all regulation, is widespread to date. The EU, either by design or by sheer luck, happened to be the one actor indisputably authorised to regulate digital life, perhaps under a regulatory-instrumentalist mindset.58 Paradoxically, we stress that the use of AI systems or AI platforms will have the side effect of increasing the administrative interference of public authorities in contractual activity: it will not be enough to regulate the contractual relationship, but it will be necessary to regulate the very digital tools of bargaining, and to regulate the digital environment in which bargaining takes place, to make it transparent and controllable.

IX. Conclusion

The contract is an institution designed by and for humans, even when it is concluded by legal persons. The contract is human through the wills that create it and shape its content. Contract law is also known to be very psychological, and it also includes some moral considerations (eg good faith, fairness). In particular, contract law runs into difficulties when the bad actor in question is neither a person nor a corporation, but an AI.
For now, artificial intelligence prevails in daunting, low-added-value tasks. But as it progresses, AI expertise could remove humans from the decision-making process and reduce them to the role of simple signatories of what, by hypothesis, was mathematically decided to maximise the utility they will get from the contract. The point is that hyper-automation is useful and efficient for mass contracts with simple services, but it still seems too difficult for AI to replace humans when the performance is more complex or when human flexibility regains its place in contract law.
In particular, our analysis has highlighted the humanity of contract law (biases, psychology, morality) and the difficulties of adapting the traditional contractual remedies to this new reality, because of vanishing consent, extreme automation, and the growing complexity and obscurity of contracts concluded by an AI that will become more autonomous in the near future. According to legal scholars, the main way to avoid these problems may be to move responsibility up the chain of command from an AI to its human or corporate masters – either the designers of the system or the manufacturers of the AI-embedded system (product liability), or the owners or users who deploy it (agency). Nevertheless, such solutions are not feasible in all cases: the developers – and certainly the users – of those algorithms would not necessarily be able to deterministically control the outputs of their AI. AI may represent a black box even

58 R Brownsword, Law, Technology and Society: Reimagining the Regulatory Environment (Routledge, 2019).

for experts.59 To put it differently, the presence of AI systems challenges remedies in contract law from different angles, and the law shows limited resilience in our case. Thus, we also considered new solutions to remedy contractual pathologies. This seems to be the suggestion of the EU Commission in its recent proposals. In this respect, the chapter also critically addresses the human illusion of 'getting back control' over contracts concluded by an intelligent machine, and particularly the case of explainability from the perspective of contract law. The risk is that explainability represents a narrative – to reassure humans of their control – more than a feasible strategy. Unfortunately, in consideration of the vanishing human factor in contracting by using AI, this will not even always be possible, especially with more autonomous AI and black box AI.
In this chapter, we also argue that it makes sense to focus on the ex ante regulation of AI contractual environments. In light of the above considerations, therefore, it seems evident that there is a need to design a system of ex ante tools that operate at an earlier time than that at which the traditional remedies operate, providing for the disciplining and homologation of the digital tools of bargaining themselves and regulating the digital environment where bargaining takes place, in order to make it transparent and controllable from an ex ante perspective.
To conclude, we need to have the illusion of controlling AI and, more generally, emerging technologies. Nevertheless, the truth is that AI systems undermine or substitute human will, human fallibility and morality in a domain (contract law) where humanity was the guiding principle until very recently. Also, the blockchain revolution, if any, was more manageable because it had to do with automation only, not a sort of autonomy of the machine from its human masters. We believe that hyper-automation may be efficient for mass contracts with simple services, but it still seems too difficult for AI to replace humans when the performance is more complex or when human psychology regains its place in contract law. The prestige of technology characterises the times we are living in, especially during the pandemic. However, the question is whether AI will one day understand what is right and good. In our times, AI systems may be good servants, but they are bad masters.

59 In fact, start-up companies may develop algorithms to clear the box of AI, ie to support humans (corporations and their clients) in understanding how an AI made its predictions. The goal is to develop trustworthy and human-centric AI. The question is whether a court will be confident in their results when adjudicating a case. See, for instance, the goals of ClearBox AI at clearbox.ai/.

12
Artificial Intelligence and Platform Services: EU Consumer (Contract) Law and New Regulatory Developments

MONIKA NAMYSŁOWSKA AND AGNIESZKA JABŁONOWSKA*

I. Setting the Scene

At the forefront of the striking development of artificial intelligence (AI) systems which we observe today, we find many companies known for offering digital services to consumers, primarily online platforms. The resources they have at their disposal, especially large datasets and unprecedented processing power, are key to the success of the now-dominant paradigm in AI, associated with machine learning. In consumer markets themselves, artificial intelligence is both omnipresent and in part elusive.1 AI powers a growing variety of goods and services, promising to simplify the daily lives of consumers. At the same time, AI is increasingly being used on the business side, eg to inform platform design, serve targeted advertisements and determine conditions of access to different goods and services. Arguably, what merits particular attention from the perspective of consumer law and policy is not just specific AI applications; nor, for that matter, the emergence of AI in consumer markets as such. Rather, the use of AI by powerful platform companies must be seen as part of a broader transformation of markets and society, which comes with a distinct set of benefits and risks.
In this time of sea change, the European Union (EU) increasingly presents itself as a political actor that is prepared to react to the dark side of digital

* Prof. Monika Namysłowska is a professor at the Department of European Economic Law, Faculty of Law and Administration, University of Lodz. Dr. Agnieszka Jabłonowska is an assistant professor at the Institute of Law Studies, Polish Academy of Sciences. The research leading to this chapter was financed by the National Science Centre in Poland (project no 2018/31/B/HS5/01169).
1 For an overview, see: A Jabłonowska, M Kuziemski, AM Nowak, H-W Micklitz, P Palka and G Sartor, 'Consumer law and artificial intelligence: challenges to the EU consumer law and policy stemming from the business' use of artificial intelligence. Final report of the ARTSY project', Working Paper, EUI LAW, 2018/11.

Such readiness seems to be due to a combination of its general positioning as a community of values and the fact that the most powerful platform companies are not based in the EU.2 Naturally, as an organisation geared to removing barriers to trade, the EU has also in many respects enabled the developments which we observe today.3 Over past decades, however, EU integration has evolved in more complex ways, combining both enabling and protective movements. EU consumer law is an illustrative manifestation of that development.

The present chapter seeks to contribute to the on-going discussions about the possible role of the EU in dealing with the socio-economic transformation associated with online platforms and AI from the perspective of consumer (contract) law. It does so by first connecting the insights about the transformative effects of platforms and AI for markets and society with the broader debate about the nature of EU law and the associated role of consumer protection. We build upon prior research into European private law, emphasising its regulatory nature, instrumentalist rationality and an uneasy relationship with the traditional divide between public and private law, established in continental legal orders.4 We stress that, despite being fragmented, EU law tends to display a set of common themes, which notably include establishing a high level of consumer protection and – increasingly – of fundamental rights.5 Harmonised norms of contract law often form part of more comprehensive frameworks of a sectoral nature, which combine different modes of governance.6 Thus, when examining the role of the EU in the age of platforms and AI from a consumer protection perspective, different strands of the existing and emerging acquis must be considered together.

2 See, eg, European Commission, 'White Paper on Artificial Intelligence – A European approach to excellence and trust', COM(2020) 65 final, 1.
3 On the role of liability exemptions, see, eg, G Sartor, 'New aspects and challenges in consumer protection: Digital services and artificial intelligence' (2020) 29–30 www.europarl.europa.eu/RegData/etudes/STUD/2020/648790/IPOL_STU(2020)648790_EN.pdf (last accessed 21 April 2021).
4 H-W Micklitz, 'The Visible Hand of European Regulatory Private Law – The Transformation of European Private Law from Autonomy to Functionalism in Competition and Regulation' (2009) 28 Yearbook of European Law 3; R Michaels, 'Of Islands and the Ocean: The Two Rationalities of European Private Law', in R Brownsword, H-W Micklitz, L Niglia and S Weatherill (eds), The Foundations of European Private Law (Hart Publishing, 2011) 139. For a critique of this classical understanding of law in the continental European tradition, see M Reimann, 'The American Advantage in Global Lawyering' (2014) 78 Rabels Zeitschrift für ausländisches und internationales Privatrecht 1. For a response, highlighting the distinct features of the European legal order, see H-W Micklitz, 'A European Advantage in Legal Scholarship?', in R van Gestel, H-W Micklitz and EL Rubin (eds), Rethinking Legal Scholarship: A Transatlantic Dialogue (Cambridge University Press, 2017) 262.
5 MW Hesselink, 'Contract theory and EU contract law', in C Twigg-Flesner (ed), Research Handbook on EU Consumer and Contract Law (Edward Elgar Publishing, 2016) 519–520, 523–525; European Commission, Proposal for a Regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC, COM(2020) 825 final, recital 3; European Commission, Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021) 206 final, recital 1.
6 Hesselink (n 5) 519–520.

We argue that, in view of the opacity and complexity of the algorithmic processes deployed by online platforms, their potential for modifying consumer behaviour, as well as the standardised nature of associated contractual practices, the objective of ensuring a high level of consumer protection provides justification for applying more protective, need-oriented norms to consumer relationships with platforms. We further investigate how selected concepts of EU consumer (contract) law could be read in the interest of enhanced consumer protection, including through increased legal certainty, arguing in favour of a coherent, yet sufficiently protective reading of key notions, such as the model of the consumer and the standard of reasonableness. Finally, we connect the analysis of the existing EU acquis with more recent developments in the field of AI and platform regulation and draw insights for consumer law and policy.

II.  Online Platforms as a Driving Force for AI in Consumer Markets

A.  A Broader Picture: Online Platforms and Consumer Relations

Online platforms are not an isolated phenomenon, but form part of the broader move from the industrial to the information economy.7 In this new era, manufacturing is yielding its position as a key contributor to economic growth, and the production, accumulation, and processing of information are instead gaining prominence.8 As observed by Julie E. Cohen, the nature of this shift is largely determined by the ongoing processes of propertisation, datafication, and platformisation.9 Consumer markets are deeply affected by each of these three processes. As consumer information problems in the digital age have shifted from scarcity to overload, online platforms have gradually established themselves as the necessary middlemen of the online consumer experience.10 They were able to do so by acquiring unparalleled expertise in data science and AI and bringing it to bear. However, it is debatable to what extent the emergence of platforms as new intermediaries actually serves or empowers consumers. Scholars have argued that consumers in the information economy are in fact treated in an instrumental manner, as their data are constantly appropriated and transformed into economic value.11 This largely has to do with the prevalent model of monetising platform services, which – as is well known – is mostly based on advertising revenues.

7 JE Cohen, Between Truth and Power: The Legal Constructions of Informational Capitalism (Oxford University Press, 2019) 6.
8 ibid.
9 ibid 15, 40.
10 ibid 41, 75; D Bawden and L Robinson, 'The Dark Side of Information: Overload, Anxiety and Other Paradoxes and Pathologies' (2009) 35 Journal of Information Science 180.
11 Cohen (n 7) 49; S Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (Profile Books, 2019) 351; K Yeung, '"Hypernudge": Big Data as a Mode of Regulation by Design' (2016) 20 Information, Communication & Society 118.

Against this background, the chapter takes a closer look at the design of platform services and their related advertising features from the perspective of the EU consumer acquis, including consumer contract law. The focus on consumer relations with platforms can be explained by the latter's pivotal role both in the navigation of the online consumer experience and in the digital advertising ecosystem. Websites operated by Google and Facebook are the most visited in the world, and their providers are at the forefront of AI innovation. Both traders have also gradually established themselves as key players in the main online advertising segments, namely search (Google), display (Facebook) and open display (Google).12 As the power of platforms grows, so do concerns about their impact on consumer behaviour.13

Due to the monetisation strategies signalled above, platforms' key consumer-oriented functions, eg search and social networking, are only one of many factors driving business decisions. This holds true both for the particulars of advertising delivery and for the design of platforms as a whole. As we discuss in the following section, with the rise of AI, the mechanisms of serving advertisements have become more and more advanced. Global digital service providers can ensure that commercial communications reach narrowly defined audiences.14 Predictive models can further be deployed to optimise the delivery of adverts towards conversion goals via recurrent feedback loops, as the sketch below illustrates.15 The increasing sophistication of content targeting and delivery raises a number of concerns, including the possibility of consumer vulnerabilities being exploited16 or adverts being served in a discriminatory manner.17 As is increasingly well understood in the scholarship, the personalisation of online advertisements is not just an isolated phenomenon, but rather a manifestation of a broader business logic. Interactions through online platforms are said to take place within carefully designed digital architectures, which emanate again from the monetisation model.18
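To make the logic of such feedback loops concrete, the following toy sketch shows how impressions might be re-allocated towards whichever advert variant converts best, in the spirit of a simple epsilon-greedy bandit. It is a minimal illustration written for this chapter: the variant names, rates and parameters are invented and do not describe any platform's actual system.

    import random

    # Toy epsilon-greedy loop: impressions drift towards the variant with
    # the best observed conversion rate, while a small share of traffic
    # keeps exploring. Purely illustrative; real systems are far richer.
    variants = {"ad_a": [0, 0], "ad_b": [0, 0]}  # [conversions, impressions]
    EPSILON = 0.1

    def choose_variant():
        if random.random() < EPSILON:
            return random.choice(list(variants))
        return max(variants, key=lambda v: variants[v][0] / max(variants[v][1], 1))

    def record_feedback(variant, converted):
        variants[variant][1] += 1
        if converted:
            variants[variant][0] += 1

    # simulated feedback loop: serve an advert, observe, update, repeat
    for _ in range(10_000):
        v = choose_variant()
        record_feedback(v, random.random() < (0.03 if v == "ad_a" else 0.05))

After enough iterations, most impressions flow to 'ad_b', the variant with the higher (simulated) conversion rate – the recurrent cycle of serving, observing and updating that the text describes.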

12 With a focus on the UK, see Competition and Markets Authority, 'Online platforms and digital advertising: Market study final report', 2020, 211 assets.publishing.service.gov.uk/media/5fa557668fa8f5788db46efc/Final_report_Digital_ALT_TEXT.pdf (last accessed 20 April 2021).
13 Yeung (n 11) 119.
14 See, eg, S Grafanaki, 'Autonomy Challenges in the Age of Big Data' (2017) 27 Fordham Intellectual Property, Media & Entertainment Law Journal 803, 859; S Winter, E Maslowska and AL Vos, 'The Effects of Trait-based Personalization in Social Media Advertising' (2021) 114 Computers in Human Behavior 106525.
15 Cf. Yeung (n 11) 121–123.
16 M Ebers, 'Beeinflussung und Manipulation von Kunden durch Behavioral Microtargeting: Verhaltenssteuerung durch Algorithmen aus der Sicht des Zivilrechts' (2018) 7 Multimedia und Recht 423.
17 M Ali, P Sapiezynski, M Bogen, A Korolova, A Mislove and A Rieke, 'Discrimination through Optimization: How Facebook's Ad Delivery Can Lead to Skewed Outcomes' (2019) Proceedings of the ACM on Human-Computer Interaction 199.
18 A Zakon, 'Optimized for Addiction: Extending Product Liability Concepts to Defectively Designed Social Media Algorithms and Overcoming the Communications Decency Act' [2020] Wisconsin Law Review 1107.

Accordingly, consumers find themselves in immersive and playful environments, in which they constantly generate new data and receive social validation.19 While this gamified experience may seem to match consumer preferences, it is notable that individuals who wish to limit their use of platform services often find this decision exceedingly hard to implement. This difficulty becomes more understandable when particular design choices deployed in online platforms are viewed through the lens of neuroscience and social psychology. Thus, the display of notifications may exploit the features of the brain's reward systems, triggering dopamine activity comparable to that observed in a gambling context.20 Similarly, the infinite scrolling features quickly adopted by many online platforms, as well as autoplay features, whereby a related video is switched on automatically after the previous one has finished playing, have been identified as habit-forming and addictive.21 Aspects of platform design can further capitalise on recognised cognitive biases to steer consumer behaviour in line with operators' goals. Examples include the bandwagon effect, whereby individuals may value something more because others seem to do so, the related fear of missing out, or priming, whereby exposure to one stimulus influences the response to the subsequent one.22 As Facebook's emotional contagion study showed, real-life emotions can be affected by tweaks in technological design, without consumers' awareness of the external influence to which they have been subject.23

B.  The Role of Artificial Intelligence

While AI systems are not the only part of the behavioural apparatus at play in online platforms, they are central to its overall design and performance. Advanced algorithms can be deployed to improve the understanding of particular user characteristics and, on this basis, to predict future behaviours. To illustrate, both of these dimensions are highly relevant to Facebook's advertising delivery system, which combines elements of audience targeting and automated auction.24 At the first stage, advertisers choose their target audience from a menu of options created by the operator based on volunteered, observed and inferred consumer data, which they can further complement with their own datasets.25 A stylised sketch of how such inferences might be produced follows below.

19 On the role of gamification in the adoption of facial recognition, see A Ellerbrok, 'Playful Biometrics: Controversial Technology through the Lens of Play' (2011) 52 The Sociological Quarterly 528.
20 T Haynes, 'Dopamine, Smartphones & You: A battle for your time' sitn.hms.harvard.edu/flash/2018/dopamine-smartphones-battle-time (last accessed 20 April 2021).
21 H Andersson, 'Social media apps are "deliberately" addictive to users' www.bbc.com/news/technology-44640959 (last accessed 20 April 2021).
22 Cf. A Mathur, G Acar, MJ Friedman, E Lucherini, J Mayer, M Chetty and A Narayanan, 'Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites' (2019) Proceedings of the ACM on Human-Computer Interaction 81:6; Yeung (n 11) 122.
23 ADI Kramer, JE Guillory and JT Hancock, 'Experimental evidence of massive-scale emotional contagion through social networks' (2014) 111 Proceedings of the National Academy of Sciences of the United States of America 8788.
24 Facebook, 'Good Questions, Real Answers: How Does Facebook Use Machine Learning to Deliver Ads?' www.facebook.com/business/news/good-questions-real-answers-how-does-facebook-use-machine-learning-to-deliver-ads (last accessed 20 April 2021).
25 G Venkatadri, A Andreou, Y Liu, A Mislove, KP Gummadi, P Loiseau and O Goga, 'Privacy Risks with Facebook's PII-based Targeting: Auditing a Data Broker's Advertising Interface' 2018 IEEE Symposium on Security and Privacy 89.
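Inferred data, unlike volunteered data, are generated rather than collected: a model trained on users who did disclose an attribute is applied to users who did not. The following sketch is purely illustrative – the behavioural features, labels and figures are invented for this chapter and do not describe any platform's actual pipeline – but it captures the basic mechanics of attribute inference.

    from sklearn.linear_model import LogisticRegression

    # Toy attribute inference: fit a classifier on users whose interest in
    # a topic is known (volunteered), then infer it for users who never
    # disclosed it. Each row of X holds invented behavioural signals:
    # [pages_liked_in_topic, sessions_per_day, clicks_on_topic_ads]
    X = [[12, 8, 5], [1, 2, 0], [9, 6, 3], [0, 1, 0], [15, 9, 7], [2, 3, 1]]
    y = [1, 0, 1, 0, 1, 0]  # 1 = disclosed interest, 0 = disclosed no interest

    model = LogisticRegression().fit(X, y)

    # an inferred, never-volunteered attribute for a new user
    print(model.predict_proba([[10, 7, 4]])[0][1])  # probability of interest

The point of the sketch is that the output is a new piece of information about the consumer – precisely the kind of inference that, as discussed in the text, is difficult for the data subject to perceive or contest.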

While the collection of volunteered data (ie information which the user actively shares with the trader) is fairly straightforward for both parties, obtaining observed and inferred data requires additional collecting and processing operations, in which platforms have gained extensive expertise.26 Thus, for example, AI systems can be deployed to improve data observation (eg through automated text analysis) and to infer information about user attributes, interests or opinions which are not immediately apparent from the data provided and observed.27 While the precise nature and scale of these operations are difficult to assess, anecdotal evidence – concerning the ability of traders to infer the pregnancy of female customers from their purchasing history, or the emotional states of teenagers from their activities on social media – illustrates the potential of AI-powered inferences.28

As was signalled above, the same predictive logic which allows traders to fill gaps in their knowledge about consumers can also be used to predict their behaviours.29 Turning again to Facebook's example, it is no surprise that machine learning is equally important to the second part of the platform's advertising delivery system, namely the automated auction. What is being computed at this stage is the probability that a given person will take an action desired by the advertiser, such as clicking a link or purchasing a product; a simplified sketch of this ranking logic is set out below. The analysis likely includes a broad variety of data stemming both from the users and from the advertisers.30 Finally, similar algorithmic processes can inform the display of other types of content to consumers (news items, videos, etc), so as to maintain their high engagement.31 It is thus no exaggeration to say that platforms not only determine the ambit of the online consumer experience, but are also capable of being highly persuasive and even include aspects of direct behavioural conditioning.32
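The auction logic just described can likewise be reduced to a deliberately simplified sketch. The ranking rule below – the advertiser's bid weighted by the predicted probability of the desired action – is a common textbook formulation of expected-value ranking; the names and numbers are invented and do not reproduce Facebook's actual system.

    # Toy ad auction: rank candidate adverts by expected value, ie the bid
    # multiplied by the predicted probability that this user acts on the ad.
    ads = [
        {"advertiser": "A", "bid": 2.00, "p_action": 0.010},
        {"advertiser": "B", "bid": 0.50, "p_action": 0.080},
        {"advertiser": "C", "bid": 1.20, "p_action": 0.030},
    ]

    winner = max(ads, key=lambda ad: ad["bid"] * ad["p_action"])
    print(winner["advertiser"])  # "B": a low bid wins on predicted relevance

Note that the behavioural prediction, not the price alone, decides the outcome: this is why the quality of user-level inference is so central to the business model.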

26 See also Expert Group for the Observatory on the Online Platform Economy, Work stream on Data, 'Final report' platformobservatory.eu/app/uploads/2020/07/04Dataintheonlineplatformeconmy.pdf 11–12.
27 M Kosinski, D Stillwell and T Graepel, 'Private Traits and Attributes are Predictable from Digital Records of Human Behavior' (2013) 110 Proceedings of the National Academy of Sciences of the United States of America 5802; I Lee, 'Social Media Analytics for Enterprises: Typology, Methods, and Processes' (2018) 61 Business Horizons 199, 203–205.
28 For further examples along with an in-depth analysis from the perspective of data protection law, see S Wachter and B Mittelstadt, 'A Right To Reasonable Inferences: Re-Thinking Data Protection Law in The Age of Big Data and AI' [2019] Columbia Business Law Review 494, 506–509.
29 S Biddle, 'Facebook Uses Artificial Intelligence to Predict Your Future Actions for Advertisers, Says Confidential Document' theintercept.com/2018/04/13/facebook-advertising-data-artificial-intelligence-ai (last accessed 21 April 2021).
30 See, eg, Ali et al (n 17) 199:3–199:8.
31 M Zanker, L Rook and D Jannach, 'Measuring the Impact of Online Personalisation: Past, Present and Future' (2019) 131 International Journal of Human-Computer Studies 160, 161.
32 ibid; Cohen (n 7) 82–83; S Zuboff, 'Big other: Surveillance Capitalism and the Prospects of an Information Civilization' (2015) 30 Journal of Information Technology 75, 84–85; S Hong and SH Kim, 'Political Polarization on Twitter: Implications for the Use of Social Media in Digital Governments' (2016) 33 Government Information Quarterly 777; S Vaidhyanathan, Antisocial Media: How Facebook Disconnects Us and Undermines Democracy (Oxford University Press, 2018); S Biddle, PV Ribeiro and T Dias, 'Invisible Censorship: TikTok Told Moderators to Suppress Posts by "Ugly" People and the Poor to Attract New Users' theintercept.com/2020/03/16/tiktok-app-moderators-users-discrimination (last accessed 21 April 2021).
33 Cohen (n 7) 92.

Besides creating direct threats to consumer autonomy, psychological well-being and economic interests, platforms' optimisation for 'commercial surveillance' can lead to a multitude of other risks related to security, discrimination, political manipulation and the polarisation of the public debate.33

III.  Instrumental Rationality of the EU acquis

Before addressing the role of EU law in dealing with the socio-economic transformation associated with platforms and AI, it is helpful to consider important particularities of EU integration. In particular, regulatory fragmentation and instrumental rationality34 represent no irregularity in the EU legal order, but an inherent design feature stemming from primary law. The competences of the EU are limited by the principles of conferral, subsidiarity and proportionality.35 Article 114 of the Treaty on the Functioning of the European Union (TFEU), from which an overwhelming part of the harmonised acquis derives its legal basis, authorises the EU institutions to adopt measures which have as their object the establishment and functioning of the internal market.36 The provision further specifies that the Commission, in its proposals concerning consumer protection, should take as a base a high level of protection, while the European Parliament and the Council should seek to achieve this objective within their respective powers. Common rules adopted on the basis of Article 114 TFEU are thus primarily geared towards improving the functioning of the internal market while keeping a high level of consumer protection in view. What precisely constitutes a proper functioning of the internal market has been a subject of debate.37 The removal of obstacles to fundamental freedoms and distortions of competition is certainly part of this framework, but does not seem to exhaust the scope of possibilities.38 Increasingly, common standards are also motivated by the need to provide all types of market actors with the necessary confidence to engage in the internal market.39

34 Micklitz (n 4); Michaels (n 4); Hesselink (n 5); S Weatherill, The Internal Market as a Legal Concept (Oxford University Press, 2017) 154.
35 Art 5 of the Treaty on European Union [2012] OJ C326/13.
36 Consolidated version of the Treaty on the Functioning of the European Union [2012] OJ C326/47.
37 See, eg, MW Hesselink, 'Unjust Conduct in the Internal Market: On the Role of European Private Law in the Division of Moral Responsibility between the EU, Its Member States and Their Citizens' (2016) 35 Yearbook of European Law 410, 450–451.
38 See, eg, judgment of the Court of 8 June 2010 in case C-58/08 Vodafone and Others, ECLI:EU:C:2010:321, para 32 and the open-ended language used ('in particular').
39 For the origins, see P Sutherland, 'The Internal Market after 1992: Meeting the Challenge. Report Presented to the Commission by the High Level Group on the Functioning of the Internal Market' (1992) SEC(92) 2044 final, 4. With respect to digital policy, see, eg, European Commission (n 2) 9, mentioning the 'ecosystem of trust'.

Setting aside the controversies created by this reading,40 it suffices to note that it may, in practice, expand the scope of legitimate EU action of relevance to consumers. As far as the level of consumer protection is concerned, the indication of a 'high level' in Article 114(3) remains of key importance and can be further particularised with the help of Article 169(1) TFEU. As the Court of Justice explicitly found in its case law, provided that the conditions for recourse to Article 114 TFEU are fulfilled, the EU legislature cannot be prevented from relying on that legal basis on the ground that consumer protection is a decisive factor in the choices to be made.41 Against this background it is fair to say that EU law, including contract law, is instrumental to several objectives, of which the establishment of a high level of consumer protection forms part.42 While this does not imply that the highest possible level of protection needs to be ensured,43 the requirement should also not be reduced to merely mitigating the negative externalities of market integration. The importance of protecting the health, safety and economic interests of consumers at a high level, as referred to in Article 169(1) TFEU, has been additionally reinforced through the explicit recognition of consumer protection in the Charter of Fundamental Rights as a principle to be observed by EU institutions.44 Without doubt, the protection of consumers in the digital economy is also closely interlinked with other principles and rights, such as privacy and personal data, the freedom of expression and information and access to justice, which may add further weight to protection considerations.

When considering the level of consumer protection afforded by particular legal norms it is useful to refer to the framework of two competing ethics, developed by Chris Willett.45 According to this reading, the ethic of consumer self-reliance is located on one end of the protection spectrum while the ethic of consumer need is situated on the other.

40 The consumer confidence narrative was feared, on the one hand, to afford the EU legislature a far-reaching power to regulate the internal market and, on the other hand, to instrumentalise consumers for internal market goals. Cf. H Unberath and A Johnston, 'The Double-Headed Approach of the ECJ Concerning Consumer Protection' (2007) 44 Common Market Law Review 1237, 1254–1255; T Wilhelmsson, 'The Abuse of the "Confident Consumer" as a Justification for EC Consumer Law' (2004) 27 Journal of Consumer Policy 317; J Stuyck, 'The Transformation of Consumer Law in the EU in the Last 20 Years' [2013] Maastricht Journal of European and Comparative Law 385, 390.
41 Judgment of the Court of 8 June 2010 in case C-58/08 Vodafone and Others, para 36.
42 Hesselink (n 5) 520, 524.
43 See, eg, judgment of the Court of 13 May 1997 in case C-233/94, Germany v Parliament and Council, ECLI:EU:C:1997:231, para 48; judgment of the Court of 8 October 2020 in case C-641/19, PE Digital, ECLI:EU:C:2020:808, para 30 (mentioning the balance between a high level of consumer protection and the competitiveness of undertakings).
44 Charter of Fundamental Rights of the European Union [2012] OJ C326/391.
M Jagielska and M Jagielski, 'Are Consumer Rights Human Rights?', in J Devenney and M Kenny (eds), European Consumer Protection: Theory and Practice (Cambridge University Press, 2012) 350–351; I Benöhr and H-W Micklitz, 'Consumer Protection and Human Rights', in G Howells, I Ramsay and T Wilhelmsson (eds), Handbook of Research on International Consumer Law (Edward Elgar, 2018) 21–25; T Lock, 'Rights and Principles in the EU Charter of Fundamental Rights' (2019) 56 Common Market Law Review 1201, 1214.
45 C Willett, 'Re-Theorising Consumer Law' (2018) 77 The Cambridge Law Journal 179.

The former is expressed in rules allowing businesses to escape responsibility for poor quality outcomes and to legitimise the imposition of outcomes that are harsh on consumers, due to a focus on traders' processes.46 The latter, in turn, is expressed in rules that prioritise protecting consumers from poor quality and harsh outcomes due to their position of weakness.47 As observed by the author, many key consumer protection measures adopted at the EU level fall on the more protective side of the spectrum.48 This includes the norms on unfair commercial practices, the fairness control of standard terms and conformity with the contract, which we elaborate on in the following section. Among the less protective measures, information duties, which are also widely applied in the EU acquis, assume a prominent position. As we will see below, when consumers find themselves in a position of structural asymmetry vis-à-vis powerful providers equipped with AI, they cannot be adequately protected through improved disclosure alone.49

IV.  A High Level of Consumer Protection in the Age of Platforms and AI

A.  The Need for Need-oriented Norms

As was signalled before, the socio-economic implications of commercially driven analytics, first introduced and mastered by leading online platforms, reach far and deep. Technological companies assist consumers in navigating their online experience, while at the same time systematically affecting their behaviour in line with the providers' own economic goals. Such influence begins with the design of online architectures in a way that promotes greater use of the respective applications.50 Although there is in principle nothing wrong with traders' aspirations to make their products or services appeal to consumers, it is debatable to what extent consumer preferences are indeed the prime consideration in platform design choices. What matters most from a commercial perspective are, after all, the revenues obtained from other parties. Considering the unparalleled capacities of platforms to influence consumer behaviour, the harmful consequences of this incentive structure can be far-reaching. The complexity and opacity of the underlying algorithmic processes further exacerbate the risks to which consumers can be exposed in platform settings. Personalised choice architectures determined by online platforms can be continuously reconfigured on the basis of new data, making big data nudging 'nimble, unobtrusive and highly potent'.51

46 ibid 180, 185–187.
47 ibid 187–189.
48 ibid 209.
49 N Helberger, H-W Micklitz, M Sax and J Strycharz, 'Surveillance, Consent and the Vulnerable Consumer. Regaining Citizen Agency in the Information Economy' (2021) 51 www.beuc.eu/publications/beuc-x-2021-018_eu_consumer_protection.0_0.pdf (last accessed 21 April 2021).
50 Yeung (n 11) 121.

As the infamous emotional contagion study showed, consumers can be subject to data-driven experiments without even realising it. The personalisation of choice architectures results in the creation of 'experience cocoons', perceived only at the individual level and remaining unknown to others.52 Also in more general terms, understanding the mechanics behind seemingly innocuous digital services, including the potential of inferred knowledge, is challenging, if not impossible. The information asymmetry between consumers and platforms is striking and pertains both to the nature of the service and to consumers themselves.53 Since platform markets show a tendency towards concentration, partly due to the network effects at play, consumers cannot easily shift to alternative providers.54 Individual negotiation of the bargain is also out of the question, since platform services are generally offered on a take-it-or-leave-it basis.

The above factors provide a strong indication that the achievement of a high level of consumer protection in relation to AI-driven platform services requires a more protective, need-oriented outlook. Faced with a digital asymmetry of a structural nature,55 consumers exposed to the adverse impacts of platforms and AI will find it prohibitively hard to exercise self-reliance. Moreover, as also observed by Willett, rules oriented towards consumers' needs tend to be more effective in improving legal clarity/certainty compared to standards following the ethic of self-reliance, or at least are not less clear and certain than their counterparts.56 All of these features are highly relevant to European AI and platform regulation, both as a response to the problems of opacity and complexity and as an aspect of promoting the proper functioning of markets and establishing a high level of consumer protection. Existing norms of the EU consumer acquis, offering a promising reference point for the AI age, are discussed in the subsequent sections.

B.  Unfair Commercial Practices

One of the most significant EU acts of relevance to data-driven conduct in the digital economy is Directive 2005/29/EC on unfair commercial practices (UCPD).57

51 ibid 122.
52 B Bodó, N Helberger, K Irion, F Borgesius, J Möller, B van de Velde, N Bol, B van Es and CH de Vreese, 'Tackling the Algorithmic Control Crisis – the Technical, Legal, and Ethical Challenges of Research into Algorithmic Agents' (2017) 19 Yale Journal of Law and Technology 133, 139.
53 Zuboff (n 32) 83; G Wagner and H Eidenmüller, 'Down by Algorithms? Siphoning Rents, Exploiting Biases, and Shaping Preferences: Regulating the Dark Side of Personalized Transactions' (2019) 86 The University of Chicago Law Review 581, 582.
54 O Bracha and F Pasquale, 'Federal Search Commission – Access, Fairness, and Accountability in the Law of Search' (2008) 93 Cornell Law Review 1149, 1180–1183; E Wauters, E Lievens and P Valcke, 'Towards a better protection of social media users: a legal perspective on the terms of use of social networking sites' (2014) 22 International Journal of Law and Information Technology 254, 259–261.
55 Helberger et al (n 49) 51.
56 Willett (n 45) 179, 193–194.
57 Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC and 2002/65/EC of the European Parliament and of the Council and Regulation (EC) No 2006/2004 of the European Parliament and of the Council ('Unfair Commercial Practices Directive') [2005] OJ L149/22.

The Directive is increasingly recognised in the literature as an important building block of the EU regulatory landscape relating to AI.58 Due to the broad definition of business-to-consumer (B2C) commercial practices in Article 2(d) of the Directive, the act can be applied to a variety of AI-driven practices engaged in by traders before, during and after a commercial transaction related to a product. The UCPD does not apply to 'products' as such, yet the line between products and commercial practices – particularly as regards platform services – becomes more and more blurred.59 Finally, even though the Directive remains without prejudice to contract law and, in particular, to the rules on the validity, formation or effect of a contract, it certainly is not without relevance to consumer transactions.60

The protection of consumers against unfair commercial practices is achieved through three types of provisions: a general clause, specific provisions prohibiting misleading and aggressive commercial practices, and per se prohibitions set out in Annex I. The important role of the UCPD in the age of online platforms and AI stems from the introduction of a general prohibition of unfair B2C commercial practices in Article 5(1) and their broadly framed definition in the subsequent paragraph. In principle, unfair commercial practices are understood as practices which are contrary to the requirements of professional diligence and which materially distort or are likely to materially distort the economic behaviour of the average consumer with regard to the product. The prohibition of all unfair B2C commercial practices provides the legal basis for combating unfair business conduct also in the field of AI, which undoubtedly contributes to ensuring a high level of consumer protection.

However, it is ambiguous whether the general clause in Article 5(2) is a sufficiently strong instrument of consumer protection. Although its broad scope allows it to cover any type of unfair commercial practice that appears on the market, that breadth can also be its weakness. The open-ended nature of the ban may reduce its preventive force (market actors do not receive sufficient guidance as to which commercial practices are unfair), particularly if it is the consumer who needs to prove the infringement.61 For the same reason, the general clause may fail to provide consumers and consumer protection actors with the legal certainty needed to enforce applicable rules, even assuming that objectionable conduct can be perceived and documented.62

58 S Scheuerer, 'Artificial Intelligence and Unfair Competition – Unveiling an Underestimated Building Block of the AI Regulation Landscape' [2021] GRUR International 1.
59 Judgment of the Court of 4 July 2019, C-393/17, Kirschstein, ECLI:EU:C:2019:563, para 42. See also A Jabłonowska, 'Fine Lines Between National Rules on "Products" and "Commercial Practices": Judicial Déjà-Vu in Case C-393/17 Kirschstein' (2019) 8 Journal of European Consumer and Market Law 244.
60 Art 3(2) of the UCPD. See also T Wilhelmsson, 'Scope of the Directive', in G Howells, H-W Micklitz and T Wilhelmsson (eds), European Fair Trading Law: The Unfair Commercial Practices Directive (Ashgate, 2006) 72–73.
61 Art 12 of the UCPD addresses evidentiary aspects of the proceedings related to unfair commercial practices only fragmentarily. See also Helberger et al (n 49) 78, who argue in favour of reversing the burden of proof for data exploitation strategies.

This is not significantly altered by the specific provisions of Articles 6–9, which equally require the assessment of contested commercial practices in terms of their impact on the average consumer. As we elaborate in the subsequent sections, the approximation of the standard of professional diligence and the average consumer model in the case of AI practices mitigates these concerns to some extent. However, it still does not guarantee the predictability of consumer protection in a given factual setting, does not overcome the 'algorithmic control crisis'63 and may not exert a sufficient deterrent effect.

A solution to some of these shortcomings can be sought in Annex I to the UCPD, commonly referred to as the 'blacklist'.64 All practices contained in the list are considered to be unfair and are therefore prohibited without the need for case-by-case assessment. Despite the use of vague concepts in certain prohibitions, the blacklisted bans offer higher legal certainty, which, together with an outright prohibition of their application in each Member State, exerts a strong preventive effect.65 For these reasons, under currently applicable rules, a blacklist of unfair commercial practices seems to be the strongest instrument of consumer protection, capable of clearing the market of certain unacceptable practices and business models. However, virtually no practices contained in Annex I relate to data-driven influence on consumer behaviour, such as the manipulation of consumer emotions through platform design or advanced forms of targeting achieved with the help of AI. Following recent amendments to the UCPD, introduced through the Modernisation Directive,66 the list currently contains 35 per se prohibitions, including some explicitly concerned with digital technology. Of particular interest is the newly added point 11a, referring to the provision of search results in response to a consumer's online search query without clearly disclosing any paid advertisement or payment made specifically for achieving a higher ranking of products within the search results. Nonetheless, the lack of transparency of sponsored content is far from the only pitfall associated with platform design. Due to the principle of full harmonisation, to which the UCPD is subject, the list must be regarded as exhaustive and cannot be extended through Member State action or interpretative guidelines.67

62 A Garde, 'Can the UCP Directive Really Be a Vector of Legal Certainty', in W van Boom, A Garde and O Akseli (eds), The European Unfair Commercial Practices Directive: Impact, Enforcement Strategies and National Legal Systems (Ashgate, 2014) 111–116.
63 Bodó et al (n 52) 133.
64 See, eg, M Namysłowska, 'The Blacklist of Unfair Commercial Practices: The Black Sheep, Red Herring or White Elephant of the Unfair Commercial Practices Directive?', in W van Boom, A Garde and O Akseli (eds), The European Unfair Commercial Practices Directive: Impact, Enforcement Strategies and National Legal Systems (Ashgate, 2014).
65 Garde (n 62) 117, 119.
66 Directive (EU) 2019/2161 of the European Parliament and of the Council of 27 November 2019 amending Council Directive 93/13/EEC and Directives 98/6/EC, 2005/29/EC and 2011/83/EU of the European Parliament and of the Council as regards the better enforcement and modernisation of Union consumer protection rules [2019] OJ L328/7.


C.  Unfair Terms in Consumer Contracts

In the field of consumer contracts, Directive 93/13/EEC on unfair contract terms (UCTD)68 assumes a role comparable to that assigned to the UCPD in relation to unfair commercial practices. The act establishes a horizontal framework for the fairness assessment of non-individually negotiated terms in B2C contracts. As such, it remains of direct relevance to one of the main enablers of the observed digital landscape, namely the standardisation of contractual practice. While the standardisation of contracts may be efficiency-enhancing,69 its possible downsides from the perspective of consumers are being recognised.70 The use of standard terms allows the trader to better allocate the resources needed for drafting contracts with multiple counter-parties and to benefit from superior knowledge of the terms as their drafter and a repeat player. Consumers, in turn, not only lack that knowledge, but are also typically in an inferior bargaining position or even faced with a take-it-or-leave-it situation. Accordingly, the use of standard terms tends to reinforce the asymmetry of knowledge and power between the trader and the consumer and may justify the development of protective counter-measures.71

The harmonisation of the rules on unfair terms belongs to the earliest legislative initiatives in the field of consumer protection taken at EU level (back then the European Economic Community). The Act's relatively strong need orientation is notable, especially considering that the consumer protection dimension of EU primary law was not yet well established at the time of the Directive's adoption.72

67 Judgment of the Court of 14 January 2010, C-304/08, Plus Warenhandelsgesellschaft, ECLI:EU:C:2010:12, operative part. See also M Namysłowska and A Jabłonowska, 'Member State Interests and the EU Law on Unfair B2C and B2B Practices', in M Varju (ed), Between Compliance and Particularism: Member State Interests and European Union Law (Springer, 2019) 119–201.
68 Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts [1993] OJ L95/29.
69 RA Hillman and JJ Rachlinski, 'Standard-Form Contracting in the Electronic Age' (2002) 77 New York University Law Review 429, 437–440. For a reconstruction of the broader discourse in favour of standard terms, see D Caruso, 'Black Lists and Private Autonomy in EU Contract Law', in D Leczykiewicz and S Weatherill (eds), The Involvement of EU Law in Private Law Relationships (Hart Publishing, 2013) 304–306.
70 Hillman and Rachlinski (n 69) 440; Caruso (n 69) 309–310. In relation to online platforms, see Wauters, Lievens and Valcke (n 54) 255–261.
71 On the detailed background philosophies of unfair terms control, see T Wilhelmsson, 'Various Approaches to Unfair Terms and Their Background Philosophies' (2008) 14 Juridica International 51. On the rationale of the UCTD, see P Rott, 'Unfair Contract Terms', in C Twigg-Flesner (ed), Research Handbook on EU Consumer and Contract Law (Edward Elgar Publishing, 2016) 288–289. For an opposing view, emphasising reputational considerations, see, eg, LA Bebchuk and RA Posner, 'One-Sided Contracts in Competitive Consumer Markets' (2006) 104 Michigan Law Review 827.
72 On the tensions between primary law and the early phase of harmonisation of consumer law, see S Weatherill, 'The Commission's Options for Developing EC Consumer Protection and Contract Law: Assessing the Constitutional Basis' (2002) 13 European Business Law Review 497, 498–502.

Finding a compromise on what ultimately became Directive 93/13/EEC was not uncomplicated, which also finds reflection in particular drafting decisions taken by the lawmakers. The Directive showed a number of inherent contradictions, arising from the combination of concepts stemming from different national traditions.73 Crucially, however, the act laid the ground for an EU-wide fairness review of terms in B2C contracts which were not 'individually negotiated' between the parties, with the consequence that the relevant terms are not binding upon consumers. The threshold of unfairness has been set by reference to a 'significant imbalance' in the parties' rights and obligations to the detriment of the consumer, caused contrary to the requirement of 'good faith'.74 To preserve the parties' autonomy, substantive assessment does not generally extend to the so-called 'core terms', provided that transparency requirements are fulfilled,75 and consumers may choose to keep unfair terms in force as long as they do so on the basis of free and informed consent, in full awareness of the associated consequences.76 Finally, Article 7(1) of the Act explicitly refers to the UCTD's preventive goal, by requiring the Member States to ensure that adequate and effective means exist to prevent the continued use of unfair terms in business-to-consumer contracts.

Over subsequent years, the Court of Justice has elaborated on the interpretation of the Act, largely in the direction of enhanced consumer protection. The scope of substantive fairness assessment has been read rather broadly, and procedural avenues were developed to ensure that fairness control assumes a practical dimension.77 One of the notable judicial developments concerns the transparency requirement, set out in Articles 4(2) and 5 of the UCTD. Arguably, this line of reasoning could potentially mitigate at least certain dimensions of the algorithmic control crisis. Indeed, while it is true that boilerplate contracts form a prominent part of today's information economy, the range of actions that platforms may perform through complex algorithms may not necessarily be apparent from their standard terms. Disclosure duties, known for example from Directive 2011/83/EU on consumer rights (CRD), could fill this gap to some extent, but their importance is generally limited to the pre-contractual stage. By contrast, the reading of the transparency principle in the UCTD could play a part both prior to and following contract conclusion. In particular, in its reading of the UCTD, the Court of Justice has consistently stressed that information, before concluding a contract, on the terms of the contract and the consequences of concluding it is of fundamental importance for a consumer.78

73 H-W Micklitz and N Reich, 'The Court and Sleeping Beauty: The Revival of the Unfair Contract Terms Directive (UCTD)' (2014) 51 Common Market Law Review 771, 773.
74 Art 3(1) of the UCTD.
75 Art 4(2) of the UCTD. Note, however, that the level of consumer protection can be increased through Member State action: judgment of the Court of 3 June 2010, C-484/08, Caja de Ahorros, ECLI:EU:C:2010:309, operative part.
76 See, eg, judgment of the Court of 9 July 2020, C-452/18, Ibercaja Banco, ECLI:EU:C:2020:536, paras 25, 29 and the case law cited.
77 For a detailed account, see Commission Notice, 'Guidance on the interpretation and application of Council Directive 93/13/EEC on unfair terms in consumer contracts' [2019] OJ C323/4.

It is on that basis that the consumer decides whether he wishes to be bound by the terms previously drawn up by the seller or supplier.79 Moreover, as has been noted in the scholarship, the Court appears to indirectly recognise that information contained in standard terms can also be of value to consumers following contract conclusion.80 Finally, the Court has underlined that the principle of transparency has not only a formal, but also a substantive dimension, whereby the former refers, among others, to grammatical and linguistic aspects, while the latter refers to the economic consequences deriving from the contract.81

In relation to platform services, economic consequences falling under the principle of substantive transparency could be linked, among others, to the operation of automated advertising delivery mechanisms, which traders could be required to characterise in view of their impact on consumers. It is, nonetheless, debatable whether a case could be brought against the trader requesting him to specify the terms transparently and, if so, how compliance with this obligation could be verified. An explicit disclosure duty related to the use of automated decision-making and its impact on the provision of digital services would thus seem preferable,82 while the transparency principle could play a supplementary role. A further question is whether adverse consequences of a non-economic nature could not be equally relevant in relation to platform services. This could include, for example, information about experiments to which platform users can be subjected or information concerning the deployment of AI systems for the recognition of emotions.83 Despite these uncertainties, it seems that the principle of transparency in the UCTD, combined with the possibility of substantive fairness assessments of non-individually negotiated terms according to the fairness yardstick as well as disclosure duties, could contribute to increasing the level of consumer protection in the age of platforms and AI.

78 Judgment of the Court of 21 March 2013, C-92/11, RWE Vertrieb, ECLI:EU:C:2013:180, para 44; judgment of the Court of 21 December 2016, C-154/15, C-307/15 and C-308/15, Gutiérrez Naranjo, ECLI:EU:C:2016:980, para 50.
79 ibid.
80 C Leone, 'Transparency revisited – on the role of information in the recent case-law of the CJEU' (2014) 10 European Review of Contract Law 312, 322–323.
81 See, eg, judgment of the Court of 30 April 2014, C-26/13, Kásler and Káslerné Rábai, ECLI:EU:C:2014:282, paras 71–73. On the distinction between formal and substantive transparency, see J Luzak and M Junuzović, 'Blurred Lines: Between Formal and Substantive Transparency in Consumer Credit Contracts' (2019) 8 Journal of European Consumer and Market Law 97.
82 This proposal was recently put forward in M Loos and J Luzak, 'Update the Unfair Contract Terms directive for digital services' (2021) 29–30 www.europarl.europa.eu/thinktank/en/document.html?reference=IPOL_STU(2021)676006 (last accessed 5 May 2021).
83 Cf. the proposal for a bill to prohibit the usage of exploitative and deceptive practices by large online operators and to promote consumer welfare in the use of behavioural research by such providers (S.1084 – DETOUR Act), introduced in the US Senate.


D.  Contracts for the Supply of Digital Services

While the UCPD and the UCTD seek primarily to clear the market of instances of unfairness, they do not generally determine what traders ought to do in their respective fields of activity. Such a more positive dimension can be observed in another domain of the EU consumer acquis, namely the rules on the conformity of consumer goods and services with the contract. While the focus of the EU lawmaker in this regard has long remained on the sale of goods, recent years have also seen an expansion of legislative activity in relation to digital content and digital services. Thus, Directive 2019/770 (DCD)84 explicitly sets out both subjective and objective requirements for the conformity of digital content and digital services with the contract, as well as consumer remedies in the event of non-conformity. Since the Act represents a legislative novelty for both the EU and most Member States, and the national implementing measures will only begin to apply from 2022,85 the impact of the new norms on the platform economy cannot be easily predicted. However, the key premises of the Act are relatively need-oriented and as such correspond with the goal of establishing a high level of consumer protection.86

When considering the relevance of Directive 2019/770 for online platforms and the AI systems with which they operate, attention must first be drawn to the scope of the Act. The Directive applies, inter alia, to contracts under which the trader supplies or undertakes to supply a digital service to the consumer and the consumer provides or undertakes to provide counter-performance in the form of a price or personal data.87 Digital services are defined relatively broadly as services that allow the consumer to create, process, store or access data in digital form, or to share or otherwise interact with data in digital form uploaded or created by the consumer or other users of that service.88 Recital 19 explicitly provides that the Directive should cover, inter alia, services for video and audio sharing as well as social media. In respect of data as counter-performance, an exemption from the scope applies where the personal data provided by the consumer are exclusively processed by the trader for the purpose of supplying the digital content or digital service in accordance with the Directive, or for allowing the trader to comply with legal requirements to which the trader is subject, and the trader does not process those data for any other purpose. Accordingly, the exemption does not pertain to business models in which consumer data are extensively used for advert monetisation and the existence of a contractual relationship under national law is ascertained.89

84 Directive (EU) 2019/770 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the supply of digital content and digital services [2019] OJ L136/1.
85 Art 24 of the DCD.
86 Similarly Willett (n 45) 191–194, for the law of the UK.
87 Art 3(1) of the DCD.
88 Art 2(2) of the DCD.
89 On the importance of the contract definition under national law, see D Staudenmayer, 'Directive (EU) 2019/770 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the supply of digital content and digital services (Text with EEA relevance)', in R Schulze and D Staudenmayer (eds), EU Digital Law: Article-by-Article Commentary (Bloomsbury, 2020) 71–73, 79; K Sein and G Spindler, 'The new Directive on Contracts for the Supply of Digital Content and Digital Services – Scope of Application and Trader's Obligation to Supply – Part 1' (2019) 15 European Review of Contract Law 257, 253–265.

Establishing that platform services fall within the remit of Directive 2019/770 on certain aspects concerning contracts for the supply of digital content and digital services is only a starting point for assessing its importance for platforms and AI. What ultimately determines the level of consumer protection afforded by the Act are its substantive requirements of conformity. As was already signalled, the DCD operates with a mix of subjective and objective requirements, whereby the former refer to contract stipulations, while the latter describe conditions laid down in the law which digital content must respect, independently of the contract.90 Conformity should be assessed, inter alia, by considering the fitness of a digital service for the purposes for which services of the same type would normally be used, taking into account, where applicable, any existing Union and national law, technical standards or, in the absence of technical standards, applicable sector-specific industry codes of conduct. Digital services should further possess the qualities and performance features which are normal for services of the same type and which the consumer may reasonably expect, given the nature of the digital service and taking into account any public statement made by or on behalf of the trader, with limited exceptions.

The connection established between the fitness-for-purpose requirement and existing mandatory provisions, technical standards and codes of conduct merits attention, as it is capable of particularising the criterion in a way that promotes a high level of consumer protection, provided that the relevant norms are developed.91 At the same time, it is noteworthy that such a connection is limited to the fitness-for-purpose requirement and does not explicitly resurface, for example, in relation to the qualities of digital services and their performance features. As we argue below, however, applicable mandatory provisions may nonetheless be relevant for assessing consumers' reasonable expectations, which in this case determine the yardstick for assessment. The importance of public statements is also worth highlighting, as it lends contractual meaning to advertising and various image-building strategies, including in the course of crisis management. Further room for analysis exists in relation to the quality of being 'normal for services of the same type', considering the previously mentioned tendency of platform markets towards concentration. Arguably, to ensure a high level of consumer protection, an assessment of this criterion in relation to platform services cannot become purely self-referential.

Overall, considering the asymmetry of bargaining power between platform providers and consumers, the introduction of objective conformity requirements in relation to the services they offer is to be welcomed. Like in the case of the UCTD, the consumer can opt out of the protection afforded by the harmonised contract rules in relation to a particular characteristic of digital services, provided that he was specifically informed of a deviation and expressly and separately agreed to it when concluding the contract.92

90 Staudenmayer (n 89) 113; Arts 7 and 8 of the DCD, respectively.
91 Cf. M Ebers, 'Liability for Artificial Intelligence and EU Consumer Law' [2021] Journal of Intellectual Property, Information Technology and Electronic Commerce Law 218.

Affording contractual relevance to the conditions explicitly laid down in the law reflects the strong need orientation of the analysed Act and offers consumers a higher degree of protection compared to subjective requirements only. Aside from the procedural side of the Directive, which may indeed show some shortcomings93 but remains outside the scope of the present chapter, the impact of the DCD on platform services will depend on the reading of the objective requirements. This, in turn, will likely rest upon two recurring concepts in the EU acquis, namely the model of the consumer and the standard of reasonableness.

E.  Recurring Concepts

As seen from above, the existing EU acquis provides important reference points for establishing a high level of consumer protection in the digital economy shaped to a large extent by platforms and AI, but it is not free from lacunae. The open-ended nature of some of the key norms allows them to respond to changing market conditions, but may come at the expense of reduced legal certainty. What mitigates these concerns to some degree is, arguably, the underlying conceptual framework shared across the acquis and composed of recurring themes such as the consumer model and reasonable expectations. Approximation of these notions in line with the need ethic is apt to promote consumer protection goals, both in respect of the normative substance and through increased legal certainty.

i.  Consumer Model

Whenever the qualification of business conduct is linked to the perspective of the consumer, the legal assessment is necessarily guided by a particular ‘image’ of the consumer.94 Ascertaining the applicable image is therefore as important to the ultimate degree of consumer protection afforded by the norm as the type of instrument envisaged by that norm (eg, information duty, conformity requirement, prohibited practice). Over the past decades of consumer law evolution, the underlying consumer ‘images’ have developed as well. To illustrate, Directive 2005/29/EC on unfair commercial practices refers, on the one hand, to the model of an ‘average consumer’ and, on the other hand, to consumers ‘whose characteristics make them 92 Art 8(5) of the DCD. 93 Ebers (n 91) 218–219. 94 D Leczykiewicz and S Weatherill, ‘The Images of the Consumer in EU Law’, in D Leczykiewicz and S Weatherill (eds), The Images of the Consumer in EU Law: Legislation, Free Movement and Competition Law (Hart Publishing, 2016) 1.

particularly vulnerable to unfair commercial practices’. The co-existence of distinct images reflects the tensions arising from EU law’s instrumental rationality and the importance it attaches to several, partly divergent goals. In the analysed context of platforms and AI, the following two questions merit special attention: first, whether the same consumer model can be applied across different areas of the EU acquis and, second, how the relevant model should be understood in respect of data-driven platform services. Of the two consumer models referred to in the UCPD, the concept of an average consumer appears to play the primary role. In line with the case law of the Court of Justice, the notion refers to a consumer who is reasonably well-informed and reasonably observant and circumspect.95 The relevant judicial reading dates back to the first landmark cases under free movement law, when the proportionality of national restrictions was analysed.96 The notion, however, was gradually transferred to the field of legislative harmonisation, as the UCPD directly demonstrates. A similar spill-over effect has also gradually taken place in the domain of EU contract law. In respect of standard terms control, the average consumer model was accepted as a reference point in the assessment of transparency.97 References to the average consumer can also be found, for example, in the case law on conformity of goods98 and pre-contractual information disclosure.99 Due to the instrumental nature of EU law, and the accordingly lesser significance attached to the divide between private and public law, a common conceptual framework applied across a variety of instruments is not without its merit. Indeed, relying on a single, primary model across different domains has its advantages: it supports coherent application of the law and can be viewed as rational when considering different legal aspects of a single situation concerning the same consumer. However, reliance on the average consumer notion as a reference point should not undermine a high level of consumer protection. At the same time, the image of a reasonably well-informed and reasonably observant and circumspect consumer can be considered to set a comparatively high bar for consumers.100 As was argued before, there are good reasons to believe that in relation to AI-driven practices in the platform economy, which can be highly complex and opaque, a more protective approach is needed. Attention has been drawn in the scholarship

95 Judgment of the Court of 16 July 1998, C-210/96, Gut Springenheide and Tusky, ECLI:EU:C:1998:369, para 31. 96 Judgment of the Court of 6 July 1995, C-470/93, Mars, ECLI:EU:C:1995:224, para 24. 97 Judgment of the Court of 30 April 2014, C-26/13, Kásler and Káslerné Rábai, ECLI:EU:C:2014:282, para 74. See also M Durovic, ‘The Subtle Europeanization of Contract Law: The Case of Directive 2005/29/EC on Unfair Commercial Practices’ (2015) 5 European Review of Private Law 715, 722. 98 Judgment of the Court of 23 May 2019, C-52/18, Fülla, ECLI:EU:C:2019:447, para 40. 99 Judgment of the Court of 23 January 2019, C-430/17, Walbusch Walter Busch, ECLI:EU:C:2019:47, para 39. 100 V Mak, ‘The Consumer in European Regulatory Private Law’, in D Leczykiewicz and S Weatherill (eds), The Images of the Consumer in EU Law: Legislation, Free Movement and Competition Law (Hart Publishing, 2016) 384–385.

to the structural asymmetry between consumers and powerful online traders, which far exceeds mere information asymmetry.101 This leads some authors to conclude that in digital marketplaces most, if not all, consumers are potentially vulnerable.102 From this perspective, the notion of a vulnerable consumer currently set out in the EU acquis is seen as overly restrictive, as it is limited to static states, such as age or credulity.103 Accordingly, proposals have been made to introduce the language of digital asymmetry and digital vulnerability into EU consumer law and policy. The advocated shift should make it possible to better recognise the nature of vulnerability as ‘a universal state of defencelessness and susceptibility to (the exploitation of) power imbalances that are the result of increasing automation of commerce, datafied consumer-seller relations and the very architecture of digital marketplaces’.104 According to this reading, digital asymmetry is a structural phenomenon that affects all consumers and ‘cannot be overcome by providing ever more information’.105 While the language of digital vulnerability and digital asymmetry could suggest that a major overhaul of the existing consumer acquis is needed to ensure a high level of consumer protection in digital markets, we agree with the authors of the reported analyses that – until that happens – existing notions can and should be read in a way that promotes stronger protection under new circumstances.106 Although it is true that the notion of the vulnerable consumer is probably too narrow, as it primarily refers to static states, the concept of an average consumer has arguably a more dynamic nature. As is explicitly recognised in recital 18 of the UCPD, what qualifies as being ‘reasonably well-informed and reasonably observant and circumspect’ should be determined ‘taking into account social, cultural and linguistic factors’. The same recital further notes that, in applying the notion, ‘national courts and authorities will have to exercise their own faculty of judgement (…) to determine the typical reaction of the average consumer in a given case’. A behavioural assessment forms part of the judicial reading, even if it is carried out not in respect of an individual situation, but rather in respect of a given type of service or practice. The assessment is therefore simplified, but leaves the national court a margin of discretion in assessing what an average consumer is deemed to know and how he behaves in the relevant type of setting.107 Finally, the average consumer notion can be ‘personalised’108 by referring to the concept of an average member of a group to whom a practice is directed. This appears to be the preferred approach of 101 Helberger et al (n 49), 51. 102 ibid 5. 103 ibid 8–10. 104 ibid 5. 105 ibid 51. 106 The authors maintain that both concepts ‘could be anchored in the legislative framework UCPD de lege lata’, but also support legislative reforms, see ibid, 79. 107 Judgment of the Court of 3 February 2021, C-922/19, Stichting Waternet, ECLI:EU:C:2021:91, para 34. 108 P Hacker, ‘Manipulation by Algorithms. Exploring the Triangle of Unfair Commercial Practice, Data Protection, and Privacy Law’ [2021] European Law Journal (forthcoming).

the Court of Justice, which has so far provided very limited guidance on the notion of a vulnerable consumer, but has instead expanded the idea of directing from the UCPD to the rules on consumer contracts. Specifically, in Walbusch Walter Busch the Court referred to the perspective of the ‘average consumer targeted by the communication’ in assessing whether the means of communication allows limited space or time to display information.109 This choice is notable since, first, unlike the UCPD, Directive 2011/83/EU on consumer rights does not explicitly recognise the relevance of directing and, second, the Court refrained from adopting the ‘vulnerable consumer’ as a yardstick, despite the reference to this notion in the CRD’s preamble110 and in the interpretation proposed by the Advocate General.111 To conclude, the average consumer model has gradually been applied across the different fields of the EU acquis and features very prominently in the case law of the Court of Justice. While the notion does seem to set a high bar for consumers, in our view it should be read dynamically in relation to given types of services or practices, in line with consumer protection goals. We agree that a digital asymmetry exists in the current digital landscape driven by platforms and AI, justifying the use of stronger instruments to ensure a high level of consumer protection. As the world changes, however, so can the assessment of business conduct, and the notion of the average consumer is well suited to capturing this fluidity.112 Such a conclusion does not imply that the notion of a vulnerable consumer should be marginalised. On the contrary, we believe that business models geared towards the exploitation of vulnerabilities – be they static or situational – should be considered unfair in all circumstances.

ii.  Standard of Reasonableness

Another recurring theme of relevance to the level of consumer protection across the EU acquis is the standard of reasonableness, and especially the notion of reasonable expectations. As was signalled before, the notion forms a prominent part of the objective conformity requirements in Directive 2019/770 on digital content and digital services, including in respect of the qualities and performance features. Recital 46 of the DCD specifies further that the standard of reasonableness ‘should be objectively ascertained, having regard to the nature and purpose of the digital content or digital service, the circumstances of the case and to the usages and practices of the parties involved’. The assessment of reasonableness therefore displays a similarity to the assessment of the consumer model. Against this background, a question can be asked about the factors affecting the assessment of reasonable expectations in relation to data-driven platform 109 Judgment of the Court of 23 January 2019, C-430/17, Walbusch Walter Busch, para 39. 110 Recital 34 of the CRD. 111 Opinion of Advocate General Tanchev delivered on 20 September 2018, C-430/17, Walbusch Walter Busch, ECLI:EU:C:2018:759, paras 73–74. 112 Scheuerer (n 58) 4.

services. Arguably, no clear benchmark exists of what is ‘normal’ in relation to dynamically changing and largely opaque platforms and of what specific qualities consumers should be entitled to expect.113 One can thus wonder whether existing and emerging mandatory provisions, technical standards and codes of conduct could assume a role in this regard, beyond the analysis of the fitness for purpose criterion. On the one hand, as was said, the DCD makes no explicit connection between provisions of this kind and the qualities and performance features. On the other hand, such an omission may be due to the fact that relevant stipulations instead assume a role in the assessment of reasonable expectations. An analysis of other parts of the EU consumer acquis may turn out to be helpful in answering this question. Beyond the DCD, the standard of reasonableness forms part of, among others, the requirement of professional diligence, which is a central yardstick for analysis in the UCPD. To recall, Article 2(h) of the UCPD defines professional diligence as ‘the standard of special skill and care which a trader may reasonably be expected to exercise towards consumers, commensurate with honest market practice and/or the general principle of good faith in the trader’s field of activity’. Article 7(5) of the Directive further establishes a relation between misleading omissions under the UCPD and non-compliance with information requirements set out in other EU acts. Moreover, recital 20 of the Act additionally refers to mandatory requirements and codes of conduct. The wording, especially for codes of conduct, is not straightforward. One could argue, moreover, that the analysis of professional diligence is more detached from the perspective of the reference consumer than is that of conformity requirements in contract law. In both cases, however, the standard is objectively ascertained and the overall legal rationality remains instrumental. In either case, as the UCPD shows, compliance or non-compliance with otherwise applicable stipulations does not, in and of itself, determine the qualification of conduct under a given set of norms, but can rather inform its assessment. A similarly fluid solution has been preferred by the Court of Justice in its case law concerning the relationship between the UCPD and the UCTD. Specifically, in Pereničová, the Court held that a finding that a commercial practice is unfair has no direct effect on the assessment of the validity of a consumer contract under the UCTD, but can be one element among others on which the competent court may base its assessment of the unfairness of contract terms.114 A similar approach could also be developed for the analysis of conformity requirements under the DCD.

113 More generally in respect of digital content, see MBM Loos, N Helberger, L Guibault, C Mak, L Pessers, KJ Cseres, B van der Sloot and R Tigner, ‘Analysis of the applicable legal frameworks and suggestions for the contours of a model system of consumer protection in relation to digital content contracts. Final report: Comparative analysis, law & economics analysis, assessment and development of recommendations for possible future rules on digital content contracts’ (2011) 277 hdl.handle.net/11245/1.345662 (last accessed 5 May 2021). 114 Judgment of the Court of 15 March 2012, C-453/10, Pereničová and Perenič, ECLI:EU:C:2012:144, paras 43–44. See also Durovic (n 97) 721, 730–732.

The reading of the standard of reasonableness in the broader consumer acquis can also play a role beyond establishing the importance of existing mandatory provisions, technical standards and codes of conduct. The recent reform of the EU consumer rules provides a good example. As was mentioned before, the Modernisation Directive has moderately extended the list of commercial practices prohibited in all circumstances set out in Annex I to the UCPD. From the perspective of reasonableness, the newly added point 23b is worth considering. The provision establishes a per se prohibition in respect of ‘stating that reviews of a product are submitted by consumers who have actually used or purchased the product without taking reasonable and proportionate steps to check that they originate from such consumers’. Recital 47 of the Modernisation Directive clarifies that reasonable and proportionate steps in this regard could include ‘technical means to verify the reliability of the person posting a review, for example by requesting information to verify that the consumer has actually used or purchased the product’. One can thus plausibly assume that the standard of reasonableness applied to AI-driven platform services may require the operators to take technical steps, for example by adjusting their personalisation algorithms.
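
What such ‘reasonable and proportionate steps’ might amount to in practice can be sketched in a few lines of code. The following minimal illustration is purely hypothetical – the data model and the order-ledger lookup are our own simplifying assumptions, not a prescription derived from the Directive – but it captures the core of the verification step contemplated by point 23b: a review may only be presented as coming from a verified purchaser where a matching purchase record actually exists.

```python
from dataclasses import dataclass

@dataclass
class Review:
    reviewer_id: str
    product_id: str
    text: str

# Hypothetical order ledger: (customer_id, product_id) pairs for completed
# purchases. A real platform would query its order database instead.
PURCHASES = {("alice", "p-123"), ("bob", "p-456")}

def may_label_as_verified(review: Review) -> bool:
    """Allow the 'verified purchase' label only where the ledger records
    a matching purchase (cf point 23b of Annex I to the UCPD as amended
    by the Modernisation Directive)."""
    return (review.reviewer_id, review.product_id) in PURCHASES

# A review of a product the consumer actually bought may carry the label;
# a review of a product she never bought may not.
assert may_label_as_verified(Review("alice", "p-123", "Great product"))
assert not may_label_as_verified(Review("alice", "p-456", "Also great"))
```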

V.  Regulatory Developments: Proposed Artificial Intelligence and Digital Services Acts

Even though the above-discussed EU consumer acquis does provide important reference points, which could contribute to addressing the risks associated with platforms and AI, the analysed legal acts also display shortcomings at both the material and the procedural level, which ultimately reduce the afforded degree of consumer protection. Due to the limited frame of the analysis, our focus has remained on selected material aspects of consumer law, drawing attention to relationships between the rules on consumer contracts and other parts of the acquis. As we have shown, existing norms do cover important dimensions of platform services and AI, and are relatively need-oriented, yet significant room for improvement remains in terms of clarity and comprehensiveness. This, in turn, invites further reflection as to the potential role of the ongoing regulatory developments in the EU, focused specifically on platforms and AI. Indeed, an increased regulatory focus on online platforms and AI featured prominently among the political priorities of Ursula von der Leyen, who in December 2019 took office as President of the European Commission.115 Approximately a year later, a proposal for a so-called Digital Services Act (DSA) was put forward, followed by a proposal for an Artificial Intelligence Act. While the proposals are currently subject to legislative negotiations and can still undergo 115 U von der Leyen, ‘A Union that strives for more: My agenda for Europe’ (2019) 13 op.europa.eu/s/oOKu (last accessed 20 April 2021).

significant changes, it is worth drawing attention to their central features from a consumer protection perspective. Overall, as far as platforms’ relations with consumers are concerned, the following two types of intervention are primarily envisaged in the AI proposal: per se prohibitions and information duties. As for the former, Article 5 of the proposal prohibits, among other things, the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness, or that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person (pertaining to the group) in a manner that causes or is likely to cause that person or another person physical or psychological harm. Proposed Article 52 additionally sets out transparency obligations for certain AI systems. In particular, AI systems intended to interact with natural persons should be designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, persons exposed to emotion recognition or biometric categorisation systems should be informed about those systems’ operation. As seen from above, the provisions of the proposed AI Act relevant to consumer interactions through online platforms are fairly narrow. The proposed per se prohibitions are subject to numerous qualifications and do not extend to the protection of economic interests. A more extensive part of the proposal pertains to high-risk systems. AI systems deployed by online platforms, however, are not generally categorised as such. In respect of non-high-risk systems, fulfilling additional requirements concerning, for example, the quality of data is merely encouraged on the basis of codes of conduct. The omission of global digital services providers from the scope of the proposed AI Act could be explained by a possible legislative decision to address platform-specific issues as part of the DSA. Indeed, the proposal does engage with some of the topics discussed above, like the personalisation of digital advertising. However, the assessment of the proposal from a consumer protection perspective is not straightforward. On the one hand, information duties are again the most prominent regulatory technique applied. Thus, Article 24 of the proposed DSA envisages an obligation for online platforms to identify each specific advertisement as such and to inform its recipients on whose behalf the advertisement is displayed and which main parameters were used to determine its recipients. On the other hand, the situation is to some extent different in respect of ‘very large online platforms’.116 As far as advertising is concerned, the information approach continues to be dominant, but assumes a distinctive function. Specifically, the proposed Article 30 includes additional transparency obligations for very large online platforms in the form of an advertisement repository, which could allow an external assessment of the promotional messages displayed.117 Moreover, an 116 Art 25 of the proposed DSA. 117 Cf. the proposed definition of advertisement in Art 2(n), going beyond commercial communications.

interesting solution has been proposed in respect of recommender systems, in which case very large online platforms would be required to set out in their terms and conditions not only the main parameters used in their systems, but also the available options to modify or influence such parameters, including at least one option which is not based on profiling.118 Finally, the proposal envisages an obligation for very large online platforms to put in place reasonable, proportionate and effective mitigation measures, tailored to the identified specific systemic risks stemming from the functioning and use made of their services in the Union.119 It is notable, however, that the three categories of systemic risks which the proposal explicitly mentions as ones that the platforms should analyse in depth do not relate to platforms’ own activities, but rather to a potential misuse of their services by third parties.120 As in the case of the proposed AI Act, a significant scope for further action in the public interest is left to codes of conduct.121 Be that as it may, the proposed Digital Services Act is certainly noteworthy for proposing regulatory solutions which can be located at different points on the continuum between the need and self-reliance ethics, with a platform-specific focus.
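
The recommender-system obligation can likewise be made concrete with a short sketch. The option names and data structure below are hypothetical – the proposed Article 29 prescribes what must be disclosed and offered, not how it is to be implemented – but they illustrate what exposing the main ranking parameters together with at least one non-profiling option could look like:

```python
# Hypothetical ranking options a very large online platform might expose
# under the proposed Art 29 DSA: the main parameters of each recommender
# option, with at least one option that does not rely on profiling.
RECOMMENDER_OPTIONS = {
    "personalised": {
        "uses_profiling": True,
        "main_parameters": ["viewing history", "inferred interests", "engagement"],
    },
    "chronological": {
        "uses_profiling": False,  # the mandatory non-profiling alternative
        "main_parameters": ["recency of publication"],
    },
}

def select_ranking(option_name: str) -> dict:
    """Return the ranking option chosen by the user, who must be able
    to modify or influence the parameters."""
    return RECOMMENDER_OPTIONS[option_name]

# A user opting out of profiling switches to the chronological feed.
assert select_ranking("chronological")["uses_profiling"] is False
```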

VI.  Insights for EU Consumer (Contract) Law

Recent regulatory developments focused more specifically on online platforms and AI do address certain aspects of consumer protection and confirm the importance of strong instruments in the new technological landscape. At the same time, both proposals are themselves far from comprehensive and highlight the continued relevance of existing norms.122 Accordingly, existing consumer (contract) law will likely remain central to consumer protection in the age of platforms and AI. In the remaining part of this chapter, we reflect on the future of consumer protection, drawing insights from the previous analysis and the observed regulatory developments.

A.  Per se Prohibitions

The proposed AI Act supports the use of per se prohibitions in relation to algorithmic practices which can be seen as particularly problematic. As we have argued before, bans of this kind have the advantage of providing additional legal certainty and exerting a strong preventive effect. While they do not in themselves respond to the algorithmic control crisis, there are good reasons to believe that they would deter traders from channelling significant resources towards developing systems

118 Art 29(1) of the proposed DSA. 119 Art 27(1) of the proposed DSA. 120 Art 26(1) and recital 57 of the proposed DSA. 121 Arts 35 and 36 of the proposed DSA. 122 See, eg, recital 28 of the proposed AI Act.

which are clearly illegal. The benefits of using such systems would have to be weighed against both the investment made and the potential fines imposed by competent authorities, should an infringement come to light. As was mentioned, however, the limited per se bans envisaged in the proposal for the AI Act provide only a small improvement to the level of consumer protection. As such, they should be seen as an inspiration for further debate and a possible reform of the UCPD. The first tangible proposals for an amended blacklist of B2C commercial practices have begun to emerge in the scholarship. In particular, Helberger et al suggest that the list should extend, among others, to the building of digital exploitation strategies that claim to serve non-economic interests whereas the overall purpose is commercial, or that establish, maintain and reinforce situational monopolies, as well as to the use of psychographic profiles to exercise emotional or psychological pressure with the goal of selling products.123 The blacklist should continue to be complemented, in our view, by an elaborated reading of the provisions on misleading and aggressive practices as well as the general clause, taking account of the dynamic nature of the consumer model. Considering the slow pace of legal adaptation, enhanced consumer protection in the latter respect could be achieved via non-legislative guidance.124

B.  Fairness of Standard Terms

As the reading of the transparency principle in the UCTD becomes more consumer-oriented, the UCTD could assume a growing role in relation to standardised platform contracts and contribute to the opening of platforms’ organisational and technological black boxes. Such a direction finds support in the recent legislative developments and the tendency to explicitly set out the topics which should be covered in traders’ terms, as well as some of the related consequences. Items emerging from these platform-specific instruments could guide the assessment of substantive transparency in platform contracts also under the UCTD. A further question is whether non-economic consequences to the detriment of the consumer should also be seen as relevant in respect of transparency. This could include, for example, emotional experiments to which platform users can be subject. In this case, simple information that emotional analysis is used could be enough to show compliance with the proposed AI Act, but may not suffice under the UCTD. In any case, de lege lata, at least the economic consequences of emotional surveillance seem to be covered by the principle of substantive transparency. The analysis of the UCTD and some of the recent legislative developments also shows the potential of complementing protection through information with 123 Helberger et al (n 49) 79. 124 See also Communication from the Commission to the European Parliament and the Council, New Consumer Agenda. Strengthening consumer resilience for sustainable recovery, COM(2020) 696 final, 10, 13.

need-oriented instruments, so as to increase the level of consumer protection, particularly in concentrated markets. As for the UCTD, the principle of transparency remains intrinsically connected to the fairness analysis of non-individually negotiated contract terms, with the possible consequence of deeming such terms not binding on the consumer (individual dimension) as well as having them made subject to an injunction order (collective dimension). Also in this respect, insights emerging from platform-specific instruments, such as the proposed DSA, could serve as a reference point. For example, whether or not consumers are given the option of changing parameters in order to avoid profiling could be relevant to the fairness assessment.

C.  Conformity with the Contract

Last but not least, attention should be drawn to the possible role of the recently adopted Directive 2019/770 on certain aspects concerning contracts for the supply of digital content and digital services. The objective requirements of conformity introduced by the Act could contribute to enhancing the level of consumer protection, particularly if the standard of reasonableness is read in an inclusive manner, taking account of mandatory provisions, technical standards and, potentially, codes of conduct that serve consumer interests. As seen from the recent developments, all three types of norms are likely to play a part in the future framework for platforms and AI.125 It does not seem overly far-fetched to connect these developments also to the EU consumer acquis, including the law of consumer contracts. After all, contract law in the EU has a more instrumental and regulatory nature, and opting out of the level of protection in consumer contracts remains possible, provided that this is the result of consumers’ free and informed consent. To ensure that the conditions for such decision-making are present in an information economy driven by platforms and AI, both the rules on prohibited practices and the information duties will have a role to play. Finally, the provisions aimed at overcoming the algorithmic control crisis, such as the proposals on platform auditing and ad repositories, can also indirectly improve the level of consumer protection, including under contract law. As experience has shown, increased scrutiny of platform providers – including through the accounts of whistle-blowers – may prompt them to make statements about the efforts taken to respond to the mounting concerns. Statements of this kind can form part of the objective conformity requirements for digital content and services, which in turn affords them added legal meaning.

125 Both technical standards and codes of conduct feature prominently in the proposed DSA and AI Act. In respect of technical standardisation, the development of ISO standards is also noteworthy, see, eg, ISO/IEC JTC 1/SC 42.


13

Artificial Intelligence and Anticompetitive Collusion: From the ‘Meeting of Minds’ towards the ‘Meeting of Algorithms’?

GIUSEPPE COLANGELO

I. Introduction

Within the mass of literature devoted to describing and analysing the impact of artificial intelligence (AI) on the modern economy and society, a significant part is represented by studies addressing the use of algorithms by firms to predict market trends, customise services and set prices. The potential of prediction machines is considered immense. Their success depends on training data, ie the inputs needed in order to start producing reasonable outcomes, and on feedback data, which is obtained by mapping actual outcomes to the input data that generated the predictions of those outcomes, and which enables an algorithm to make better predictions over time.1 By incorporating feedback data, prediction machines are able to learn from outcomes, hence improving the quality of the next prediction. Apparently, we live in the age of algorithms. Indeed, decision-making is increasingly transferred to algorithms. As consumers, we are surrounded by connected devices that make independent decisions and are guided by digital personal assistants. Hence, we should understand how their computational process works2 and probably react by becoming algorithmic consumers.3 On the supply side, the widespread use of algorithms is also affecting the competitive landscape in

1 A Agrawal, J Gans and A Goldfarb, ‘How to Win with Machine Learning’ [2020] Harvard Business Review hbr.org/2020/09/how-to-win-with-machine-learning (last accessed 3 March 2021). 2 A Gal, ‘It’s a Feature, not a Bug: On Learning Algorithms and what they teach us’ (2017) Background note, OECD Roundtable on Algorithms and Collusion www.oecd.org/daf/competition/algorithms-and-collusion.htm (last accessed 3 March 2021). 3 MS Gal and N Elkin-Koren, ‘Algorithmic Consumers’ (2017) 30 Harvard Journal of Law and Technology 309.

which firms operate.4 According to the European Commission’s report on the e-commerce sector, a growing number of firms are using algorithms to dynamically price their products, namely to track the online prices of competitors and automatically adjust their own prices according to the observed prices of the latter.5 Along with significant benefits and efficiencies, the wide-scale use of algorithms, in particular those used for dynamic price setting, raises competition concerns.6 Notably, pricing software may facilitate collusive outcomes and even lead to a new form of collusion. In particular, pricing algorithms may make explicit collusive agreements more stable, by making it easier to monitor prices and thereby limiting the incentives to deviate or helping to detect deviations, and they may promote new forms of tacit collusion by triggering automated coordination independently of any human intervention, with algorithms even autonomously learning to play collusive strategies (so-called algorithmic collusion). Indeed, pricing algorithms show different levels of sophistication, ranging from those designed to follow parameters chosen by humans to more advanced types which, rather than being programmed to solve a specific problem, learn from data and use the features at their disposal to perform a task (self-learning algorithms).7 Within the latter scenario, three categories of machine learning algorithms are commonly identified: supervised learning, unsupervised learning and reinforcement learning. While in the first two cases algorithms are provided with a static training set of data (in supervised learning, the dataset is also annotated with correct answers), reinforcement learning algorithms rely on a

4 OECD, ‘Algorithms and Collusion: Competition Policy in the Digital Age’ (2017) www.oecd.org/competition/algorithms-collusion-competition-policy-in-the-digital-age.htm (last accessed 3 March 2021). 5 European Commission, ‘Final report on the E-commerce Sector Inquiry’, COM(2017) 229 final, para 13. See also UK Competition and Markets Authority, Pricing Algorithms (2018) 17–18 assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/746353/Algorithms_econ_report.pdf (last accessed 3 March 2021), finding that pricing algorithms have become prevalent within some online retail markets: ‘Not only are large retailers such as Amazon taking advantage of algorithms to re-price their goods, but so are smaller online retailers’; and Portuguese Competition Authority, ‘Digital ecosystems, Big Data and Algorithms’ (2019) 43–45 www.concorrencia.pt/vPT/Estudos_e_Publicacoes/Estudos_Economicos/Outros/Documents/Digital%20Ecosystems,%20Big%20Data%20and%20Algorithms%20-%20Issues%20Paper.pdf (last accessed 3 March 2021), reporting that 37% of firms are using monitoring algorithms and 47.4% of firms systematically track the online prices of their competitors. Among the inquired firms that reported using software to systematically track the online prices of their competitors, 78.6% noted that they adjust their prices in response to changes in the online prices of their competitors. However, only 7.9% of the inquired firms reported using software that sets prices automatically. 6 European Commission (n 5) paras 13 and 33. 7 Gal (n 3); UK Competition and Markets Authority (n 5) 9–12. See also Autorité de la Concurrence and Bundeskartellamt, ‘Algorithms and Competition’ (2019) 9–11 www.bundeskartellamt.de/SharedDocs/Meldung/EN/Pressemitteilungen/2019/06_11_2019_Algorithms_and_Competition.html (last accessed 3 March 2021), distinguishing between fixed algorithms and self-learning algorithms; and Spanish National Commission on Markets and Competition and Catalan Competition Authority, ‘Artificial Intelligence and Competition’ (2020) 8–9 acco.gencat.cat/web/.content/80_acco/documents/arxius/actuacions/20200930_CNMCS-AND-ACCOS-JOINT-CONTRIBUTION-TO-THE-PUBLIC-CONSULTATION-ON-AI.pdf (last accessed 3 March 2021).

process of experimentation, receiving feedback from a model of the environment. A specific class of reinforcement learning algorithms are Q-learning algorithms, which select the optimal policy learned from their previous actions (a trial-and-error process), with no model of the environment. Because antitrust rules have been designed to deal with human facilitation of coordination, they require some form of mutual understanding among firms (‘meeting of the minds’), focusing on the means of communication used by players in order to coordinate. Mere interdependent conduct or collusion without communication (conscious parallelism) is lawful. Hence, competition policy has traditionally struggled with tacit collusion. The main concern is that algorithms (in particular, self-learning algorithms) may amplify the oligopoly problem, expanding the grey area between unlawful explicit collusion and lawful tacit collusion.8 Indeed, Q-learning algorithms are expected to expand this blind spot by coordinating independently of human intervention and even autonomously learning to collude without communicating with one another.9 Against this background, the risks posed by algorithmic collusion have fuelled a lively debate. Two approaches have emerged. Some scholars consider AI collusion to be a realistic scenario and question the ability of current antitrust rules to deal with algorithm-facilitated coordination.10 In a world that has dispensed

8 OECD (n 4) 25 and 34–36. 9 J Asker, C Fershtman and A Pakes, ‘Artificial Intelligence and Pricing: The Impact of Algorithm Design’ (2021) NBER Working Paper No. 28535 www.nber.org/papers/w28535 (last accessed 8 March 2021); E Calvano, G Calzolari, V Denicolò and S Pastorello, ‘Artificial Intelligence, Algorithmic Pricing and Collusion’ (2020) 110 American Economic Review 3267; E Calvano, G Calzolari, V Denicolò and S Pastorello, ‘Algorithmic Pricing: What Implications for Competition Policy?’ (2019) 55 Review of Industrial Organization 1. See also ZY Brown and A MacKay, ‘Competition in Pricing Algorithms’ (2022) American Economic Journal: Microeconomics (forthcoming), arguing that algorithmic pricing may meaningfully increase prices even in markets with several firms in competitive equilibrium, hence if policymakers are concerned that algorithms will raise prices, then the concern is much broader than that of collusion; K Hansen, K Misra and M Pai, ‘Algorithmic Collusion: Supra-competitive Prices via Independent Algorithms’ (2021) 40 Marketing Science 1, suggesting that collusion is likely even when algorithms do not observe each other’s prices; and T Klein, ‘Autonomous algorithmic collusion: Q-learning under sequential pricing’ (2021) 52 The RAND Journal of Economics 538. 10 See, eg, I Abada and X Lambin, ‘Artificial intelligence: Can seemingly collusive outcomes be avoided?’ (2020) papers.ssrn.com/sol3/papers.cfm?abstract_id=3559308 (last accessed 5 March 2021); Asker, Fershtman and Pakes (n 9); S Assad, R Clark, D Ershov and L Xu, ‘Algorithmic Pricing and Competition: Empirical Evidence from the German Retail Gasoline Market’ (2020) CESifo Working Paper No. 8521 www.cesifo.org/en/publikationen/2020/working-paper/algorithmicpricing-and-competition-empirical-evidence-german (last accessed 5 March 2021); F Beneke and M-O Mackenrodt, ‘Artificial Intelligence and Collusion’ (2019) 50 IIC 109; E Calvano, G Calzolari, V Denicolò, JE Harrington and S Pastorello, ‘Protecting consumers from collusive prices due to AI’ (2020) 370 Science 1040; A Ezrachi and M Stucke, Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy (Harvard University Press, 2016); MS Gal, ‘Algorithms as Illegal Agreements’ (2019) 34 Berkeley Technology Law Journal 67; JE Harrington, ‘Developing Competition Law for Collusion by Autonomous Price-Setting Agents’ (2018) 14 Journal of Competition Law and Economics 331; A Heinemann and A Gebicka, ‘Can Computers Form Cartels? About the Need for European Institutions to Revise the Concertation Doctrine in the Information Age’ (2016) 7 Journal of European Competition Law & Practice 431; J Maylahn and A den Boer, ‘Learning to collude in a pricing duopoly’ (2022) Manufacturing & Service Operations Management (forthcoming); SK Mehra, ‘Antitrust and the Robo-Seller: Competition in the Time of Algorithms’ (2016) 100 Minnesota Law Review 1323; A Spiridonova and E Juchnevicius, ‘Price algorithms as a threat to competition under the conditions of digital economy: Approaches to antimonopoly legislation of BRICS countries’ (2020) 7 BRICS Law Journal 94; G Zheng and H Wu, ‘Collusive Algorithms as Mere Tools, Super-Tools or Legal Persons’ (2019) 15 Journal of Competition Law & Economics 123.

with the need for meetings, conversations, and price announcements, current antitrust rules appear unfit to detect and challenge these new forms of collusion.11 In short, as ominously foreseen by Ezrachi and Stucke, the increasing use of algorithms will disrupt antitrust law, eventually leading to the end of competition as we know it.12 Other scholars point to the lack of evidence, downplaying algorithmic collusion as merely speculative and considering this scenario to be based on strict underlying assumptions, and hence very difficult to achieve.13 From this perspective, the expanding use of algorithms raises issues familiar to antitrust enforcers that are well within the existing canon. As argued by Maureen Ohlhausen, former US FTC Commissioner, ‘there is nothing inherently suspect about using computer algorithms to look carefully at the world around you before participating in markets. So, from my perspective, if conduct was unlawful before, using an algorithm to effectuate it will not magically transform it into lawful behavior. Likewise, using algorithms in ways that do not offend traditional antitrust norms is unlikely to create novel liability scenarios.’14

Somewhere between these two approaches, policy makers and competition authorities have so far endorsed a wait-and-see approach. According to the UK Competition and Markets Authority (CMA), the mechanisms by which algorithms could have an additional impact beyond traditional risk factors are quite speculative, and algorithmic pricing is more likely to exacerbate ‘traditional’ risk factors (such as transparency and the speed of price setting), thereby facilitating 11 Gal (n 10) 116. 12 Ezrachi and Stucke (n 10). 13 See, eg, A Gautier, A Ittoo and P Van Cleynenbreugel, ‘AI algorithms, price discrimination and collusion: a technological, economic and legal perspective’ (2020) 50 European Journal of Law and Economics 405; A Ittoo and N Petit, ‘Algorithmic Pricing Agents and Tacit Collusion: A Technological Perspective’ in H Jacquemin and A De Streel (eds), L’intelligence artificielle et le droit (Larcier, 2017) 241; J Johnson and D Sokol, ‘Understanding AI Collusion and Compliance’, forthcoming in D Sokol and B van Rooij (eds), Cambridge Handbook of Compliance; N Petit, ‘Antitrust and Artificial Intelligence: A Research Agenda’ (2017) 8 Journal of European Competition Law & Practice 361; T Schrepel, ‘Collusion by Blockchain and Smart Contracts’ (2019) 33 Harvard Journal of Law & Technology 117; U Schwalbe, ‘Algorithms, Machine Learning, and Collusion’ (2019) 14 Journal of Competition Law & Economics 568. 14 MK Ohlhausen, ‘Should We Fear The Things That Go Beep in the Night? Some Initial Thoughts on the Intersection of Antitrust Law and Algorithmic Pricing’ (2017) 11 www.ftc.gov/public-statements/2017/05/should-we-fear-things-go-beep-night-some-initial-thoughts-intersection (last accessed 5 March 2021). See also L Bernhardt and R Dewenter, ‘Collusion by code or algorithmic collusion? When pricing algorithms take over’ (2020) 16 European Competition Journal 312; and P Van Cleynenbreugel, ‘Article 101 TFEU’s Association of Undertakings Notion and Its Surprising Potential to Help Distinguish Acceptable from Unacceptable Algorithmic Collusion’ (2020) 65 Antitrust Bulletin 423.

collusion in markets which are already susceptible to human coordination.15 In a similar vein, the French and German antitrust authorities, as well as the UK Digital Competition Expert Panel, have concluded that, in the situations considered so far, the current legal framework is sufficient to tackle possible competitive concerns, without disregarding the possibility of revising the antitrust toolkit and regime should further evidence of algorithmic collusion emerge.16 In brief, this approach is well expressed by Margrethe Vestager, the Executive Vice-President of the European Commission in charge of antitrust rules and digital policy: ‘It’s true that the idea of automated systems getting together and reaching a meeting of minds is still science fiction …. But we do need to keep a close eye on how algorithms are developing … so that when science fiction becomes reality, we’re ready to deal with it.’17 Nonetheless, the European Commission published an open public consultation on the need for a possible new competition tool that, among several policy options, would have allowed it to intervene when a structural lack of competition prevents the market from functioning properly, such as in oligopolistic market structures with an increased risk of tacit collusion, including markets featuring increased transparency due to algorithm-based technological solutions.18 Against this backdrop, this chapter aims to address this controversial issue, investigating whether current antitrust rules are suited to facing the new challenges posed by AI, whether algorithmic interactions (‘meeting of algorithms’) could be treated similarly to a ‘meeting of minds’, or whether new regulatory tools are needed. The chapter is structured as follows. Section A illustrates the economic and legal framework for understanding collusion. Section B sets the scene, describing different collusive scenarios in which algorithms may play a role and potential challenges for competition policy. Section C analyses approaches and solutions to address the risks posed by algorithmic collusion. Section D concludes by arguing that existing antitrust rules are still fit for the task.
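
Before turning to the economic and legal framework, it may help to make the notion of a self-learning pricing agent concrete. The following minimal Q-learning sketch is purely illustrative – the demand function, price grid and parameters are our own simplifying assumptions, and the rival is randomised for brevity, whereas in the studies cited above the rival is typically a second learning agent – but it shows the trial-and-error logic described in this introduction: the agent learns a value for each (observed rival price, own price) pair from rewards alone, with no model of the market.

```python
import random

PRICES = [1, 2, 3, 4, 5]               # discrete price grid (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate

def profit(own: int, rival: int) -> int:
    """Stylised demand: the cheaper seller captures the market."""
    if own < rival:
        return own * 10
    if own == rival:
        return own * 5
    return 0

# Q[state][price]: the state is simply the rival's last observed price.
Q = {s: {p: 0.0 for p in PRICES} for s in PRICES}

def choose_price(state: int) -> int:
    """Epsilon-greedy action selection: mostly exploit, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(PRICES)
    return max(Q[state], key=Q[state].get)

state = random.choice(PRICES)          # rival's last price
for t in range(100_000):
    own = choose_price(state)
    rival = random.choice(PRICES)      # placeholder rival (see lead-in)
    reward = profit(own, rival)
    # Standard Q-learning update: move the estimate towards the observed
    # reward plus the discounted value of the best action in the next state.
    best_next = max(Q[rival].values())
    Q[state][own] += ALPHA * (reward + GAMMA * best_next - Q[state][own])
    state = rival
```

Nothing in the code refers to an agreement or to communication with the rival: whatever pricing pattern emerges is learned purely from rewards, which is precisely why conduct of this kind sits uneasily with the communication-focused notions discussed in the next section.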

A.  Collusion: Economic Theory and Antitrust Rules

Collusion is commonly described as any form of agreement among rivals to maximise joint profits by coordinating prices and output. Indeed, while a monopolistic 15 UK Competition and Markets Authority (n 5) 48. 16 Autorité de la Concurrence and Bundeskartellamt (n 7); UK Digital Competition Expert Panel, ‘Unlocking digital competition’ (2019) assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/785547/unlocking_digital_competition_furman_review_web.pdf (last accessed 5 March 2021). 17 M Vestager, ‘Algorithms and Competition’ (2017) remarks at the Bundeskartellamt 18th Conference on Competition ec.europa.eu/competition/speeches/index_theme_17.html (last accessed 5 March 2021). 18 European Commission, ‘New Competition Tool’, Inception impact assessment (2020) ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12416-New-competition-tool (last accessed 5 March 2021).

firm may unilaterally maximise its profits, in an oligopolistic market the profit maximisation strategy requires a joint effort and certain conditions which facilitate coordination among competing firms. As game theory has demonstrated, since firms’ decisions are interdependent in an oligopolistic scenario, a collusive outcome may be achieved not only through explicit agreements, but also tacitly, with each participant deciding its own strategy independently of its rivals. However, despite firms being aware of their mutual interdependence in an oligopolistic market, the cooperative strategy is threatened by strategic behaviours, because each player has an incentive to cheat and deviate from the common policy by undercutting rivals’ prices. Therefore, the collusive equilibrium is the result of a reward-punishment scheme, meaning that, if oligopolistic players want to maximise their profits, they need not only to share a common policy, but also to safeguard the agreement by monitoring each participant’s adherence to it and by punishing any deviations. Some factors and market conditions can support the stability of the collusive equilibrium over time, such as market concentration, the existence of barriers to entry, firms’ cost structures and market shares, product differentiation, demand patterns and depressed market conditions, the frequency of interaction, and the degree of market transparency.19 Against this backdrop, antitrust law challenges the means used by market players to reach a collusive outcome, rather than prohibiting collusion as such. Notably, in Europe, Article 101 TFEU refers to agreements between undertakings, decisions by associations of undertakings and concerted practices, while section 1 of the US Sherman Act applies to ‘every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade’. Instead of providing a precise definition of agreement, the rationale is to include different forms of coordination in order to distinguish between joint conduct and independent behaviour. Indeed, autonomous parallel behaviour resulting from awareness of oligopolistic interdependence (so-called conscious parallelism) is not prohibited, because antitrust rules do ‘not deprive economic operators of the right to adapt themselves intelligently to the existing and anticipated conduct of their competitors’.20 Therefore, the case law has clarified that, irrespective of the form, the existence of an agreement requires ‘a concurrence of wills’ on the implementation of a 19 However, counter to the conventional wisdom that reduced market concentration lowers the risk of anticompetitive behaviour, see JM Barry, JW Hatfield, SD Kominers and R Lowery, ‘Not From Concentrate: Collusion in Collaborative Industries’ (2021) papers.ssrn.com/sol3/papers.cfm?abstract_id=3787280 (last accessed 12 March 2021), showing that collaborative industries can sustain anticompetitive collusive behaviour no matter how unconcentrated the industry becomes. Hence, according to the authors, in some instances lower market concentration makes collusion easier, since smaller firms may be more dependent on collaboration with rivals and thus may be easier to punish if they undercut collusion. Further, see L Garrod and M Olczak, ‘Explicit vs tacit collusion: The effects of firm numbers and asymmetries’ (2018) 56 International Journal of Industrial Organization 1, finding that cartels are less likely to arise in markets with a few symmetric firms.
20 EU Court of Justice, 16 December 1975, Joined Cases 40–48, 50, 54–56, 111, 113 and 114/73, Suiker Unie v Commission, para 174. See also EU Court of Justice, Joined Cases C-89, 104, 114, 116–117, 125–129/85, Ahlström Osakeyhtiö v Commission (Wood Pulp). On the US side, see Bell Atl. Corp. v Twombly, 550 U.S. 544, 601–602 (2007).

Artificial Intelligence and Anticompetitive Collusion  255 policy, ‘the pursuit of an objective, or the adoption of a given line of conduct on the market’, the form in which it is manifested being unimportant so long as it constitutes the faithful expression of the parties’ intention,21 or a ‘meeting of minds’, ‘a unity of purpose or a common design and understanding’, as well as ‘a conscious commitment to a common scheme’.22 Furthermore, in the EU the concept of concerted practices has been introduced, defined as any direct or indirect contacts intended to influence the conduct of other firms, with the aim of filling potential gaps by precluding coordination between firms which, ‘without having reached the stage where an agreement, properly called, has been concluded, knowingly substitutes practical co-operation between them for the risks of competition’.23 As clarified by courts, the concepts of agreements and concerted practices are intended to catch forms of collusion having the same nature which are distinguishable from each other only by their intensity and the forms in which they manifest themselves.24 Moreover, in order to tackle forms of coordination which are intermediate between agreements and conscious parallelism, courts have intervened in cases of plus factors and facilitating practices (such as price announcements and information exchanges), ie elements which may work as indirect indications of an agreement suggesting that firms have not acted independently because either the parallel conduct seems unnatural or it has been facilitated. Nonetheless, given that under certain conditions oligopolists can coordinate their business behaviours without entering into an arrangement, antitrust authorities have traditionally struggled with tacit collusion. Therefore, in order to address the oligopoly problem, the very notion of agreement has been questioned because it is deemed to be too formalistic, is hard to make operational, and is unconnected with the modem theory of oligopoly. Notably, it has been suggested that the agreement requirement be reformed by interpreting it as applicable to all interdependent behaviour that is successful in producing oligopoly prices.25 After all, as argued by Kaplow, ‘successful interdependent coordination that produces supra-competitive pricing leads to essentially the same economic consequences regardless of the particular manner of interactions that generate this outcome’.26 Against this background, it comes as no surprise that the expanding use of algorithms in business decision-making has reinvigorated the debate about the need to revisit the notion of agreement. Indeed, according to one strand of literature, algorithms are expected not only to increase the likelihood of collusion by affecting some factors and market conditions and making collusive agreements 21 EU General Court, 26 October 2000, Case T-41/96, Bayer AG v Commission, paras 69 and 173. See also EU General Court, 3 December 2003, Case T-208/01, Volkswagen AG v Commission. 22 Interstate Circuit Inc. v U.S., 306 US 208, 810 (1939); American Tobacco Co. v U.S., 328 U.S. 781, 809–10 (1946); Monsanto Co. v Spray- Rite Service Corp., 465 U.S. 752, 768 (1984). 23 EU Court of Justice, 14 July 1972, Cases C-48, 49, 51–57/69, ICI v Commission (Dyestuff). 24 EU Court of Justice, 4 June 2009, Case C-8/08, T-Mobile Netherlands and others v NMa, para 23. 25 L Kaplow, ‘On the Meaning of Horizontal Agreements in Competition Law’ (2011) 99 California Law Review 683. 26 Kaplow (n 25) 686.

more stable and easier to sustain, but also to learn autonomously over time to collude, rather than being mere tools in human hands. If this scenario were realistic, algorithmic collusion would amplify the oligopoly problem, widening the range of tacit collusion or consciously parallel behaviour that falls outside the reach of current antitrust provisions.
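
The reward-punishment scheme described in this section, and the way algorithms may tighten it, can be summarised in the textbook incentive-compatibility condition for an infinitely repeated game (a standard formulation, not one advanced by any specific study cited here):

```latex
% pi^C: per-period collusive profit; pi^D: one-period deviation profit;
% pi^P: per-period profit during punishment; delta: discount factor
% between successive interactions. Collusion is sustainable when:
\[
  \underbrace{\frac{\pi^{C}}{1-\delta}}_{\text{adhere to the common policy}}
  \;\geq\;
  \underbrace{\pi^{D} + \frac{\delta\,\pi^{P}}{1-\delta}}_{\text{undercut once, then be punished}}
\]
```

Since the deviation profit exceeds the collusive profit, which in turn exceeds the punishment profit, the condition holds only where the discount factor is close enough to one. Pricing algorithms that monitor rivals in real time shorten the lag between deviation and punishment, which is equivalent to raising the effective discount factor and thus widens the set of markets in which the collusive equilibrium is stable – the mechanism underlying several of the factors discussed in the next section.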

B.  Algorithms in Action: Scenarios and Challenges for Competition Policy

As previously mentioned, the first and most common hypothesis attributes to algorithms the ability to influence the implementation of collusion, making collusive agreements more stable, regardless of whether they are explicit or tacit. Notably, the use of algorithms may affect those factors and market conditions identified in the economic literature as favourable parameters for supporting the collusive equilibrium reached by rival companies.27 Regarding market structure, algorithms can reduce coordination costs, facilitating collusion even in less concentrated markets and thereby making the number of firms in the market a less relevant factor. Further, by collecting and analysing large amounts of data quickly and in real time, algorithms are going to increase market transparency, thereby facilitating the detection of deviations. Moreover, the speed in processing market data will also increase the frequency of interactions, enabling rapid and effective punishment of deviations through fast price adjustments. The effect on entry barriers is instead considered ambiguous: while the possibility of implementing dynamic and personalised pricing strategies due to algorithms may, on the one hand, decrease entry barriers, the need to feed algorithms with a huge amount of information may, on the other hand, increase these barriers. However, there are also countervailing effects. Indeed, the innovative nature of algorithms may reduce the homogeneity of the products and services offered and allow companies to develop new business models, thereby increasing asymmetries among market players and making the alignment of their incentives harder to achieve and sustain. As a consequence, the collusive equilibrium and its stability would be negatively affected. Against this backdrop, several scenarios may emerge. First, algorithms can be used as tools to implement and facilitate explicit collusion, by either acting as messengers among companies, enabling the alignment of their market behaviours without recourse to conventional means,28 or monitoring their conduct to enforce a collusive agreement, also enabling a rapid reaction in the case of deviations.29 In this scenario, algorithms may further support the stability 27 Autorité de la Concurrence and Bundeskartellamt (n 7) 17–19; OECD (n 4) 20–24. 28 Ezrachi and Stucke (n 10). 29 Autorité de la Concurrence and Bundeskartellamt (n 7) 27–30; OECD (n 4) 26; UK Competition and Markets Authority (n 5) 22–23.

Artificial Intelligence and Anticompetitive Collusion  257 of an explicit collusive agreement by making it easier to detect and respond to deviations, and by reducing the chance of errors and agency slack, which occur when, despite an agreement having been reached among senior managers within a firm, salespeople and employees may have incentives to undermine the cartel.30 This possibility of collusion has already materialised in real-world cases. Indeed, antitrust authorities have detected cartels implemented through the use of dynamic pricing algorithms, ie software designed to monitor market changes and react automatically, adjusting conspirators’ prices in order to avoid being undercut. Notably, the US Department of Justice in Topkins31 and the UK CMA in Trod32 found evidence of anti-competitive arrangements concerning retail sales of posters and frames sold on Amazon marketplace, which were implemented by the use of automated repricing software configured to adapt prices automatically, making sure that each company was not undercutting the other. As noted by the CMA, while repricing software is normally used by online sellers to compete with other online sellers by automatically adjusting the prices of their products in response to the live prices of rivals’ products, in the case in question the repricing software was configured by the conspirators to restrict price competition between them in order to give effect to the collusive agreement.33 More recently, the European Commission fined four consumer electronics manufacturers (Asus, Denon & Marantz, Philips and Pioneer) for imposing fixed or minimum resale prices on their online retailers, taking advantage of the use of pricing algorithms to monitor and automatically adapt retail prices to those of competitors.34 As a result, the pricing restrictions imposed on low pricing online retailers had a broader impact on overall online prices for the respective consumer electronics products, and the use of sophisticated monitoring tools also allowed the manufacturers to effectively track resale price setting in the distribution network and to intervene swiftly in the case of price decreases. Because in this scenario the algorithms play a secondary role, serving as a mere tool to facilitate and enforce an explicit coordination already established between humans, it is not problematic for antitrust authorities to evaluate this conduct within the standard definition of agreement and concerted practice. However, pricing algorithms may also lead to tacit coordination and may extend tacit collusion beyond the boundary of oligopoly. In particular, the collusive outcome may be reached via third party algorithms, companies could unilaterally 30 UK Competition and Markets Authority (n 5) 23. 31 US Department of Justice, 6 April 2015, U.S. v David Topkins www.justice.gov/atr/case/us-v-davidtopkins (last accessed 6 March 2021). 32 UK Competition and Markets Authority, 12 August 2016, Case 50223, Online sales of posters and frames www.gov.uk/cma-cases/online-sales-of-discretionary-consumer-products (last accessed 6 March 2021). 33 UK Competition and Markets Authority (n 32) para 5.47. 34 European Commission, 24 July 2018, Cases AT.40465 (Asus), AT.40469 (Denon & Marantz), AT.40181 (Philips), AT.40182 (Pioneer).

258  Giuseppe Colangelo use algorithms to facilitate conscious parallelism, and finally, self-learning algorithms may even autonomously collude.35 Under the first hypothesis, competitors adopt the same algorithmic pricing model and third-party providers of algorithm services act as a hub in a so-called hub-and-spoke scenario, allowing coordination without the need for direct communication or contact between the companies. Hub-and-spoke arrangements are triangular schemes combining vertical relationships with a hidden horizontal one, namely a cartel in which the facilitating firm (the hub) organises a collusive outcome (the rim) among firms acting at one level of the supply chain (the spokes) through vertical restraints. In the algorithmic landscape, French and German antitrust authorities have distinguished between alignment at code level and alignment at data level.36 The former could arise if companies delegate their strategic decisions to a common third party who acts using an algorithm. The latter could involve rivals using the algorithm as a means for an information exchange or a software supplier causing an alignment of input data by relying on a common data pool between competitors. The UK CMA has considered the hub-and-spoke conspiracy scenario as the most immediate risk.37 Nonetheless, it poses competition issues that could be addressed under existing antitrust rules. Notably, according to the case law, because it is the rim that connects the spokes, proof of a hub-and-spoke cartel requires evidence of a horizontal agreement among the spokes (the so-called rim requirement), although the level of knowledge required of the spokes – awareness or mere foreseeability – remains under discussion.38 Nonetheless, two additional hypotheses appear more troublesome from the perspective of antitrust enforcement. Notably, companies may unilaterally design pricing algorithms to react to rivals’ pricing or may rely on algorithms which, learning by themselves, may arrive at tacit coordination without the need

35 Ezrachi and Stucke (n 10) have labelled these scenarios as ‘Hub and Spoke’, ‘Tacit Collusion on Steroids – The Predictable Agent’, and ‘Artificial Intelligence, God View, and the Digital Eye’, respectively. See JE Harrington, ‘Third Party Pricing Algorithms and the Intensity of Competition’ (2020) ase.uva.nl/ binaries/content/assets/subsites/amsterdam-school-of-economics/2020/harrington-2020-third-partypricing-algorithm-and-the-intensity-of-competition.pdf (last accessed 6 March 2021), showing that third party development of a pricing algorithm has an anticompetitive effect even when only one firm in a market adopts it. See also B Ong, ‘The Applicability of Art. 101 TFEU to Horizontal Algorithmic Pricing Practices: Two Conceptual Frontiers’, (2021) 52 IIC 189, about the limits of the Art 101 TFEU. 36 Autorité de la Concurrence and Bundeskartellamt (n 7) 31–42. 37 UK Competition and Markets Authority (n 5) 31. Further, the Spanish Competition Authority is currently investigating a possible case of algorithmic collusion in the real estate brokerage market (Press Release, 2020) www.cnmc.es/sites/default/files/editor_contenidos/2020219%20NP%20 Intermediation%20Market%20EN_.pdf (last accessed 6 March 2021): in this case, coordination would have been implemented through the use of software and computer platforms and would have been facilitated by companies specialising in computer solutions through the design of the property management software and its algorithms. 38 See, eg, EU Court of Justice, 21 January 2016, Case C-74/14, Eturas UAB and others v Lietuvos Respublikos konkurencijos taryba; United States v Apple, Inc. (The eBook Case), 791 F.3d 290 (2nd Cir. 2015).

Artificial Intelligence and Anticompetitive Collusion  259 for any human intervention and without communicating with one another.39 In the former case, because algorithms have been designed to respond intelligently to the conduct of competitors, the mere interaction of algorithms increases the likelihood of reaching conscious parallelism, without requiring companies to engage in any communication.40 Hence, the question for antitrust enforcers is whether this algorithmic interaction may constitute a form of coordination (algorithmic communication), facilitated for instance by signalling practices (ie announcing the intent to change a relevant parameter of competition, such as the price). In the latter case, because there is no human intervention and no communication between algorithms, the possibility of attributing their conduct to a firm may even be questioned. However, algorithms may also bring pro-competitive benefits.41 Notably, algorithms promote both static and dynamic efficiencies, enabling firms to reduce transaction costs and costs of production, and promoting the improvement of existing products and services and the development of new ones. Further, on the demand side, by providing more information and bringing new tools, algorithms empower consumers, fostering their engagement, raising their awareness and making them conscious decision-makers in the markets. Finally, algorithmic systems can even be used to detect collusion between firms to ensure competitive prices. On efficiency grounds, in the Webtaxi decision the Luxembourg Competition Authority has recently exempted an agreement under which taxi operators jointly determined the fares for their services via an algorithm provided by a booking platform which took into account certain parameters (eg distance, traffic conditions, price per kilometre).42 The antitrust authority stated that efficiencies resulting from the use of the algorithmic pricing model outweighed anti-competitive restrictions. In particular, while the latter were considered limited since the combined market shares of the taxi operators involved were below a 30 per cent threshold, the benefits resulting from a more efficient allocation of resources were

39 See Asker, Fershtman, and Pakes (n 9) illustrating the impact of the design on equilibrium prices. In particular, with regard to reinforcement learning, the authors compare the prices set by artificial intelligence algorithms according to two different learning approaches, ie the asynchronous learning approach, which allows algorithms to learn only from actions that are actually taken, and the synchronous learning approach, which allows them to conduct counterfactuals to assist learning. The results of the paper show that these two approaches lead to markedly different market prices, namely, when future profits are not given positive weight by the artificial intelligence algorithm, synchronous updating leads to competitive pricing, while asynchronous can lead to pricing close to monopoly levels. 40 See Vestager (n 17) considering the challenges that automated systems create ‘very real’. 41 OECD (n 4) 14–15; UK Competition and Markets Authority (n 5) 20–21. UK Competition and Markets Authority, ‘Algorithms: How they can reduce competition and harm consumers’ (2021) www.gov.uk/government/publications/algorithms-how-they-can-reduce-competition-and-harmconsumers (last accessed 5 March 2021). 42 Conseil de la Concurrence Grand-Duché de Luxembourg, 7 June 2018, Decision No. 2018-FO-01, Webtaxi, concurrence.public.lu/fr/decisions/ententes/2018/decision-2018-fo-01.html (last accessed 5 March 2021).

260  Giuseppe Colangelo significant because operators were able to adapt the supply in peak and off-peak hours, thereby reducing the number of empty rides and, consequently, also cutting pollution, and customers enjoyed a uniform and centralised offer of services available on a 24/7 basis. Finally, as highlighted by the European Commission in its recent White Paper on Artificial Intelligence, AI may also represent an additional useful tool for antitrust law enforcement.43 Indeed, the potential of AI in the processing of large amounts of data and pattern recognition offers relevant opportunities for competition law enforcement, hence antitrust authorities may increasingly rely upon the use of algorithms and AI-powered analytical tools.44
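The interaction mechanisms surveyed in this section – from rule-based repricing of the Topkins/Trod kind to the self-learning agents studied in the economic literature cited above – can be made concrete with a small simulation. The following Python sketch is purely illustrative and hypothetical: it implements two simple Q-learning pricing agents in a stylised duopoly, in the spirit of (but not reproducing) the reinforcement-learning experiments referred to in n 39; the price grid, demand model and learning parameters are all invented for the example.

```python
import random

# Stylised duopoly: each firm picks a price from a small grid every period.
PRICES = [1.0, 1.5, 2.0, 2.5]          # 1.0 ~ competitive, 2.5 ~ monopoly (invented)
COST = 0.5
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.05    # learning rate, discount factor, exploration

def profits(p1, p2):
    """Winner-take-all demand: the cheaper firm serves the market; ties split it."""
    demand = 10.0
    if p1 < p2:
        return (p1 - COST) * demand, 0.0
    if p2 < p1:
        return 0.0, (p2 - COST) * demand
    return (p1 - COST) * demand / 2, (p2 - COST) * demand / 2

# Each firm learns independently; its state is the rival's last observed price.
Q = [{s: {a: 0.0 for a in PRICES} for s in PRICES} for _ in range(2)]

def choose(firm, state):
    if random.random() < EPS:                  # occasional experimentation
        return random.choice(PRICES)
    values = Q[firm][state]
    return max(values, key=values.get)         # otherwise the best-known response

last = [random.choice(PRICES), random.choice(PRICES)]
for _ in range(100_000):
    prices = [choose(0, last[1]), choose(1, last[0])]
    rewards = profits(prices[0], prices[1])
    for i in (0, 1):
        state, action, next_state = last[1 - i], prices[i], prices[1 - i]
        best_next = max(Q[i][next_state].values())
        Q[i][state][action] += ALPHA * (rewards[i] + GAMMA * best_next
                                        - Q[i][state][action])
    last = prices

# Under some parameterisations the agents settle above the competitive price
# without ever 'communicating' - the pattern discussed in the text.
print("long-run prices:", last)
```

Depending on the parameterisation, such agents may settle at prices above the competitive level without any communication or any human instruction to collude – which is precisely the scenario that troubles enforcers, and equally the scenario whose real-world likelihood the literature discussed below disputes.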

C.  Is it Time for Algorithmic Antitrust? The analysis undertaken in the previous section on the different scenarios in which algorithmic-facilitated coordination may take place provides some insights. Antitrust authorities have been able to tackle the algorithmic challenge, uncovering anti-competitive coordination in scenarios where algorithms were used as tools to assist companies in implementing and facilitating explicit collusion and where third-party providers of algorithm services acted as a hub in a hub-and-spoke conspiracy. However, antitrust enforcement would struggle to deal with automated systems programmed to react to each other or self-learning algorithms autonomously able to reach tacit coordination. Despite the fact that no real cases about the last two scenarios have so far been observed, several scholars are calling for the current antitrust rules to be modified to meet the increasing use of algorithms. After all, according to Ezrachi and Stucke, we are actually witnessing the end of competition as we know it.45 Notably, Heinemann and Gebicka invite antitrust enforcers to revisit the concepts of agreement and concerted practice,46 and the OECD endorses a clearer 43 European Commission, ‘White Paper on Artificial Intelligence: A European approach to excellence and trust’, (2020) COM(2020) 65 final ec.europa.eu/info/publications/white-paper-artificialintelligence-european-approach-excellence-and-trust_en (last accessed 6 March 2021). 44 See Hellenic Competition Commission, ‘Computational Competition Law and Economics: Issues, Prospects’ (2020) Inception Report; T Schrepel, ‘Computational Antitrust: An Introduction and Research Agenda’ (2021) 1 Stanford Computational Antitrust 1, illustrating the project launched by the Stanford University Codex Center, which seeks to develop computational methods for the automation of antitrust procedures and the improvement of antitrust analysis; M Huber and D Imhof, ‘Machine learning with screens for detecting bid-rigging cartels’ (2019) 65 International Journal of Industrial Organization 277, suggesting that the combination of machine learning and screening is a powerful tool for detecting bid rigging. See also A von Bonin and S Malhi, ‘The Use of Artificial Intelligence in the Future of Competition Law Enforcement’ (2020) 11 Journal of European Competition Law & Practice 468, analysing opportunities and risks of the use of AI, and referring to A Sanchez-Graells, ‘Screening for Cartels in Public Procurement: Cheating at Solitaire to Sell Fool’s Gold?’ (2019) 10 Journal of European Competition Law & Practice 199, in order to describe the criticisms of the UK CMA’s ‘Screening for Cartels’ algorithmic tool aimed at helping procurers screen their tender data for signs of illegal bid rigging activity. 45 Ezrachi and Stucke (n 10). 46 Heinemann and Gebicka (n 10).

Artificial Intelligence and Anticompetitive Collusion  261 definition of agreement, although admitting that at this point it is hard to draw conclusions on whether algorithmic interactions (‘meeting of algorithms’) should be treated in a way similar to a ‘meeting of minds’.47 Gal too finds treating the adoption of algorithms as facilitating practices or plus factors troublesome.48 Because algorithms perform many functions and may bring benefits, it is necessary to verify whether they are justified by pro-competitive considerations, balancing anticompetitive risks against efficiencies. Moreover, the very content of the prohibition would be difficult to define in a clear way so as to ensure legal certainty to market players. Sharing concerns about the effectiveness of a form-based approach, other scholars have proposed that the focus be shifted to the market effects of algorithms. Thomas suggests integrating the notion of concerted practices with an economic effects analysis;49 and, with regard to price-matching algorithms, Siciliani argues that antitrust enforcers should rely on established price-cost tests in order to determine when the use of such algorithms is illegitimate.50 On a different note, other scholars support an ex-ante intervention instead of pursuing the standard antitrust analysis, which may consist in market investigations or sector inquiries, a review of merger control, or regulation. In particular, Ezrachi and Stucke propose to test algorithmic collusion in controlled or artificial environments such as a regulatory sandbox and to launch market investigations in order to better inform authorities of algorithms’ dynamics.51 Harrington suggests reading the mind (ie the code) of artificial agents to make collusion illegal, rather than the communication that facilitates it.52 Therefore, he proposes to define a blacklist of algorithms per se unlawful according to their specific design, resolving the problem of liability by restricting the class of allowable algorithms or by prohibiting algorithms with specific features that support prices above the competitive level. Moreover, Zheng and Wu propose a market-based regulatory approach, namely to introduce a Pigouvian tax on the negative externality of competition caused by algorithmic pricing, by charging a certain amount of fees according to technical and economic criteria on all the firms that use algorithms in a non-oligopolistic market where sustainable supra-competitive prices are apparent.53 Lamontanaro suggests, instead, implementing a whistleblower bounty program that would improve the detection of algorithmic cartels and allow authorities to gain the expertise necessary to enforce antitrust laws without impeding innovation.54 47 OECD (n 4) 36–39. 48 Gal (n 10) 111–112. 49 S Thomas, ‘Harmful Signals: Cartel Prohibition and Oligopoly Theory in the Age of Machine Learning’ (2019) 15 Journal of Competition Law & Economics 159. 50 P Siciliani, ‘Tackling Algorithmic-Facilitated Tacit Collusion in a Proportionate Way’ (2019) 10 Journal of European Competition Law & Practice 31. 51 Ezrachi and Stucke (n 10). 52 Harrington (n 10). 53 Zheng and Wu (n 10). 54 A Lamontanaro, ‘Bounty Hunters for Algorithmic Cartels: An Old Solution for a New Problem’ (2020) 30 Fordham Intellectual Property, Media and Entertainment Law Journal 1259.

262  Giuseppe Colangelo Finally, some scholars point to algorithmic compliance, requiring companies to ensure antitrust compliance by design and by default rather than building pricing algorithms in a way that allows them to collude.55 In short, the challenges brought by AI and, more generally, by digital markets are casting doubt on the ability of conventional remedies to ensure competition. Exploring alternative radical remedies, Gal and Petit, for instance, assess the mandatory sharing of algorithmic learning, ie a duty to share the knowledge produced by learning algorithms trained on data unlawfully collected or exploited.56 The European Commission has explored a (quasi) regulatory approach. Opening a public consultation on the need for a possible new competition tool, the Commission envisaged the possibility of intervention when a structural lack of competition prevents the market from functioning properly, such as oligopolistic market structures with an increased risk of tacit collusion, including markets featuring increased transparency due to algorithm-based technological solutions.57 Notably, in its inception impact assessment, the Commission referred explicitly to the fact that ‘the growing availability of algorithm-based technological solutions, which facilitate the monitoring of competitors’ conduct and create increased market transparency, may result in the same risk even in less concentrated markets’.58 However, in the proposal presented by the European Commission in December 2020, the planned new competition tool has been folded into the Digital Markets Act and apparently watered down into market investigations that will allow the Commission to qualify companies as gatekeepers, dynamically update the obligations on gatekeepers when necessary, and design remedies to tackle systematic infringements of the Digital Markets Act rules.59

55 Bernhardt and Dewenter (n 14); A Deng, ‘From the Dark Side to the Bright Side: Exploring Algorithmic Antitrust Compliance’ (2020) papers.ssrn.com/sol3/papers.cfm?abstract_id=3334164 (last accessed 5 March 2021); PG Picht and B Freund, ‘Competition (Law) in the Era of Algorithms’ (2018) 39 ECLR 403; PG Picht and GT Loderer, ‘Framing Algorithms: Competition Law and (Other) Regulatory Tools’ (2019) 42 World Competition 391; S Vezzoso, ‘Competition by design’, in B Lundqvist and M Gal (eds), Competition Law for the Digital Age (E Elgar, 2019) 93. See also Vestager (n 17). Moreover, see J Moore, E Pfister and H Piffaut, ‘Some Reflections on Algorithms, Tacit Collusion, and the Regulatory Framework’ (2020) 1 Antitrust Chronicle 14, 19, considering antitrust enforcement and dedicated regulation as complements in ensuring the effectiveness of competition policy vis-à-vis algorithmic pricing, notably envisioning a framework where firms could be required (or incentivised) first to test their algorithms prior to deployment in real market conditions (risk assessment), then to monitor the consequences of deployment (harm identification). 56 MS Gal and N Petit, ‘Radical Restorative Remedies for Digital Markets’ (2021) 37 Berkeley Technology Law Journal, arguing that such a remedy has four main virtues, that is, it creates better conditions for restoring competition for the market, it can be applied almost immediately, it can limit the enjoyment of illegally obtained comparative advantages without harming consumers by prohibiting the use of algorithms which increase consumer welfare, and it does not require continual supervision. 57 European Commission (n 18). See also F Beneke and MO Mackenrodt, ‘Remedies for algorithmic tacit collusion’ (2021) 9 Journal of Antitrust Enforcement 152, exploring how fines and structural and behavioural remedies can serve to discourage collusive results achieved by autonomous algorithmic agents. 58 European Commission (n 18) 1. 59 European Commission, ‘Proposal for a Regulation on contestable and fair markets in the digital sector (Digital Markets Act)’, COM(2020) 842 final.

Artificial Intelligence and Anticompetitive Collusion  263 Nonetheless, the Commission will enjoy investigation powers, such as requests for information, interviews and on-site inspections, which explicitly include access to databases and algorithms.60 The initiative undertaken by the European Commission would represent a change of paradigm deemed to significantly alter the antitrust toolkit. And this would happen despite no evidence of algorithmic collusion having yet emerged. Conversely, it is worth noting that competition authorities have endorsed a wait-and-see approach so far. Notably, the UK CMA considers speculative the mechanisms by which algorithms could have an additional impact beyond traditional risk factors, noting that it is unclear how likely the predictable agent and autonomous machine models of coordination are to materialise at this point.61 In a similar vein, the UK Digital Competition Expert Panel finds it hard to predict whether greater use of algorithms will lead to algorithmic collusion.62 Further, according to the French and German antitrust authorities, it is as yet unknown whether algorithmic communication is a realistic scenario, hence, in the situations considered so far, the current legal framework is sufficient to tackle possible competitive concerns.63 In the meantime, several competition authorities have proceeded to the appointment of a chief technology officer and have started relying on data science experts by putting in place special information technology units in order to assist them in developing advanced forensic techniques and data analytics.64 For instance, the French Competition Authority has announced the creation of a new specialised department (the Digital Economy Unit) aimed at strengthening the human resources dedicated to detecting and analysing the behaviour adopted by players in the digital sector.65 By the same token, the Netherlands Authority for Consumers and Markets has launched a trial in which it seeks to find out how it can monitor in practice the functioning of algorithms that businesses use.66 And the UK CMA has launched the ‘Analysing Algorithms Programme’ seeking views and evidence from academics and industry experts on the potential harms to competition and consumers caused by the deliberate or unintended misuse of algorithms.67 60 European Commission (n 59) Arts 19–21. 61 UK Competition and Markets Authority (n 5) 31 and 48. 62 UK Digital Competition Expert Panel (n 16) 15 and 110. 63 Autorité de la Concurrence and Bundeskartellamt (n 7). 64 A useful overview is provided by the Hellenic Competition Commission (n 44) 16–55. 65 Autorité de la Concurrence, ‘The Autorité de la concurrence announces its priorities for 2020’ (2020) www.autoritedelaconcurrence.fr/en/press-release/autorite-de-la-concurrence-announces-itspriorities-2020 (last accessed 5 March 2021). See also UK Competition and Markets Authority, ‘A new pro-competition regime for digital markets’ (2020) www.gov.uk/government/news/cma-advisesgovernment-on-new-regulatory-regime-for-tech-giants (last accessed 5 March 2021), considering essential that the new Digital Markets Unit builds up a great deal of expertise and knowledge, including the capability to understand the role of algorithms and artificial intelligence. 66 Authority for Consumers and Markets, ‘ACM launches a study into the functioning of algorithms in practice’ (2020) www.acm.nl/en/publications/acm-launches-study-functioning-algorithms-practice (last accessed 6 March 2021). 
See also Authority for Consumers and Markets, ‘Oversight of algorithms’ (2020) www.acm.nl/en/publications/study-oversight-algorithmic-applications (last accessed 6 March 2021). 67 See also UK Competition and Markets Authority (n 41), accompanying the launch of the CMA’s Analysing Algorithms Programme.

264  Giuseppe Colangelo The lack of empirical evidence leads some scholars to downplay algorithmic collusion as merely speculative and unlikely. Notably, Petit argues that the AI literature generates predictions on the basis of strict assumptions and the findings are not complemented by a symmetrical investigation of the destabilising effect of algorithms on harm to competition.68 For instance, Miklós-Thal and Tucker suggest that better demand forecasting resulting from algorithms, machine learning, and artificial intelligence can lead to lower prices and higher consumer surplus, thereby negatively affecting the sustainability of collusion in an industry.69 By the same token, Gautier et al. note that the collusive behaviour of most algorithms discussed in the literature was assessed in strict, laboratory and experimental settings and that the majority of models proposed tend to overlook some important characteristics and uncertainties of real businesses.70 In a similar vein, Schwalbe identifies challenges to learning collusion, which make it less easy and less inevitable than is often suggested.71 Since algorithmic collusion is virtually non-existent, Schrepel considers the focus of much academic research on this scenario as the result of a publication bias.72 And, in any case, even if algorithmic collusion were to become a frequent practice, it would remain old wine in new bottles. Indeed, as also maintained by Ohlhausen, the expanding use of algorithms is not prone to creating enforcement blind spots, but rather raises familiar issues that are well within the existing canon.73 Following this line of reasoning, Van Cleynenbreugel advances the claim that platforms relying on algorithms may be considered as ‘associations of undertakings’ within the meaning of Article 101 TFEU.74 In this way, decisions to rely on self-learning algorithms would fall within the scope of antitrust provisions, even in the absence of a concurrence of wills or proof of contact between two or more undertakings, hence without indulging in a discussion on the existence of an agreement or concerted practice. In short, according to this strand of literature, concerns about algorithmic collusion are not justified. However, it is worth mentioning a recent study by Assad et al. analysing the German retail gasoline market, which is an early adopter of algorithmic-pricing software, and suggesting that AI adoption has a significant effect on competition by increasing prices and margins.75 Further, because margins start increasing 68 Petit (n 13). See also Ittoo and Petit (n 13) considering algorithms not determinatively, and perhaps not even significantly, causal of tacit collusion. 69 J Miklós-Thal and C Tucker, ‘Collusion by Algorithm: Does Better Demand Prediction Facilitate Coordination Between Sellers?’ (2019) 65 Management Science 1455. 70 Gautier, Ittoo and Van Cleynenbreugel (n 13). Notably, the authors point to limitations related to the assumption that the products of the competing firms are homogenous, that the demand is linear or deterministic, that firms can only set a restricted range of prices and that they compete solely on the basis of prices. 71 Schwalbe (n 13). 72 Schrepel (n 13). 73 Ohlhausen (n 14). See also Bernhardt and Dewenter (n 14) considering collusion by code covered mainly by existing provisions. 74 Van Cleynenbreugel (n 14). 75 Assad, Clark, Ershov and Xu (n 10). 
See also Spanish National Commission on Markets and Competition and Catalan Competition Authority (n 7) 6 arguing that the possibility of algorithmic collusion is ‘a reality, theoretically and empirically demonstrated’.

Artificial Intelligence and Anticompetitive Collusion  265 about a year after market-wide adoption of algorithmic pricing software, according to the authors this also suggests that algorithms in that market have learned tacitly-collusive strategies. Against this backdrop, it seems appropriate to call for a cautious approach. It is undisputed that existing antitrust rules are fully suited to addressing algorithmic-facilitated coordination in scenarios of explicit collusion and hub-and-spoke conspiracy. Further, the impact of algorithms on those factors and market conditions traditionally identified as favourable parameters for supporting the collusive outcome is ambiguous.76 While algorithms can reduce coordination costs and facilitate both the detection of deviations and the rapid implementation of effective punishments, there are also countervailing effects which may destabilise the collusive equilibrium. The ongoing debate revolves around the hypotheses of algorithmic collusion and essentially around the possibility of considering them realistic scenarios. Lacking empirical evidence and real-world cases of algorithmic collusion, the wait-and-see approach embraced by some competition authorities is wise and reasonable. Although the prospect of algorithmic antitrust may be fascinating, it is indeed premature to advocate for a reform of antitrust rules and concepts or for legislative actions aimed at introducing regulatory measures while it remains unclear whether the predictable agent and autonomous machine models of coordination will materialise.

D.  Concluding Remarks Five years ago, Margrethe Vestager concluded a speech arguing that antitrust authorities should not panic about the way algorithms are affecting markets, nonetheless recognising the need to keep a close eye on how algorithms are developing in order to be ready when science fiction becomes reality.77 Though this scenario still seems a long way off, the European Commission is apparently convinced that the time has come to intervene. Indeed, the public consultation on a new competition tool that it launched aimed, among several policy options, at allowing the Commission to intervene in oligopolistic market structures with an increased risk of tacit collusion, including markets featuring increased transparency due to algorithm-based technological solutions.78 Rather than following a wait-and-see approach, the Commission seemed ready to take a stand in support of the narrative according to which existing antitrust law is unfit to handle algorithmic-facilitated coordination. 76 See also A Ezrachi and M Stucke, ‘Sustainable and Unchallenged Algorithmic Tacit Collusion’ (2020) 17 Northwestern Journal of Technology and Intellectual Property 217, 226–228, acknowledging that algorithmic tacit collusion ‘will not affect every (or even most) markets’ and that it is most likely in markets already susceptible to collusion. 77 Vestager (n 17). 78 European Commission (n 18).

266  Giuseppe Colangelo However, in the proposal for a Regulation on contestable and fair markets in the digital sector (Digital Markets Act) presented in December 2020, the new competition tool has been watered down into market investigations that will allow the Commission to qualify companies as gatekeepers, dynamically update the obligations on gatekeepers when necessary, and design remedies to tackle systematic infringements of the Digital Markets Act rules. Further, the proposal contains no mention of algorithmic-facilitated coordination. After all, there is no evidence of algorithmic collusion so far and, as acknowledged by British, French and German competition authorities, the mechanisms by which algorithms could have an additional impact beyond traditional risk factors are quite speculative.79 Hence, algorithmic pricing is more likely to exacerbate traditional risk factors, facilitating collusion in markets which are already susceptible to human coordination. In such a scenario, existing antitrust rules are fit for the task. Finally, it is worth noting that some scholars urge focus on the potential anticompetitive use of blockchain technology. Indeed, as noted by Catalini and Tucker, in the same way that the decentralised nature of blockchain technology allows for network effects to emerge without assigning market power to a platform operator, the absence of a central entity could facilitate collusion and make antitrust enforcement more difficult.80 Blockchain solutions might facilitate both the sharing of competitively sensitive information and the implementation of anticompetitive agreements, especially when they involve the use of smart contracts. Notably, as argued by Schrepel, by allowing the implementation of agreements whose constraint stems from cryptographic rules, blockchain and smart contracts transform collusion into a cooperative game.81 These perspectives have caught the attention of the US antitrust enforcers. In a recent speech Makan Delrahim, former head of the Antitrust Division at the US Department of Justice, acknowledged both the opportunities and challenges that blockchain involves from an antitrust perspective, stating that, even though this technology offers tremendous potential value, there is also potential for misuse of well-crafted blockchain solutions.82 Therefore, blockchain represents a natural follow-up to this work and is left for further research.

79 Autorité de la Concurrence and Bundeskartellamt (n 6); UK Competition and Markets Authority (n 5). 80 C Catalini and C Tucker, ‘Antitrust and Costless Verification: An Optimistic and a Pessimistic View of Blockchain Technology’ (2019) 82 Antitrust Law Journal 861. See also LW Cong and Z He, ‘Blockchain Disruption and Smart Contracts’ (2019) 32 The Review of Financial Studies 1754. 81 Schrepel (n 13). 82 M Delrahim, ‘Never Break the Chain: Pursuing Antifragility in Antitrust Enforcement’ (2020) Remarks at the Thirteenth Annual Conference on Innovation Economics www.justice.gov/opa/speech/assistant-attorney-general-makan-delrahim-delivers-remarks-thirteenth-annual-conference (last accessed 5 March 2021).

14 Artificial Intelligence and Contracts: Reflection about Dispute Resolution PAOLA AURUCCI AND PIERCARLO ROSSI

I. Introduction Artificial intelligence may ensure more rational, consistent and, above all, impartial decisions than human decision makers. This is true also for specialised decision makers, such as judges, mediators and arbitrators. When we think of artificial intelligence as a technological system to solve problems or perform tasks more efficiently, the most courageous and visionary field of study is the field of justice. We are not (yet) talking about robot judges, but about tools that could support the judge in the decision-making phase or lawyers in the investigation of a case. A field in which AI is assuming a prominent role is ADR (Alternative Dispute Resolution),1 especially arbitration. Traditionally, an ADR setting is composed of people who conduct hearings ‘in person’; however, technological transformation and digitalisation have allowed the development of artificial intelligence (AI) and blockchain technology, which are currently disrupting the traditional format and conduct of ADR, as they promise to increase the efficiency and quality of the decision. The COVID-19 pandemic has accelerated this trend of using AI to increase the efficiency and quality of ADR settings; for instance, if physical hearings are not feasible, parties and courts require online meetings, desktop sharing and video conferencing software that allow them to meet via the Internet in real time. This chapter is about the AI systems that already exist and that could be used in contractual dispute resolution. It starts with a short overview of the notion of smart contracts and how they achieved unexpected success thanks to the development of blockchain technology. Despite their self-executing feature, which may apparently suggest that any need for third party contract enforcement or dispute resolution

1 The common forms of ADR are: negotiation, mediation, arbitration, early neutral evaluation and adjudication. See further J Barnett and P Treleaven ‘Algorithmic Dispute Resolution – The Automation of Professional Dispute Resolution Using AI and Blockchain Technologies’ (2018) 61 The Computer Journal 400–401.

268  Paola Aurucci and Piercarlo Rossi through arbitration or court litigation is rendered unnecessary,2 this is not the case: only agreements that are deterministic in nature can be automated; there could be bugs in the code; contracts can be incomplete;3 and the parties may engage in unpredicted, opportunistic behaviour contrary to expectations. Certain inherent characteristics of smart contracts, such as decentralisation, immutability and irreversibility, make ex-post legal enforceability difficult through current human court or arbitration systems. While on-chain arbitration may promise decentralised, quicker and less-costly resolutions than a traditional process, it fails to provide an equivalent service. Therefore, it appears that, as stressed by Howell and Potgieter,4 the ‘entirely new paradigm of management potentially capable of disrupting traditional forms of governance and management’ – anticipated by the use of blockchain as a technology that facilitates new methods of dispute resolution – is far from reality, and contractual disputes will still ‘rely to a very great extent on the institutions that have been developed in the years to support human interactions’. The chapter then shows how predictive capabilities allowed by Big Data Analytics and AI can be exploited by parties to draft contracts that ‘fill their own gaps’ and interpret their own standards – without adjudication by a third party – learning from real-world judicial processes and contingencies.5 It then analyses in depth how AI can currently be used to reduce costs and to provide speedy, efficient and cheap dispute resolution, together with the technical, ethical and legal pitfalls of such uses.

II.  Smart Contract and Self-Driving Contract A.  Blockchain-Based Dispute Resolution Mechanisms Smart contracts are not a new concept.6 Starting from the nineties, Nick Szabo referred to them as a ‘computerized transaction protocol that executes the terms of a contract’.7 But it is in recent years that the development of blockchain technologies, started a decade ago,8 has prompted renewed interest in smart contracts and – at the same time – unlocked their full potential. A blockchain is a form of distributed 2 P Ortolani ‘The impact of blockchain technologies and smart contracts on dispute resolution: arbitration and court litigation at the crossroads’ (2019) 24 Uniform Law Review 438. 3 Different factors may cause incompleteness of the contract. The most important are: ‘limited knowledge over future states of the world (ie fundamental uncertainty) together with limited human cognitive power (ie bounded rationality)’. See further D Allen, A Lane and M Poblet ‘The governance of blockchain dispute resolution’ (2019) 25 Harvard Negotiation Law Review 78. 4 BE Howell and PH Potgieter, ‘Uncertainty and dispute resolution for blockchain and smart contract institutions’ [2021] Journal of Institutional Economics 14. 5 A Casey and A Niblett, ‘Self-Driving Contracts’ (2017) 43 Journal of Corporation Law 1. 6 Allen, Lane and Poblet (n 3) 78. 7 N Szabo, ‘Smart Contracts’ (1994) at www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/smart.contracts.html. 8 S Nakamoto ‘Bitcoin: A Peer-to-Peer Electronic Cash System’ (bitcoin.org 2008) at bitcoin.org/bitcoin.pdf.

Artificial Intelligence and Contracts: Reflection about Dispute Resolution  269 ledger which uses consensus algorithms to ensure that data stored on it are resilient and resistant to alteration. Blockchains were initially designed to settle the transfer of property rights in an underlying digital token (ie bitcoin or other cryptocurrencies) by combining a shared ledger with a cryptographically-based incentive system designed to securely maintain it, without involving a centralised exchange platform. However, blockchain technology, with its decentralised distributed ledgers, can also be used to record the fact of an agreement between the parties and the contractual terms agreed.9 Essentially, it provides an immutable record which can be used in case of dispute between the parties. The blockchain record is to be considered an improvement over paper records or simple digital renderings of them, which are easy to alter. Next to immutability and tamper-proofing, blockchain security performance includes transparency, traceability and auditability. Furthermore, thanks to its cryptographic features, blockchain allows for pseudonymity whereby participants can engage in transactions without revealing their identity. This said, the pioneering open blockchain platform Ethereum10 allows the whole or part of an agreement between users to be memorialised in code and executed automatically via blockchain. Thanks to this platform smart contracts can be programmed and deployed as an ‘automation program’ in a blockchain. The parties reach an agreement on the contractual clauses and on the timing, relying on ‘if-this-then-that’ logic statements which cannot be halted once validated by the users.11 In particular – as Santosuosso points out – ‘the parties enter, through their cryptographic keys, both the clauses they intend to enter into the contract and the operations that the system will carry out automatically. Thanks to the if/then sequence, if the system records the occurrence of the fact referred to in a certain clause (if), the contract will progress, but on the contrary if the clause is violated, the contract will automatically carry out the remedies agreed between parties themselves or by law’.12 The backup system guarantees that there will not be any misalignments where one party boasts the existence of some clauses and the other party boasts different clauses of the same contract. The blockchain is also equipped with a data saving system.13 In a few words, smart contracts could be defined as agreements – or parts of agreements – that are written in code to operate within a decentralised or distributed blockchain network, and that can be automatically executed by that network when specific conditions are met without any support of an external party.14 9 Howell and Potgieter (n 4) 3. 10 Ethereum is one of the most prominent ventures based on blockchain technologies. Launched in 2015 to overcome some of the limitations of bitcoin and to allow users to enter into transactions more complex than a simple transfer of funds, the Ethereum platform allows users who enter into an agreement to translate (at least significant parts of) that agreement into software script, thus relying on the technology (rather than on each other’s good faith and individual initiative) to ensure the performance of the obligations arising out of it. See, eg, Ortolani (n 2) 438. 
11 A Santosuosso ‘About Smart Contract Dispute Resolution’, in B Cappiello and G Carullo (eds), Blockchain, Law and Governance (Springer Nature, 2021). 12 ibid 211. 13 ibid 211. 14 Allen, Lane and Poblet (n 3) 78.
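The if/then mechanics that Santosuosso describes can be illustrated schematically. Real smart contracts are written in a blockchain language such as Ethereum’s Solidity and executed by the network itself; the Python sketch below is only a stand-in, with hypothetical names, showing how a simple escrow clause encodes the agreed condition (if), the performance (then) and the pre-agreed remedy, and why execution, once triggered, cannot be renegotiated.

```python
import time
from dataclasses import dataclass

@dataclass
class EscrowClause:
    """Stylised 'if-this-then-that' clause: if delivery is confirmed by the
    agreed deadline, funds are released to the seller; otherwise the buyer
    is refunded automatically (the pre-agreed remedy)."""
    amount: float
    deadline: float          # Unix timestamp fixed by the parties ex ante
    delivered: bool = False
    settled: bool = False

    def confirm_delivery(self) -> None:
        # On a real blockchain this fact would be supplied by an oracle feed,
        # not simply asserted by a party.
        self.delivered = True

    def execute(self) -> str:
        if self.settled:
            # Mimics immutability: the outcome cannot be re-run or renegotiated.
            raise RuntimeError("clause already executed")
        if self.delivered and time.time() <= self.deadline:
            outcome = f"release {self.amount} to seller"   # the 'then' branch
        else:
            outcome = f"refund {self.amount} to buyer"     # the agreed remedy
        self.settled = True
        return outcome

clause = EscrowClause(amount=100.0, deadline=time.time() + 86_400)
clause.confirm_delivery()
print(clause.execute())      # -> release 100.0 to seller
```

The sketch also makes visible the limitation discussed in the text: the clause performs flawlessly only for states of the world the parties anticipated; anything else (a buggy oracle, an ambiguous delivery) still calls for human dispute resolution.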

270  Paola Aurucci and Piercarlo Rossi As De Filippi and Wright15 pointed out, this ‘self-executing’ feature of smart contracts, which enables the automation of performance and enforcement, can increase contractual efficiency and mitigate the risks of opportunistic behaviour. Furthermore, O’Shields16 underlines that smart contract benefits may include increased speed and accuracy of business transactions, and better, quicker and cheaper enforcement of the contract. On the other hand, the commercial success of smart contracts depends on the ability to translate contractual obligations into algorithms, as well as on the ability to produce error-free, reliable computer code and a reliable system of dispute resolution. Realistically, it is important to accept that even when a smart contract is carefully parametrised in its clauses and self-executes with a high degree of success, some sort of dispute may arise, for several reasons.17 These range from bugs in the code and phantom transactions – which can be classified as on-chain disputes since they derive entirely from smart contract malfunction – to human error in attempting to carry out the contract, or other unanticipated events or changes that make non-compliance more attractive than the agreed terms. The immutable performance of pre-agreed terms in smart contracts makes it difficult to manage problems and scenarios not anticipated ex ante, since it precludes renegotiation or flexibility in the performance of obligations. For this reason, smart contracts seem not well-suited to an environment where unexpected outcomes can arise, unless an efficient and reliable system of dispute resolution is provided. This is crucial in order to exploit the full potential of these contracts. As stressed by Allen et al,18 at present it is possible to choose between two approaches to dispute resolution for smart contracts. The first recognises that smart contracts can operate within the existing contract law framework, and can be adjudicated by the courts or existing ADR procedures. However, this ‘off-chain’ dispute resolution via negotiation or intervention of an authoritative third party (mediation, and adjudication by an arbiter or judge) seems to nullify the innovative potential contained within smart contracts: decentralisation, consumer autonomy and complete transactional privacy.19 For this reason, blockchain technology has been proposed as a means of creating new, private decentralised dispute resolution mechanisms for Online Dispute Resolution (ODR).20 Arbitration that incorporates a blockchain (blockchain-based arbitration) seems the most effective method for resolving disputes that arise on a blockchain.21 15 P De Filippi and A Wright, Blockchain and the Law: The Rule of Code (Harvard University Press, 2018) 80. 16 R O’Shields ‘Smart Contracts: Legal Agreements for the Blockchain’ (2017) 21 North Carolina Banking Institute 177. 17 Santosuosso (n 11) 209. 18 Allen, Lane and Poblet (n 3) 93. 19 M Buchwald, ‘Smart Contract Dispute Resolution: The Inescapable Flaws of Blockchain-Based Arbitration’ (2020) 168 University of Pennsylvania Law Review 1373. 20 Allen, Lane and Poblet (n 3) 89. 21 Buchwald (n 19) 1373.

Artificial Intelligence and Contracts: Reflection about Dispute Resolution  271 In practice, this means that emerging Ethereum platforms like Kleros22 and dispute resolution processes such as JUR23 enable contracting parties to pre-code an option for ex-post, fully decentralised arbitration.24 On paper, blockchain-based arbitration offers the best of both worlds, allowing disputants and jurors to remain pseudonymous without sacrificing the right to expedient, cheap, and satisfactory adjudication. However, in reality, it is a little more complicated, since blind deference to and acceptance of such tools, based on oracle computer programs, is the equivalent of ignoring their pitfalls. For example, as stressed by Howell and Potgieter, applications such as Kleros are constrained by the same limitations as the contracts they oversee, such as limits to information availability, human cognition and communication. As pointed out by Buchwald, ‘inability to compel discovery, skewed juror incentives, and limits to dispute complexity all render decentralized, pseudonymous on-chain applications perpetually inferior to off-chain traditional alternatives’.25 This suggests that, despite hopes held for blockchains to become complete, stand-alone institutional environments where all commercial interactions can take place, traditional institutions such as contract law, the courts, and arbitration processes are better suited to addressing the consequences of unexpected and unpredicted outcomes than blockchain institutions, simply because they take account of the limits of human actors. This said, the new predictive capabilities allowed by Big Data and AI can, on the one hand, be used to improve the quality of computer-mediated contracts, which become self-driving; on the other, they could improve the speed of proceedings, the cost savings and the efficiency of traditional institutional mechanisms of contractual dispute resolution.

B.  Self-Driving Contracts The analysis of big data through artificial intelligence systems has enabled the deployment of predictive models – which allow for the identification of future behaviour from data – that could represent the core element in overcoming the trade-off between the immutability of the ex-ante negotiated agreement and ex-post dispute resolution that characterises blockchain-based smart contracts. How? Simply by replacing them with ‘self-driving contracts’. As pointed out by Casey and Niblett,26 predictive capabilities allow parties to draft contracts that ‘fill their own gaps’ and interpret their own standards in the way a mediator or court might do today. 22 Kleros at www.kleros.io. Kleros could allow Ethereum to become a completely stand-alone alternative contracting environment. See further A Berg, C Berg and M Novak ‘Blockchains and Constitutional Catallaxy’ (2020) 31 Constitutional Political Economy 189. 23 Jur at www.jur.io/. 24 For a complete overview of private decentralised dispute resolution mechanisms see Allen, Lane and Poblet (n 3) 89–96. 25 Buchwald (n 19) 1423. 26 Casey and Niblett (n 5) 2.

272  Paola Aurucci and Piercarlo Rossi Starting from the recognised incompleteness of a contract, the authors imagine a situation where parties can agree on broad ex-ante objectives and let the AI translate these broad intentions and goals into a specific term or directive based on information about the environment in which the parties are operating, gathered after they have executed the initial agreement. In their conception, artificial agents will allow parties to achieve their goal in light of real-time contingencies and will fill in contractual gaps without adjudication. To summarise, on the one hand the self-driving contractual approach, as opposed to blockchain-based smart contracts, builds upon principles-based contract law and appears more consistent with long-term and relational contracts. On the other hand – and on this point the authors may go a bit too far – it allows the blurring of the distinctions between rules and standards, between ex-ante agreements and ex-post dispute resolution, and between complete and incomplete contracts. However, as stressed by Howell and Potgieter,27 it is questionable whether predictive models relying upon historic data can ‘adequately prepare contracts that can cope with totally unexpected situations’, and the approach stops short of providing an automated dispute resolution process, at least in the short term. At the moment, ‘it is difficult to imagine any better instrument than the current human court or arbitration systems’28 which, at the same time, thanks to AI, could rely on a wide range of new information, such as previous case law and arbitral awards, to produce better decisions.
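Casey and Niblett’s idea can be sketched in a few lines of code. Suppose the parties agree ex ante only on a broad objective – a supply price that tracks market conditions ‘fairly’ – and delegate the concrete term to a predictive model consulted at each invoice date. Everything below (the features, the data, the model choice) is hypothetical and merely illustrates how a ‘self-driving’ term could fill its own gap without a third-party adjudicator.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical history: (market index, fuel cost) -> price both sides treated
# as fair in past dealings. This data encodes the parties' broad objective.
X_hist = np.array([[100, 1.2], [105, 1.3], [110, 1.5], [95, 1.1], [120, 1.8]])
y_hist = np.array([10.0, 10.6, 11.4, 9.5, 12.9])

# The gap-filling model stands in for the mediator or court the parties
# would otherwise need when the contract is silent.
model = LinearRegression().fit(X_hist, y_hist)

def self_driving_price(market_index: float, fuel_cost: float) -> float:
    """Translate the broad ex-ante objective ('a fair, market-tracking price')
    into a concrete term for this invoice, using post-execution information."""
    return float(model.predict([[market_index, fuel_cost]])[0])

# The term is set at invoice time, not at signing.
print(round(self_driving_price(112, 1.6), 2))
```

The sketch also exposes Howell and Potgieter’s objection: a model trained on historic data can only interpolate the parties’ past sense of fairness, and offers no obvious answer for a situation unlike anything in its training history.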

III.  The Use of AI in Dispute Resolution A.  The Role of AI in Dispute Resolution Today there are two uses of AI to support activity in dispute resolution. The first is dedicated to court systems and takes different forms, such as the automatic analysis and preparation of deeds and documents, mainly in commercial and civil law, through the classification of documents, the recognition of discrepancies and the clustering of individual clauses. Especially in the case of legal documents, it is easier to apply artificial intelligence where the system ‘understands’ from a semantic point of view what is in the documents. But this takes time and very precise forms of knowledge. In the US, LawGeex29 pitted an AI-based algorithm against 20 corporate lawyers to identify, in the shortest possible time, the existence of potentially invalidating clauses in contracts covered by strict confidentiality agreements. The computer is said to have beaten the lawyers not only in terms of speed, but also in terms of accuracy of responses. In these cases, we are not talking about predictive justice

27 Howell and Potgieter (n 4) 9. 28 ibid. 29 LawGeex at www.lawgeex.com.

Artificial Intelligence and Contracts: Reflection about Dispute Resolution  273 or robot judges. We are talking about specific areas where AI serves to automate certain tasks. Algorithmic justice in court systems is, however, already at an advanced stage of application in litigation in contexts such as insurance, but it could also be useful in administrative justice to support the judge, for example in the matter of compensation for damages, especially for losses in the field of public procurement. In other words, once the damage and responsibility have been recognised, the algorithm could calculate the compensation on the basis of the data currently provided to the administration to determine the calculation criteria. In the civil field, for example in a case of compensation for damage caused by a road accident, we can imagine how a computer could establish the amount of damages that can be compensated on the basis of the surveys carried out by the investigators at the scene of the accident and the documentation produced. There are also disputes in the tax field that could be dealt with automatically or almost automatically, such as, for example, appeals against notices of assessment arising from bank audits. The advantage lies in the machine’s ability to point out the document that deviates from the standard, so that only that document has to be opened and analysed by the lawyer. A recent application of artificial intelligence in the field of criminal law is the Toga System, consisting of a database in which all the criminal offences governed by the Criminal Code and by special legislation are registered, which allows the verification, among other things, of jurisdiction, the admissibility of alternative procedures, limitation periods and the duration of precautionary measures, as well as the calculation of the penalty for each type of crime. Another recent application concerns the investigation into the collapse of the Morandi bridge, in which the Genoa Public Prosecutor’s Office decided to use FBI software, equipped with particularly complex algorithms, with the aim of cross-referencing all the data collected (about 60 terabytes) with the data from the electronic devices seized, with the technical documentation and with the opinions of both the prosecution and defence consultants. Moreover, in the US, artificial intelligence services dedicated to the world of law have existed for some time. Just think of the ROSS Intelligence site,30 which, equipped with a rich database of case law, allows lawyers to draw up documents taking into account the orientation of the judges on a given subject. The second use of AI lies in its prominent role in the field of ADR. For legal prediction in ADR, AI is already being used in applications such as ArbiLex,31 a predictive data analysis tool that uses Bayesian machine learning to assess and quantify probabilities, or the Lex Machina legal analysis platform, which has a timing analytics function that uses AI to predict the estimated time for a case before a specific judge. This function could also be used in relation to individual



30 Ross Intelligence at blog.rossintelligence.com. 31 ArbiLex at www.arbilex.coe.

274  Paola Aurucci and Piercarlo Rossi arbitrators, provided that the application has sufficient and sufficiently specific data on these arbitrators to make accurate predictions.

B. The Emerging Trends about AI in Dispute Resolution

Different AI techniques are employed in dispute resolution. The two main branches are knowledge-based systems (KBS) – 'computer programs that reason, and where knowledge is explicitly represented as ontologies or rules rather than implicitly via code'32 – and machine learning, which provides computers with the ability to make decisions and learn without explicit programming. The most recent developments of AI mainly relate to the latter. Machine learning is an approach to computer science in which the solution to an optimisation problem is not coded in advance, but inductively derived from data. The greatest practical successes with machine learning in legal applications have so far been achieved through 'supervised learning' techniques, a process that starts with a dataset labelled by humans according to the dimension of interest (the 'training data'). The system analyses this dataset and determines how best to predict the relevant outcome variable by reference to the other available characteristics of the data. The 'trained model' – that is, the algorithm with the set of parameters that optimised performance on the training dataset – is then put to work on a new test dataset to see how well it predicts outside the original training sample. These results are delivered via an interface that human experts can control and use.33 Natural language processing (NLP) and sentiment analysis are also AI technologies that are important for legal services. NLP can be defined as the application of computational techniques to the analysis and synthesis of natural language and speech, while sentiment analysis is the process of computationally identifying and categorising opinions expressed in a piece of text.34 AI applications used in dispute resolution may be designed for three purposes: managing cases and processes; fact management and analysis; and assisting the decision-making function through predictive models. Focusing on the last two purposes, tools for fact management and analysis are based on AI applications that identify, extract and analyse text in contracts and other documentation. Among the several existing tools, it is worth mentioning eBrevia,35 an e-discovery tool for the document review process, which uses machine learning to determine the relevant parts of documents.

32 Barnett and Treleaven (n 1) 402.
33 In unsupervised learning techniques, no desired outputs are given, and the system is left to find patterns within unlabelled data. See further W Barfield and U Pagallo, Advanced Introduction to Law and Artificial Intelligence (Edward Elgar Publishing, 2021) 14.
34 Barnett and Treleaven (n 1) 402.
35 eBrevia at www.ebrevia.com.
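By way of illustration, the supervised learning workflow just described – human-labelled training data, a trained model, and prediction outside the training sample – can be sketched in a few lines of Python with the scikit-learn library. The clause texts and labels below are invented for the example; a real system of the eBrevia kind would be trained on far larger corpora.

# A minimal sketch of the supervised learning workflow described above,
# using scikit-learn. The labelled clauses are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# The 'training data': clause texts labelled by a human reviewer
# according to the dimension of interest (1 = flagged, 0 = unproblematic).
train_texts = [
    "The supplier may terminate this agreement at any time without notice.",
    "Either party may terminate on ninety days' written notice.",
    "The customer waives all rights to bring any claim whatsoever.",
    "Disputes shall be resolved by arbitration seated in Geneva.",
]
train_labels = [1, 0, 1, 0]

# Fitting produces the 'trained model': an algorithm with the parameters
# that optimised performance on the training dataset.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# The model is then put to work outside the original training sample.
test_texts = ["The provider may suspend the service at its sole discretion."]
print(model.predict(test_texts))        # predicted label for the new clause
print(model.predict_proba(test_texts))  # the probabilities behind it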

Another tool to consider is Kira Systems,36 an AI tool that can read a contract and highlight the important terms, thereby enhancing visibility into the contract. Kira can carry out the due diligence process in minutes and ultimately aims to reduce the time lawyers spend reading contracts. Kira is thus not a tool that can decide a dispute; however, it fulfils the first step of rendering a decision: reading and understanding the contract. It can be useful for decision makers in reducing the time spent reading a contract and in discovering relevant information from contracts regardless of their structure. It is also important to mention ROSS Intelligence, a research platform for law and case law like Westlaw or LexisNexis, but easier to use and more intuitive. Where Kira allows for a quick reading of contracts, ROSS allows for the speedier finding of relevant laws, cases and responses to a given issue. ROSS allows users to ask questions and receive recommended reading, related case law and secondary resources. Kira and ROSS are not tools that can settle a dispute; they do, however, have the advantage of explaining their reasoning when giving an answer. Finally, there are the decision support tools that allow decision makers to issue better quality awards and enhance the legitimacy and acceptance of their decisions; these applications are mainly based on predictive data analysis tools and datasets of decisions. For predictive applications to be successfully implemented, data must be of a large volume and of sufficient variety and veracity.

C. The Problems about AI in Dispute Resolution

A robotic decision-making system replacing human decision makers would need to handle and accept cases, perform conflict checks and manage multiple appointments, cases and hearings at the same time, without human input and thus in a fully automated manner. In addition, it would have to perform factual analysis of a given case, including electronic fact discovery and document analysis. It would also have to assess the credibility of witnesses during questioning, using, for example, facial recognition applications that can detect a witness's micro-expressions. Conversely, cases involving hidden variables, such as social or economic considerations not evident in the legal or factual documents, bring a degree of uncertainty into the adjudication process which, at the current stage of AI development, cannot be properly accounted for by available systems. The key elements of a decision, even in the case of arbitration, can be found in laws, conventions and often in the legal practice shared by a community of scholars and practitioners. Commercial arbitration, for instance, rests on an international framework: its legal foundations are the New York Convention and the UNCITRAL

36 Kira Systems at kirasystems.com.

Model Law on International Commercial Arbitration, international instruments that have met with considerable success worldwide. An arbitration is a dispute resolution process conducted by a neutral third party, the arbitrator; the key feature distinguishing arbitration from other ADR methods, such as mediation or conciliation, is that if the parties do not reach agreement, the process ends with a binding decision, known as an arbitral award. The function of the award is to render justice to the parties, and it will fulfil this function if it complies with the applicable decision-making principle and if the parties consider the process fair and the arbitrator independent and impartial. The key elements can be traced to three different areas:

1. Involvement of an independent and impartial third party: there is no arbitration without a third party, the arbitrator, who is distinct from and neutral towards the parties to the dispute and conducts the proceedings impartially. In legal practice, the appointment of an arbitrator and, more specifically, potential challenges based on doubts about his or her independence and impartiality are often a lengthy and contentious phase of the arbitration process; the IBA's 'Guidelines on Conflicts of Interest in International Arbitration', published in 2014, highlight the practical relevance of this issue. The use of AI systems to run the process of selecting human arbitrators should be seen as an intermediate step towards a use in which AI itself plays the role of the independent and impartial third party. In principle, AI-based systems can be designed so that the system is separate from the parties and meets the above criteria. An AI-based dispute resolution system could, however, only be functional if it enjoyed some sort of status as a legal entity under the applicable laws. Consequently, whether a fully autonomous system can serve as an arbitrator depends on whether, at least for the purposes of administering an arbitration process, such a system enjoys legal capacity. Various attempts have been made to answer this question: the creation of a new type of legal personality specifically for 'advanced' robots (so-called e-personality) has been proposed, as has the establishment of 'self-driving corporations' capable of acting as arbitrators, thus allowing for fully automated company management.

2. Management of the process: arbitration is a dispute resolution process; it involves the establishment and review of the facts of the case, the application of procedural rules agreed upon by the parties and/or mandated by the lex loci arbitri (equal treatment of the parties, right to be heard, etc) and the application of the substantive law rules chosen by the parties or directed by the lex loci arbitri. Currently, there is no general AI application capable of handling the arbitration process in its entirety: what limits such a use is the availability of accurately labelled training data for machine learning tools. However, to the extent that arbitration laws are public and the management of the arbitration process must necessarily follow those laws, labelling might be feasible. Another important element of the process is the

hearing, during which the parties have the opportunity to present their case: if an artificial intelligence system can technologically ensure that the parties are heard, there is no conceptual objection to characterising this part of the process as a 'hearing'. A hearing with an AI system would thus functionally provide what is constitutive of a hearing, namely a forum for the parties to be heard and to present their case.

3. Pronouncement of the decision: the proper function of arbitration is to issue a final and binding decision, an award, if the parties do not resolve their dispute during the process. Under the UNCITRAL Arbitration Rules, the parties may choose the substantive law applicable to their case, as is often done in international commercial arbitrations. Often the law of the jurisdiction where the arbitral tribunal is seated is chosen, thus ensuring that the lex loci arbitri and the applicable substantive law belong to the same jurisdiction. Both functionally and in principle, AI-powered systems can undertake case analysis and make a decision on a specific case; however, common law legal systems are better suited than civil law systems to generating sufficient data for machine learning applications.

A major obstacle to the widespread adoption of AI-based adjudication could be the ability of the application to explain the reasoning behind a particular decision. If AI-based systems become able to perform the same functions as human arbitrators, not only at the same level but more efficiently and with higher quality, questions regarding the applicable legal framework will inevitably arise. Although the current legal framework is not extensively developed in relation to the use of AI, it is expected that several jurisdictions will experiment with regulating AI-based arbitration systems in the future. The New York Convention on commercial arbitration does not include any reference to the nature of the arbitrator or tribunal, requiring only that contracting states recognise and enforce arbitration agreements and arbitral awards, subject only to limited grounds for refusal. Despite this lack of specific provisions on the attributes or characteristics of an arbitrator, if we consider the technology available in 1958, when the NYC was adopted, its provisions were drafted with the implicit understanding that arbitrators could only be human.37 Accordingly, if the parties were to opt for arbitration in which human arbitrators are assisted by AI systems, this should be permitted

37 Since the NYC is an international treaty, its interpretation must follow the rules of Arts 31–32 of the 1969 Vienna Convention on the Law of Treaties (VCLT), which place the emphasis on 'the ordinary meaning to be given to the terms of the treaty in their context and in the light of its object and purpose'. The Convention's purpose is to facilitate the cross-border enforcement of arbitral awards between Contracting States and to promote international trade. A decision is considered an 'award' within the meaning of the NYC when it decides and resolves a dispute, in whole or in part, in a binding manner. Consequently, if a decision originating from an AI-based system performs these functions, an interpretation of the NYC based on its object and purpose would lead to the conclusion that such a decision is an 'award' within the meaning of the NYC – an interpretation that takes into account the object and purpose of the Convention rather than the technology available at the time of its adoption. In sum, the NYC can be interpreted to accommodate the technological developments of the modern era and the recognition and enforcement of awards rendered by fully autonomous, AI-powered arbitrators.

within the limits of the requirement that the parties be treated equally and that each party be given a full opportunity to present its case. At the present technological stage, it does not seem conceivable to replace arbitrators entirely with AI-based systems, but such intelligent machines certainly represent a valid support for the human arbitrator. However, software is only as good as the data it is given: if fed dysfunctional and biased data, algorithms could produce discriminatory results, and limited inputs will likewise yield selective outputs. Thus, while able to offer a binary answer based on probabilistic inference, AI 'can obscure many controversies under the guise of objective analysis'. Furthermore, the practice of decision making rests on a combination of factors, including experience, emotion and empathy. Deciding a case is not simply a matter of inductive reasoning but also of deductive reasoning, that is, giving consideration to specialist domain knowledge, expertise and practical understanding. Moreover, besides lacking the human discretion that is vital to dispute resolution and the cognitive skills central to legal decision-making, AI would also deprive court users of their right to a reasoned decision on the outcome of their case.

D. Ethical and Legal Concerns in the European Framework

On 4 December 2018, the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe issued the 'European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment'.38 It is a document of exceptional importance: for the first time at European level, having taken note of the 'growing importance of artificial intelligence (AI) in our modern societies and of the benefits expected when it will be fully used at the service of the efficiency and quality of justice', fundamental guidelines are identified to which 'the public and private subjects responsible for the project and development of the instruments and services of the AI' must adhere. In particular, the Ethical Charter sets out the following principles: 1) principle of respect for fundamental rights; 2) principle of non-discrimination; 3) principle of quality and security; 4) principle of transparency; 5) principle of guaranteeing human intervention.39 The last principle – also known as the 'under user control' principle – is the one of particular interest here, as it specifically aims to 'preclude a prescriptive approach' and to 'ensure that users are informed actors and exercise control

38 CEPEJ, 'European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment' at www.coe.int/en/web/cepej/cepej-european-ethical-charter-on-the-use-of-artificial-intelligence-ai-in-judicial-systems-and-their-environment.
39 www.europarl.europa.eu/meetdocs/2014_2019/plmrep/COMMITTEES/IMCO/DV/2020/01-22/RE_1194746_EN.pdf.

over the choices made'. The statement, concise as it is, therefore implies the widest possible use of AI in the field of criminal justice, but under two conditions: that operators are qualified to use the AI system and that each decision is subject to human control (for example, by the judge using the automated system). These conditions tend to avert what the Ethical Charter calls the 'deterministic approach', ie the risk of excessive automatism or standardisation of decisions. Having said that, let us see what applications artificial intelligence may have in the field of justice today or in the near future. On 12 February 2020, the European Parliament adopted in plenary session a resolution on automated decision-making processes. It follows the resolution of 12 February 2019 on a comprehensive European industrial policy on artificial intelligence and robotics and that of 16 February 2017 with recommendations to the Commission on civil law rules on robotics, and takes up the report on liability for artificial intelligence and other emerging digital technologies (prepared by the Expert Group on Liability and New Technologies and published on 21 November 2019) and the Guidelines for Trustworthy AI published on 8 April 2019; the Plan for a European Approach to Artificial Intelligence is in the pipeline.40 The tenet is certainly based on a consumer protection approach: having noted that the precision and learning speed of algorithms, superior to those of humans, explain their widespread use, the resolution expresses concern about the risks of an artificial intelligence capable of making decisions without human supervision. The main risk, given that machine learning is based on the recognition of patterns within data, is the systematisation of prejudices and discrimination rooted in the very way the relevant mechanisms are designed. To protect consumers in the age of artificial intelligence, the need has therefore been identified to develop tools that adequately inform consumers when they interact with artificial intelligence and automated decision-making processes, so that they can make informed decisions on their use. It is likewise hoped that more incisive measures will be taken to provide solid protection for consumers' rights, guaranteeing them against unfair and/or discriminatory commercial practices and against risks arising from commercial artificial intelligence services, ensuring the greatest possible transparency in these processes and providing for the use only of non-discriminatory and high-quality data. Separately, the European Parliament, through its Research Service, adopted in April 2019 its report for a Framework for an Algorithmic Reliability and Transparency Authority.41

40 European Parliament, Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics at www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.html#title1.
41 European Parliament, European Parliamentary Research Service, Framework for an Algorithmic Reliability and Transparency Authority (2019) at www.europarl.europa.eu/RegData/etudes/STUD/2019/624262/EPRS_STU(2019)624262_EN.pdf.

The Council of Europe has, for its part, established an Ad hoc Committee on Artificial Intelligence (CAHAI),42 which, through a system of extended multi-stakeholder consultations, aims to examine the feasibility and potential elements of a legal framework for the development, design and implementation of artificial intelligence based on the Council of Europe's standards on human rights, democracy and the rule of law. To this end, the Committee will review the current state of the law, including with regard to digital technologies, but will also consider supranational or regional legal instruments and the outcomes of work undertaken by other Council of Europe bodies and other international or regional organisations, with particular attention to gender issues, the promotion of cohesive societies and the protection of the rights of persons with disabilities.

IV. Conclusion

Although it seems unlikely that AI will replace human decision makers in the near future, AI-based platforms such as Opus 2,43 Luminance,44 Kira and ROSS Intelligence have already made great strides in transforming the practice of adjudication, offering solutions that are data-driven, faster and able to reduce the possibility of error. As is widely known, many AI applications, including the above-mentioned platforms, do not allow us to fully understand how they work or the logic behind them, owing to the so-called 'black box' effect, whereby the workings of algorithmic models are largely opaque. This characteristic is considered one of the biggest problems in the application of AI techniques: it makes the decisions of the machine non-transparent and often incomprehensible even to experts or the developers themselves.45 Among the principles set out in the Ethical Charter, respect for human rights and non-discrimination by artificial intelligence applications is of fundamental importance. It is a question of ensuring, from the design stage to practical application, that the solutions respect the rights guaranteed by the European Convention on Human Rights and by Convention No 108 of the Council of Europe on the protection of personal data. The principle of non-discrimination is expressly stated because of the capacity of artificial intelligence processing – especially in criminal matters – to reveal existing discrimination by grouping or classifying data concerning persons

42 CAHAI, Ad hoc Committee on Artificial Intelligence at www.coe.int/en/web/artificial-intelligence/cahai.
43 Opus 2 at www.opus2.com.
44 Luminance at www.luminance.com.
45 R Goebel, A Chander, K Holzinger, F Lecue, Z Akata, S Stumpf, P Kieseberg and A Holzinger, 'Explainable AI: The New 42?' (2018) 11015 Lecture Notes in Computer Science 300.

or groups of persons. Public and private actors must therefore ensure that these applications do not reproduce or exacerbate such discrimination and do not lead to deterministic analyses or practices. Some qualitative challenges concerning the methodology of analysing and automatically processing judicial decisions are also considered. It should be possible to process data by machine learning on the basis of certified originals, and the integrity of such data should be guaranteed at all stages of processing. In addition, special care should be taken in selecting the decisions that will be processed by machine learning. These should be truly representative of the different realities on which the judge is called to rule and should not correspond to predetermined analysis grids (eg designers might tend to discard decisions that do not lend themselves to machine learning correlations of linguistic sequences, or poorly reasoned decisions). The need for a secure technological environment for storing and running machine learning models and algorithms is also emphasised. Of equal importance is the principle of transparency of the methodologies and techniques used in the processing of judicial decisions. Emphasis is placed here on the need to make data processing techniques accessible and comprehensible and to authorise external audits by independent parties in order to identify possible distortions; a system of certification of the various applications by these authorities is also encouraged. In addition, the need not to remain passive but, on the contrary, to reinforce user autonomy in the use of artificial intelligence tools and services is underlined. The judge, in particular, should be able to return at any time to the judicial decisions and data that have been used to produce an outcome, and should retain the ability to deviate from them in view of the specificities of the case. Every user should be informed, in clear and comprehensible language, of the binding or non-binding nature of the solutions proposed by AI tools, of the various possible options, and of his or her right to legal assistance and to recourse to a court. The challenges posed by artificial intelligence in the field of justice can only be dealt with effectively through a cross-fertilisation of knowledge: in this respect, the creation of multidisciplinary teams made up of judges, social science researchers and computer scientists is strongly recommended, both at the stage of developing and piloting the proposed solutions and at that of implementing them. In general, a gradual approach is recommended in integrating artificial intelligence into judicial systems: it is essential to verify that the applications proposed (often by private companies) add value for users and actually contribute to achieving the objectives of the public service of justice. It is important to make AI applications in dispute resolution systems more transparent. To this end, one approach could be to enact legal norms compelling software developers in the judicial domain to use explainable AI models. Explainability has been discussed as a requirement under

data protection law, though there are some doubts over the interpretation of the right to explanation. The debate has focused on two provisions. Article 22(3) GDPR holds that, in certain cases of automated processing, the data controller shall implement suitable measures to safeguard the data subject's rights, freedoms and legitimate interests, including at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.

On the other hand, Recital 71 of the GDPR, in discussing Article 22(3) GDPR, famously states that the safeguards should, inter alia, include 'the right […] to obtain an explanation of the decision reached after such assessment'. Since the right to explanation is contained only in the (non-binding) recital, and not in the binding text of Article 22(3) GDPR, it seems that a right to an explanation of individual decisions, which could include global or local explanations, does not follow from Article 22(3) GDPR. However, the asymmetry of information between the software developer and the final user (lawyer, decision maker or citizen) calls for a guarantee of contractual explainability, in much the same way that the consumer is protected against the producer.
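To make the distinction between global and local explanations concrete, the following minimal Python sketch shows one rudimentary form a 'local' explanation can take for a linear text classifier: listing the terms that contributed most to a particular prediction. It is our own illustration of the concept, built with the scikit-learn library on invented data; it is not what the GDPR requires, nor how any of the platforms discussed above actually works, and dedicated explainability tooling (eg LIME or SHAP) is considerably richer.

# A minimal sketch of a 'local explanation' for a single decision of a
# linear text classifier: list the terms that contributed most to the
# score for one document. Data and model are invented for illustration.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "clause waives all liability",
    "standard notice period applies",
    "claimant waives right to appeal",
    "payment due in thirty days",
]
labels = [1, 0, 1, 0]

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

# Explain one new document: each term's contribution to the decision
# score is its tf-idf weight multiplied by the learned coefficient.
doc = "the supplier waives all duty of care"
weights = vec.transform([doc]).toarray()[0] * clf.coef_[0]
terms = vec.get_feature_names_out()
for i in np.argsort(weights)[::-1][:3]:
    print(f"{terms[i]}: {weights[i]:+.3f}")

Even this crude read-out makes the decision contestable in the sense of Article 22(3) GDPR: the user can see which features drove the outcome and challenge them, which is the practical point of the contractual explainability argued for above.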

INDEX The Accord Project  164 Adobe PDF  161–2 advanced AI systems  202, 205, 207 adversarial machine learning  13 advertisements  21, 224–7 audience targeting  225–6 delivery  225–7, 235 discrimination  224 personalisation  224–5, 244 platform services  223–5 volunteered data  225–6 advice  137, 138, 147 agency autonomous agents  12 capacity to contract  66 drafting  34–5 fraud  32–3 liability  38–9 machine learning  68–9 negotiations  34–5 planning agents  61, 75–6 rational agents  63–4 remedies  219 shared intentionality thesis  62–9, 75, 79–80 software agents, using  34 vicarious liability  39 Airbnb  82, 89, 151, 153 ALaaS (AI as a Service)  20 Alexa  120 algorithms see also anticompetitive collusion and algorithms abstractive  184, 193–4 administrative justice  273 backpropagation algorithms  9 black box effect  280 blockchain  266, 269 clustering-based approaches  191 code, contracting in  160, 216–19 communication  25, 259, 263 concept-based approaches  192 court systems  273 damages  212, 273 deep learning  192–3 deterministic algorithms  30–2

dispute resolution  273, 279, 280 extractive algorithms  184, 191–2 fairness  16–17 financial markets  23 Framework for an Algorithmic Reliability and Transparency Authority (European Parliament)  279 graph-based approaches  192, 193 information asymmetries  24–5, 39 Itemset-based approaches  192 learning  20, 192, 274, 279 machine learning  20, 42, 67–9, 192, 274 optimisation-based approaches  192 per se prohibitions  245 personalised choice architectures  229–30 platform services  90, 222–3, 225, 229–30, 232, 247 price discrimination  22 probabilistic methods  10 Q-learning algorithms  251 remedies  204–5, 214–19 self-learning algorithms  32, 251, 258 smart contracts  66 specific performance  210–11 statistical-based approaches  10–11, 191 summarising documents  183–4, 191–4, 197–8 tax disputes  273 text summarisation  183–4, 191–3 topic-based approaches  192 translation  156 Unfair Terms in Consumer Contracts Directive  234 Allen, D  270 alternative dispute resolution (ADR)  22–3, 267, 273–4 Amazon  81, 89, 120, 257 analysis through AI advanced analysis capabilities  108 Big Data Analytics  368, 271 collusion  260 data to manage, conclude and analyse contracts, use of  81

284  Index deeds and documents, automatic analysis of  272 deterministic analysis  281 document analysis systems  108 facts, analysis from  274–5 Latent Semantic Analysis  192 management of contract lifecycle  100, 108–9 predictive analytics  83 revolution of contract through legal technologies  83, 90, 92 sentiment analysis  274 anonymisation  123, 131 Answer Wizard  10 anticompetitive collusion and algorithms  249–66 agreements between undertakings, decisions by associations of undertakings and concerted practices  254–6, 264 algorithm communication  259, 263 algorithmic antitrust, time for  260–5 blockchain  266 cartels, detection of  257–8 code level and data level, distinction between alignment at  258 Competition and Markets Authority (CMA)  252, 257–8, 263 competition policy  256–60 compliance  262 concerted practices  254–5, 260–1, 264 conscious parallelism  251, 254, 256–9 coordination  251–60, 265–6 decision-making  256 design  262, 266 Digital Markets Act  262, 266 economic theory and antitrust rules  253–6 efficiencies  259–61 enforcement  258–64, 266 European Commission  250, 253, 257, 260, 262–3, 265 explicit collusion  260, 265 feedback  251 gatekeepers  262, 266 hub-and-spoke scenarios  258, 260, 265 investigations  263–6 legal certainty  261 machine learning  250, 264 margins  264–5 meeting of algorithms  253, 261 meeting of minds  253, 255, 261 monopolies  253–4 oligopolies  251, 253–6, 261–2, 265

pricing  250–2, 255–62, 264–6 regulation  253, 261–2, 265 reinforcement learning  250–1 self-learning algorithms  251, 258 Sherman Act (US)  254 signalling practices  259 supervised and unsupervised learning  250 tacit collusion  251, 253–9, 262, 265 training  250–1 transparency  252–4, 256, 262, 265 trial-and-error process  251 unlawful explicit collusion  251 wait-and-see approach  265 Application Programming Interfaces (APIs)  106 ArbiLex  273 arbitration  267–78 awards  276 choice of law clauses  68 IBA Guidelines of Conflicts of Interest in International Arbitration  276 independence and impartiality  276 lex loci arbitrators  276–7 New York Convention  275–6, 277–8 self-driving corporations  276 third parties  276 UNCITRAL Arbitration Rules  277 UNCITRAL Model Law on International Commercial Arbitration  276 argumentation support tools  23 Aristotle  159 artificial intelligence, definition of  63–4 ASCII  160 Asilomar manifesto for AI principles (Future of Life Institute)  18 Assad, S  264 Association of Corporate Counsel (ACC)  111 asymmetries information, of  24–5, 39, 67–8, 230, 282 power imbalances  51, 237–8 software  95–6 structural asymmetry between consumers and traders  240–1 attribution  29–31, 60–1, 69–75 audience targeting  225–6 Austin, JL  174 automated services advertising delivery  225, 235 consent, defects in  205, 207 decision-making  36, 279

Index  285 Digital Content Directive  127–40, 143–4, 154 force majeure and frustration  208–9 logic used, information on  116 real legal services, combined with  138–40 specific performance  210–11 text summarisation  182, 183–6, 197–8 autonomous systems advanced AI  205 agents  12 decentralised autonomous organisations (DAOs)  42, 45–7, 48 deep learning  12 hyper-autonomous contracting, regulation of  42–50, 52, 56–8 liability  38–9 multilateral autonomous contracts  34 recontextualisation  55–8 remedies  202, 203–5, 219–20 smart contracts  41–2 average consumer  238–41 Azure (Microsoft)  10 bargaining power, asymmetries of  237–8 Barnett, Robert  78–9 BATNAs (best alternative to a negotiated agreement)  22–3 battle of forms  34 Bayesian machine learning  273 Bayesian networks  10, 15 behaviour biases  22, 28, 29 big data profiling  22 conditioning  226–7 exclusion of liability  144 modification  222–3 platform services  222–3, 224–7, 232 predictions  226 reactive behaviour  5 vicarious liability  38 best endeavours  54 biases  22, 28, 29, 207, 225 big data Big Data Analytics  268, 271 deep learning  4–5, 11–14 machine learning  67–8 price discrimination  22 profiling  22, 24–5 self-driving contracts  271 shared intentionality thesis  62 typifications  94 Bigle Legal  102, 104

biometrics  122, 124 Black Boiler  104 blackbox  42, 215–16, 219–20 Blawx  164, 168–71, 174–5, 176–7 blockchain  43–50 ADR  267 advanced AI systems  202 algorithms 2366, 269 arbitration  270–1 black-box problem  42, 215–16, 219–20 collusion  266 cryptography  44 The Dao, attack on  45, 46–51, 56 decentralised autonomous organisations (DAOs)  42, 45–7, 48 decentralisation  42, 45–7, 266, 269 dispute resolution  267, 268–72 Ethereum  42, 44–5, 46–8, 50–1, 269, 271 hard fork response  47, 50–1 hyper-autonomous contracting, regulation of  42–50 if-this-then-that’ logic statements  269 integration of AI and blockchain technology  47–50, 58 lex cryptographia  49–50, 51–2 machine learning  42, 47–9 natural language  48 Online Dispute Resolution (ODR)  270 remedies  269 security  47–8 self-driving contracts  271–2 smart contracts  20, 41–52, 56–8, 84, 202, 268–70, 272 specific performance  210–11 start-ups  47–8 transparency  42 trust  48, 50 vulnerability in code  51 boilerplate clauses code, contracting in  161, 163, 171, 173, 175–7, 179 drafting  161 modular boilerplate  177 natural language interface  69 Unfair Terms in Consumer Contracts Directive  234 Boole, George  160 Boolean algebra  160 bootstrap method  11 Boyer, Pascal  77–8 Box  107 Bratman, Michael  76–7

286  Index Breese, Jack  10 BRYTER  111 Buchwald, M  271 business-to-consumer (B2C) commercial practices  231, 233–4, 246 CADUCEUS  8 Cambridge Analytica  22 cancel, right to  28–9 capacity to contract  61, 66, 77–8 cartels, detection of  257–8 Case-Based Reasoning (CBR) paradigm  4, 14–15 case management  274, 276–7 Casey, AJ  172, 175, 271 Catalini, C  266 causation  212–14 change management  88 Charter of Fundamental Rights of the EU  228 chatbots, building  22–3, 115–33 advice  138 biometric identification  122, 124 chatbot, definition of  115 Computer Programs Directive  116 conformity criteria  147 copyright  115–16, 117, 121, 123–33 data protection  115–16, 120–33 anonymous data  123, 131 data subjects, consent of  116, 127–9, 132 deletion  122 object, right to  116 personal  116, 123–33 purpose of use of data  122 principles  121 training from models  116, 126 voice samples  120 withdrawal of consent  116, 128 data used to build a chatbot  123–6 databases  126, 130 deep learning  12 Digital Content Directive  137, 147, 154 Digital Single Market Directive  116, 126, 129–30 E-Commerce Directive  152 Enforcement Directive  132 e-Privacy Directive  121–2, 127 EU law  116, 121–30, 132–3, 147, 152, 154 execution of requests  122 expressing responses to users  118 General Data Protection Regulation (GDPR)  116, 121–4, 127–8, 132–3

Guidelines on Virtual Voice Assistants (Guidelines on VVAs) (EDPB)  116, 122–4, 127, 131 Infosoc Directive  116, 129, 132 initial creation  126–7 interactive training  118 language models  115–16, 118–22, 124, 126–33 legal basis for building a chatbot  126–31, 133 legal framework  123–32 life cycle of chatbots  126–7 machine learning  117–20, 122, 127 memory, length of  121–2 natural language processing  115, 117–18, 123 privacy  120–1, 122, 127 profiling  122 programs, as  116, 117 public interest  128, 133 public-private partnerships (PPPs)  128, 133 purpose of use of data  122 research exception  128–32 sensitive personal data  123–4 storage  121–2 technical background  117–23 text and data mining (TDM) exception  116, 127, 129–30, 132–3 training from models  116, 118–21, 123–4, 126–8, 133 understanding utterances  117–18 Virtual Voice Assistants (VVAs)  116, 121–4, 127, 131 voice-activated chatbots  120 Chief Legal Innovators Officers (CLIO)  88 Chief Legal Officers (CLOs)  111 Chinese Room experiment  6 choice of law clauses  68 Chomsky, Noam  160, 167–8, 173 Church, Alonzo  8 Cicero architecture  164–5, 172–3 citations  195 civil law systems  202–3, 206, 209, 272, 279 claims management companies  138–9, 145 Clark, Sally  16 CNN (Convolutional Neural Network)  12 code, contracting in  92–3, 155–80 algorithms  160, 216–19 Blawx  164, 168–71, 174–5, 176–7 boilerplate clauses  161, 163, 171, 173, 175–7, 179 Boolean algebra  160

Index  287 code is law statement  41–2 context  157–8 drafting contracts  155–6, 161–3, 173, 175–80 emerging frontiers  175–80 Ergo  164–6, 172–3 ethics  216–19 expert systems  161 human error  161–2 integrated development environment (IDE)  162 Internet  41 interpretation  35, 39, 156–61, 169–71, 174–6 Lexon  164, 166–8, 173, 176–7 logical ancestors and formalistic return  159–63 Low Code/No Code Builders  110–11 machine learning  161 mathematics  158–62, 172, 174 meaning, processing  158–9 natural language  162–73, 175, 177 new encasings  176–80 no-code platforms  176 party autonomy  171, 173, 175–6 programming languages  155–80 Prolog  163–4, 168 Python  163–4 remedies  216–19 rule of code  43, 52–8 semantics  157, 159, 167, 171–3 smart contracts  41–2, 155–80 statistics  158–60 symbolism  159–60 syntax  162–3, 171 translation  156–9, 164–71, 174–6, 180 vulnerabilities in code  51 cognitive biases  225 cognitive science  4, 60–1 Cohen, Julie E  223 Collins, Hugh  51–2, 56–7 collusion see anticompetitive collusion and algorithms commodification of legal practice  85–7 common law systems  60, 70–2, 79, 202–3, 277 competition see anticompetitive collusion and algorithms compliance and regtech through contract software  92–4 Computer Programs Directive  116 concerted practices  254–5, 260–1, 264

conciliation  276 conferral, principle of  227 confidentiality  146–7, 272–3 conflicts of interest  146–7, 276 conformity criteria Digital Content Directive  135, 139, 140–8, 149, 154, 236–8, 242, 247 ethical duties  146–7, 154 exclusion or limitation of liability  149 fairness  146 licensing obligations  145–6, 154 mandatory indemnity insurance  147–8, 154 objective conformity criteria  135, 144–8, 149, 154, 236–8, 247 reasonably-expectable-quality test  145 subjective conformity criteria  141, 236–7 connectionism or sub-symbolic approach to AI  9 conscious parallelism  251, 254, 256–9 consensus ad idem  31 consent data subjects  116, 127–9, 132 defects  31–3, 39, 205–7, 219 shared intentionality thesis  60, 78–9 Unfair Terms in Consumer Contracts Directive  234 withdrawal  116, 128 consumer law  221–47 consumer protection  222–3, 228–43, 246–7, 279, 282 Consumer Rights Directive  27–8, 142–3, 241 Context Free Grammars (CFG)  167–8 contra proferentem rule  35 Contract Express  102 contract generators  34, 136–7, 139–45, 148, 150–2, 154, 209 contracting and contract law  19–40 ADR  22–3 argumentation support tools  23 BATNAs  22–3 behavioural biases  22, 28, 39 conformity  21 consent, defects in  31–3, 39 data protection  26–7 direct communication with other systems  25 enforcement of rights  25 exploitative contracts  27, 28–9, 39 fair trading law  26–7 fairness  36 formation of contract  29–31

288  Index fraud  32–3 freedom of contract  24–5, 39 information asymmetries  24–5, 39 interpretation  34–6, 39 liability  37–9 machine learning  19–21 mistake  31–2 natural language processing  19–21 Negotiation Support Systems (NSSs)  22 party autonomy  24–5 performance phase  23 post-contractual phase  23 pre-contractual duties  21–2, 26–31, 39 price discrimination  22, 27–8, 39 process of contracting  22–3 smart contracts  20 standard terms, incorporation of  33–4 subject of contract, where AI systems are the  20–1 tools for contracting  22–3 contractual liability  37–9, 91 agents, AI systems as  38–9 autonomous systems  38–9 behaviour of human agents  38 damages  37–8, 39 Digital Content Directive  37, 135, 139, 148–9 ethics  279 exclusion of liability  148–9 fault  37–8 force majeure  37 limitation of liability  46, 148–9 misconduct of system  37 non-conforming digital content/services  37 non-performance  37 price reduction  37 repair or replacement  37 Sale of Goods Directive  37 strict liability  37 termination of contract  37 waivers  101 convergence  42 cookies  93 copyright  123–33 Berne Convention  124–5 chatbots  115–16, 117, 121, 123–33 Database Directive  126 Digital Single Market (DSM) Directive  116 Enforcement Directive  132 EU law  125, 132 intellectual creation  125 language models  131

literary and artistic works  124–5 machine learning  117 making available right  126 originality  125, 131 private use exception  129 quotation right  129 reproduction right  126, 129–30, 132 temporary reproduction right  129–30, 132 Council of Europe (CofE) Ad Hoc Committee on AI (CAHAI)  280 Convention No.108 on personal data  280 European Commission for the Effectiveness of Justice (CEPEJ)  278–9 European Ethical Charter for the Use of AI in Criminal Justice Systems and Related Environments  278–9 country-of-origin principle  135, 149–54 court systems  196–7, 272–3, 281 Covid-19  267 criminal law  32–3, 273, 278–9 cryptography  44 cultural social practices  77–8 customs  53 damages  28, 37–8, 39, 91, 211–14 algorithms  212, 273 calculation  273 causation  212–14 Digital Content Directive  148–9 foreseeability  212–14 manufacturers  214 non-contractual liability and contractual liability, boundary between  212 probabilistic computing  213 product liability  211–12 quantification  212–13 road accidents  273 specific performance  209 supervised learning  213 unsupervised learning  213 Unfair Contract Terms Directive  148–9 The Dao attack  45, 46–51, 56 dashboards  109 data protection accountability  93 anonymous data  123, 131 automated decisions, information on logic used for  116 chatbots  115–16, 120–33 cookies  93 Council of Europe Convention No.108 on personal data  280

Index  289 data controllers  121 data subjects, consent of  116, 127–9, 132 deceased persons, personal data of  124 deletion  122 dispute resolution  281–2 Enforcement Directive  132 ethical duties  146–7 explainability  281–2 General Data Protection Regulation (GDPR)  26–7, 116, 121–4, 132–3 accountability  93 consent  127–8 deceased persons, personal data of  124 default option  93 explainability  282 remedies  214–15, 218–19 models  116 object, right to  116 personal  116, 123–33, 138, 218, 228 platform services  228 pre-contractual duties  26–7 purpose of use of data  122 principles  121 public interest  128 sensitive personal data  123–4 supervision  218 training from models  116, 126 voice samples  120 withdrawal of consent  116, 128 databases chatbots  126, 130 Database Directive  126 extraction  130 Davidson, Donald  64, 75–7 De Berk, Lucia  16 De Filippi, P  49–52, 270 Deakin, Simon  160 decentralised autonomous organisations (DAOs)  42, 45–7, 48 decision-making argumentation support tools  23 automated decision-making  36, 116 collusion  256 consumer protection  279 deterministic approach, avoidance of  279 deviate from decisions and data, ability to  281 dispute resolution  274–8, 281–2 ethics  279 explainability, right to  214–16, 281–2 fairness  36

human intervention, principle of guaranteeing  278–9 informed of binding or non-binding nature, right to be  281 machine learning  281 problems  275–8 pronouncement of decisions  277 remedies  214–16 deeds and documents, automatic analysis and preparation of  272 deep learning algorithms  192–3 big data  4–5, 11–14 decision support  5 deep neural networks  11–14 Encoder-Decoder model  193 models  192–3 multilingual documents, summarising  181, 190, 192–3, 197–8 neuroscience  4–5 pointers  193 reinforcement learning models  193 Delrahim, Makan  266 Dennett, Daniel  61, 73–4, 79 Derrida, Jacques  155–6, 175 Descartes, René  159 design asymmetries and clauses evaluation  96 collusion  262, 266 compliance, by  262 objective intent  73 platform services  225, 227 regulation  94–5 Diedrich, Henning  167–8, 173 Digital Content Directive (DCD)  135–49 advice offered online  137, 138, 147 applicable law, compliance with the  143–4 asymmetries of bargaining power  237–8 audio sharing  236 automated services  137–40, 143–4, 154 behaviour, exclusion of liability for consumer’s  144 border between digital and non-digital work  137 chatbots  137, 147, 154 conflict of interests  146–7 conformity criteria  135, 139, 140–8, 149, 154, 236–8, 242, 247 consumer contracts  135–49 contract generators  136–7, 139–45, 148, 154 counter-performance  236

290  Index damages  148–9 differentiation between technical part and subject matter part  139–40, 154 digital services, definition of  236 do-it-yourself products  136–7 document generators  137 drafting tools  143–4 ethical duties  146–7, 154 exclusion or limitation of liability for legal tech services in standard terms  148–9 exclusions from scope  136–7, 140 fairness  146 fit-for-purpose test  142 good faith  242 harmonisation  135 indemnity insurance  147–8, 154 intermediation services  138, 149, 154 legal profession national rules as objective conformity criteria  135, 144–8, 149, 154 legal tech services as falling within scope  135, 136–40 liability  37, 135, 139, 148–9 licensing obligations relating to legal profession  145–6, 154 mass claim enforcement services  136, 138–9, 145, 154 mixed contracts  138, 151, 154 national laws, compliance with  142–9 objective conformity criteria  135, 139, 140–9, 154, 236–8, 247 offline and online parts, mixture of  151, 154 outcomes, predicting  147 payable services  138–9 personal data, provision of  138 personal services exception  136–7 platform services  149, 236–8, 241–2, 247 pre-contractual information obligations  142–3 price reduction  146, 148 quality of legal services  140–1 reasonableness, standard of  241–2, 247 remedies of consumers  135, 140, 143, 145–6, 148–9, 154 safe harbour privileges  149 security standards  142 smart contracts  137 standard terms  148–9 subjective conformity criteria  141, 236–7

supply of digital content and digital services  135–49 templates  137 termination  146, 148 Unfair Contract Terms Directive  148–9 updating obligations  142, 143 video sharing  236 Digital Markets Act  262, 266 Digital Single Market (DSM) Directive  116, 126, 129–30 disclosure discovery and document analysis  275 e-discovery  110, 274–5 Unfair Terms in Consumer Contracts Directive  234–5 discrimination  22, 24, 27–8, 36, 224, 278–81 dispute resolution  267–82 see also dispute resolution, use of AI in ADR  22–3, 267 arbitration  267–72 Big Data Analytics  268, 271 blockchain-based dispute resolution mechanisms  267, 268–72 discrimination and equality  280–1 ex-ante intentions  270–2 ex-post intentions  271–2 governance  268 human rights  280–1 management  268 Online Dispute Resolution (ODR)  83, 270 post-contractual phase  23 predictive models  271–2 self-driving contracts  268–72 smart contracts  267–72 third parties  267–8, 270 dispute resolution, use of AI in  272–80 see also arbitration ADR  273–4 algorithms  273, 279, 280 classification of documents  272 common law systems  277 conciliation  276 confidentiality  272–3 court systems  272–3 criminal law  273 data protection  281–2 decision-making  274–8, 281–2 deeds and documents, automatic analysis and preparation of  272 discovery and document analysis  275 emerging trends  274–5

Index  291 ethical and legal concerns in European framework  278–80 European Ethical Charter for the Use of AI in Criminal Justice Systems and Related Environments  278–9 explainability  281–2 facts, analysis from  274–5 information, adequacy of consumer  279 machine learning  274–5, 277, 281 management of cases/processes  274, 276–7 mediation  276 multidisciplinary teams, creation of  281 natural language processing  274 predictive models  272–3, 274–5 problems  275–8 research platforms  275 robot judges  272–3 role of AI in dispute resolution  272–4 sentiment analysis  274 transparency  281–2 do-it-yourself products  101, 136–7 Do-Not-Pay  25 Dolin, Ron  177–8 dopamine activity  225 drafting  100–3 agents  34–5 boilerplate clauses  161 choice of law clauses  68–9 code, contracting in  155–6, 161–3, 173, 175–80 consistency  102 Digital Content Directive  143–4 experts  103 fixed provisions  100, 101–3 formal languages  161–3 hybrid languages  177, 180 interpretation  35 machine learning  68–9 market uptake  161 negotiations  100, 103–4 outsourcing  101 parallel drafting  177 questionnaire structure  102–3 review  100, 103–4 revolution of contract through legal technologies  83, 84–7, 90–1, 96 solutions  101–3 structuring contracts  103 templates  100, 101, 103, 162–3, 178 text editors  100 variable provisions  100, 101–3, 112

Dropbox  107 due diligence  23, 275 duress  28 Easterbrook, Frank  71 eBay  81, 90 Ebers, Martin  141 eBrevia  274–5 E-Commerce Directive  149–54 chatbots  152 contract generators  150–2, 154 country-of-origin principle  135, 149–54 Digital Content Directive  149 information society services, definition of  150–4 legal tech services  135, 149–54 notaries, exclusion of  149–50 race-to-the-bottom  152 restrictive measures under national law  153, 154 templates, publishers of legal  151–2 eContracts Schema  178–9 eDiscovery Platform  110 editing  100, 161–2 efficiencies  203–4, 208, 233, 259–61 eIDAS Regulation  105–6 Eigen Technologies  109 Eigen, Zev  171 ELIZA effect  157 emotional contagion  225 Encoder-Decoder model  193 enforcement collusion  258–64, 266 consumer organisations  25 damages  132 Enforcement Directive  132 intellectual property  132 mass claim enforcement services  136, 138–9, 145, 154 public watchdogs  25 rational apathy  25 threats of legal enforcement  54–5 unenforceable terms  54 Ensemble Learning  10 e-Privacy Directive  121–2, 127 Ergo  164–6, 172–3 The Accord Project  164 Cicero architecture  164–5, 172–3 ERP software  87 error see mistake

292  Index e-Signing  104–8, 111 advanced electronic signature (AES)  105, 1–7 Application Programming Interfaces (APIs)  106 eIDAS Regulation  105–6 ETSI  106 post-signing management  100, 107–8 qualified electronic signature (QES)  105, 107 Qualified Trust Service Provider (QTSP)  105–6 simple electronic signature (SES)  105–7 type of contract or legal acts  105 validity of standards  105 Estonia  124, 149 Ethereum  42, 44–5, 46–8, 50–1, 269, 271 ethics and values arms race  18 code  216–19 confidentiality  146–7 conflict of interests  146–7 conformity criteria  146–7, 154 data protection  146–7 Digital Content Directive  146–7, 154 dispute resolution  278–80 failure transparency  18 human control  18 judicial transparency  18 loyalty  146 non-subversion  18 personal privacy  18 platform services  228–9, 238, 245 responsibility  18 safety  18 sharing of benefits and economic prosperity  18 value alignment  18 EU law see also Digital Content Directive (DCD); platform services and EU; Unfair Commercial Practices Directive; Unfair Contract Terms Directive chatbots  116, 121–30, 132–3, 147, 152, 154 collusion  250, 253, 257, 260, 262–3, 265 Computer Programs Directive  116 consumer protection  279 Consumer Rights Directive  27–8, 142–3, 241 decision-making  279 design regulation  94

Digital Single Market (DSM) Directive  116, 126, 129–30 E-Commerce Directive  135, 149–54 e-Privacy Directive  121–2, 127 Enforcement Directive  132 ethics  279 European Commission  250, 253, 257, 260, 262–3, 265 European Parliament  279 Framework for an Algorithmic Reliability and Transparency Authority (European Parliament)  279 Guidelines for Reliable AI  279 harmonisation  135, 142 Infosoc Directive  116, 129, 132 liability for AI and other emerging digital technologies  279 models  94 Modernization Directive  27, 232, 243 Plan for a European Approach to AI  279 pre-contractual duties  26 product liability  212 remedies  214–15, 218–20 robotics, civil law rules on  279 Sale of Goods Directive  37 European Convention on Human Rights (ECHR)  280–1 Everlaw  87 evolutionary anthropology  77 exclusion of liability  144, 148–9 experts code, contracting in  161 drafting  103 expert systems  4, 7–9, 160–1 natural language  160 explainability  214–16, 281–2 Explainable AI (XAI)  13 exploitative contracts  27, 28–9, 39 Extensible Markup Languages (XML)  177–8, 180 Ezrachi, A  252, 260 Facebook  191, 224–7 advertising delivery  225–7 audience targeting  225–6 automated auction  225 machine learning  226 volunteered data  225–6 display  224 emotional contagion  225 predictions of consumer behaviour  226

Index  293 facts, analysis from  274–5 fairness see also Unfair Commercial Practices Directive; Unfair Contract Terms Directive algorithms  16–17 automated decision-making  36 conformity criteria  146 Digital Content Directive  146 fair trading law  26–7 machine learning  17 price discrimination  36 standard terms  36, 229 terms discrimination  36 FastText  191 fault  37–8 feedback  90, 96, 251 financial intermediaries, regulation of  91–2 financial markets  23 fit-for-purpose test  142 Flightright  25 folk psychology  61, 69, 74 force majeure  37, 207–9 foreseeability  64, 70, 212–14 formation of contract  23, 29–31 agency  62–3, 75–6 attribution  29–31 computer errors  29 consensus ad idem  31 deterministic algorithms  30 intention to create legal relations  72 objectivity  29, 70 prior intention  29 remedies  202 shared intentionality thesis  60–1, 66, 75, 79 Turing test  71 UN Convention on the Use of Electronic Communications in International Contracts 2005  30 Foucault, Michel  156 France algorithm communication  259, 263 code level and data level, distinction between alignment at  258 collusion  266 Digital Economy Unit  263 fraud  32–3 freedom of contract  24–5 Fried, Charles  78 Frumer, Yulia  157–8 frustration  207–9

Fuller, Lon  78 Future of Life Institute  18 Gal, MS  261–2 Gautier, A  264 Gebicka, A  260–1 generators automatic headline generation  184 contracts  34, 136–7, 139–45, 148, 150–2, 154, 204, 209 Digital Content Directive  136–7, 139–45, 148, 154 document generators  137 drafting  34 E-Commerce Directive  150–2, 154 interpretation  34 perfect contracts  204 specific performance  209 Germany algorithm communication  263 code level and data level, distinction between alignment at  258 collusion  266 formation of contract  30 information society services, definition of  152–3 liability  139 licensing obligations relating to legal profession  145–6 mass claim enforcement services  145 pricing  264–5 GloVe (Global Vectors)  191 GOFAI (good old-fashioned AI) contracts  65–7 good faith  206, 211, 234, 242 Google advertising  68, 224 Google Documents  103 Google Drive  107 Google Home  120 Google Translate  157 open display  224 searches  224 Gravity Stack  110 Grice, Paul  67 Hachey, B  196–7 Hand, Learned  71 harmonisation  135, 142, 222, 227, 232–4, 239 Harrington, JE  261 Hart, HLA  172, 174 Hayes, Patrick  9

Heckerman, David  10
Heinemann, A  260–1
Helberger, N  246
Hildebrandt, Mireille  155, 158–9
Hofstadter, Douglas  157, 175
HORN clauses  8
Horvitz, Eric  10
Howell, Bronwyn E  268, 271–2
hub-and-spoke scenarios  258, 260, 265
Hugo Legal  138
human error  161
human intervention, principle of guaranteeing  278–9
human rights
  Charter of Fundamental Rights of the EU  228
  Council of Europe Convention No.108 on personal data  280
  dispute resolution  280–1
  ethics  278
  European Convention on Human Rights (ECHR)  280–1
  platform services  222
hyper-autonomous contracting, regulation of  42–50
  blockchain  43–50
  decentralised autonomous organisations (DAOs)  45–7
  distributed ledger technology  44
  smart contracts  43–50
IBA Guidelines on Conflicts of Interest in International Arbitration  276
IBM's crypto anchor verifier  47
Icertis  83, 87
'if, then' statements  8, 65, 156, 269
image of consumers  238–41
imitation game  70, 73, 79
indemnity insurance  147–8, 154
inequality of bargaining power  24–5
information see also transparency
  asymmetries  24–5, 39, 67–8, 282
  confidentiality  146–7, 272–3
  consumer protection  279
  information society services, definition of  150–4
  platform services  223, 230, 244
  pre-contractual information obligations  142–3
Infosoc Directive  116, 129, 132
insurance
  intermediaries, regulation of  91
  mandatory indemnity insurance  147–8, 154
intelligence, definition of  6–7
intelligent assistants  5, 67–8
intelligent games  12
intermediation services
  Digital Content Directive  138, 149, 154
  electronic intermediation services  150–2
  financial intermediaries, regulation of  91–2
  insurance intermediaries, regulation of  91
  lawyers' intermediation service  138
  new intermediary, legal tech software as  88–92
  passive intermediaries  149
  platform services  223
Internet  41, 45
Internet-connected devices  45
Internet of Things (IoT)  49
interpretation  34–6, 39
  agents, drafting and negotiating by  34–5
  ambiguous terms  35
  automation of interpretation  36
  code, contracting in  35, 39, 156–61, 169–70, 174–6
  context  156
  contra proferentem rule  35
  contract generators  34
  deep learning  12
  drafting  34
  intention  35
  machine learning  36
  negotiations  34
  open-textured rules  56–7
  programming languages  35
  purposive interpretation  174
  semantics  157, 159, 167, 171–3
  summarising documents  182
  syntax  162–3, 171
Italy  91, 95, 273
Iubenda  82–3, 86
Japan, Fifth Generation Computer Project in  8–9
JavaScript  163
Jeffrey, Michael  161–2
JUR  271
Juro  102, 104, 162
Kanapala, A  196–7
Kaplow, L  255
Kelsen, Hans  172
Kennedy, Duncan  158, 172
Kira Systems  109, 275, 280
Kleros  271
knowledge
  acquisition bottleneck  9
  attribution  32–3
  Case-Based Reasoning (CBR) paradigm  15
  common-sense knowledge  7
  engineering  4
  Knowledge-based systems (KBS)  274
  LISP Processor  7
  management  87, 101
Lamontanaro, A  261
language see also natural language; programming languages
  chatbots  115–16, 118–22, 124, 126–33
  copyright  131
  drafting  161–3, 177, 180
  models  115–16, 118–22, 124, 126–33, 164
  text summarisation, solutions to  187–8
  translation  156–9, 164–71, 174–6, 180
Latent Semantic Analysis (LSA)  192
law and AI  14–17
LawGeex  104, 272
LawGood  179–80
learning see also machine learning
  algorithms  279
  big data  4–5, 11–14
  damages  213
  deep learning  4–5, 11–14
  Ensemble Learning  10
  experience  4, 7
  lazy learning methods  14
  Q-learning algorithms  251
  reinforcement learning  193, 250–1
  self-learning algorithms  23, 32, 251, 258
  speed of learning  279
  supervised and unsupervised learning  213, 250, 274
legal certainty  223, 231–2, 238, 245, 261
legal wrappers  46
LegalZoom  89, 101
Leibniz, Gottfried  159–60
Lessig, Lawrence  41, 43, 55, 58
Levy, Karen  54–5, 57
lex cryptographia  49–50, 51–2
Lex Machina legal analysis platform  273–4
LexDo.it  101
LexFox  145
Lexon  164, 166–8, 173, 176–7
liability see contractual liability
licensing  145–6, 154
limitation of liability  46, 148–9
LISP Processor  7, 9
Lithuania  138
Llewellyn, Karl  68–9
long-term issues  18, 45, 54–6, 272
Low Code/No Code Builders  110–11
Luminance  86–7, 109, 280
Macaulay, Stewart  52–4
McCarthy, John  5, 7–9
machine learning  19–21
  adversarial machine learning  13
  advertising delivery  226
  agency  68–9
  algorithms  20, 42, 67–9, 192
  analysis through AI  108–9
  BATNAs  22–3
  Bayesian machine learning  10, 273
  chatbots  117–20, 122, 127
  code, contracting in  161
  collusion  253–4
  copyright  117
  decision-making  281
  deep learning  11–14
  dispute resolution  274–5, 277, 279, 281
  drafting  68–9
  ethics  279
  experience  4, 7
  facts, analysis from  274–5
  fairness  17
  feature extraction  12
  formation of contracts  68–9
  GOFAI (good old-fashioned AI) contracts  65–6
  information asymmetries  67–8
  interpretation  36
  knowledge engineering  4
  management of contract lifecycle  100, 108–9
  mistake  32
  natural language interface  68–9
  open-source machine learning models  42
  prejudices and discrimination, systemisation of  279
  security  281
  shared intentionality thesis  59, 62, 67–9
  statistical methods  11, 67
  summarising documents  181–2, 198
  supervised learning techniques  274
  trained model  274
Macneil, Ian  52
management of the contract  87, 99–113
  analysis through AI  100, 108–9
  Contract Lifecycle Management Systems (CLMs)  83, 99–100, 106–8, 112
  custom build solutions  110–11
  definitions  99–100
  do-it-yourself culture  101
  Document Management  111
  drafting phase  100–4, 112
  e-Signing  104–8, 111
  enterprise marketplaces  111
  knowledge management  101
  legal tech professionals  112
  lifecycle, legal tech solutions for the contract  99–113
  Low Code/No Code Builders  110–11
  machine learning  100, 108–9
  marketplaces  101, 111–12
  negotiation  100, 103–4
  outsourcing  101, 109–11
  performance phase  23
  phases of the contract lifecycle  112–13
  point solutions  99, 106
  post-signing management  100, 107–8
  review  100, 103–4
  solutions  101–3, 109–11
  storage  100, 107–8
mapping AI  3–18
  big data and deep learning  11–14
  Case-Based Reasoning (CBR) paradigm  4
  deep learning  4–5, 11–14
  ethics and values  17–18
  large scale and high-performance computational resources  5
  law and AI  14–17
  machine learning  3–4
  mathematics  3–4
  neuroscience  4–5
  seasons of AI  5, 6–11
marketisation and commodification of legal practice  85–7
marketplaces  83, 101, 111–12
Markou, Christopher  160
mass claim enforcement services  136, 138–9, 145, 154
mathematics  3–4
mediation  276
medicine  12
Merchant, K  197
Microsoft
  365
  Azure  10
  MSBN (Microsoft Bayesian Network) tool  10
  Word  100, 103, 161–2
Mik, E  44–5
Miklós-Thal, J  264
Mikolov, Tomas  190
Minsky, Marvin  7
misleading and aggressive commercial practices  231, 246
misrepresentation  28, 58
mistake  31–2
  attribution  31
  common mistake  31–2
  exploitative contracts  28
  formation of contract  29
  human error  161–2, 207, 270
  unilateral mistake  31–2
mixed contracts  138, 151, 154
models
  chatbots  116, 118–21, 123–4, 126–8, 133
  data protection  116
  deep learning  192–3
  EU law  94
  language  115–16, 118–22, 124, 126–33, 164
  new methodologies for model-building and refinement  5
  predictive models  271–5
  Probabilistic Graphical Models  10
  training  116, 126, 274
Modernisation Directive  27, 232, 243
monopolies  22
Morandi bridge, collapse of the  273
Moravec's paradox  7
MSBN (Microsoft Bayesian Network) tool  10
multidisciplinary teams, creation of  281
multilateral autonomous contracts  34
multilingual documents, summarising  181–97
  abstractive summarisation  184, 187, 198
  Advanced Natural Language Programming techniques  182
  algorithms  183–4, 191–3, 197
  automated text summarisation, state of the art of  182, 183–8, 197–8
  content compression  188
  cross-lingual text summarisation  187–8
  deep learning  181, 190, 192–3, 197–8
  deep natural language processing  181–97
  distributed representations of words  190–1
  extractive summarisation  184, 187
  future research directions  197–8
  interpretation  182
  language-dependent solutions to text summarisation  187–8
  legal document summarisation  194–8
  machine learning  181–2, 198
  machine translation  188
  Natural Language Processing  182–3, 197–8
  occurrence-based text representations  190
  summarisation pipeline  183, 188–94
  text post-processing  193–4
  text pre-processing  188–91, 197
  timeline summarisation  185–6
  traditional document summarisation  187
  update summarisation  186–7, 198
MYCIN  8
national laws
  Digital Content Directive  135, 142–9, 154
  information society services, definition of  153–4
  legal profession  135, 144–8, 149, 154
  objective conformity criteria  135, 144–8, 149, 154
  restrictive measures  153, 154
natural language
  Advanced Natural Language Programming techniques  182
  ambiguity  172, 175, 177
  blockchain  48
  boilerplate clauses  69
  chatbots  115, 117–18, 123
  code, contracting in  162–73, 175, 177
  controlled language  173
  deep natural language processing  181–97
  dispute resolution  274
  expert systems  160
  GOFAI (good old-fashioned AI) contracts  66–7
  natural language processing (NLP)  19–21, 115, 117–18, 123, 160, 181–98, 274
  objective intent  73–4
Navas, S  139
negotiations
  agents  34–5
  back-and-forth problem  103
  BATNAs  22–3
  drafting  100, 103–4
  management of contract lifecycle  100, 103–4
  Negotiation Support Systems (NSSs)  22
  simultaneous collaboration  103–4
Netflix  81
Netherlands Authority for Consumers and Markets  263
neuroscience  4–5, 225
Niblett, A  172, 175, 271
Nilsson, Nils  64, 69–70
non-contractual liability and contractual liability, boundary between  212
non-contractual mechanisms in contracting practices  52–5, 56–8
non-discrimination and equality  22, 24, 27–8, 36, 224, 278–81
Norvig, Peter  63
notaries  149–50
nudging  230
OASIS LegalXML eContracts schema  178
occurrence-based text representations  190
offline-online division  151, 154
Ohlhausen, Maureen  252, 264
oligopolies  251, 253–6, 261–2, 265
one-size-fits-all rule  140–3
Online Dispute Resolution (ODR)  83, 270
OneZero  132
open-ended or vague terms, intentional use of  54
OpenLaw  47, 162–3
opportunism  44–5
Opus 2  280
oral assurances  53
O'Shields, R  270
outsourcing  101, 109–11
Pande, Y  197
paper-deals and real-deals, differences between  53, 56–8
parallelism  251, 254, 256–9
party autonomy  24–5, 171, 173, 175–6
past dealings and customs  53
Pearl, Judea  9–10
per se prohibitions  244, 245–6
perfect contracts, generation of  204
Periscope  110
personal assistant devices  5, 69
Petit, N  262, 264
philosophy of contract law  59–61, 68, 72–3, 79–80
platform services and EU  89–90, 221–47
  acquis communautaire  222–4, 227–30, 236, 238–43
  advertising  223–7
  algorithms  90, 222–3, 225, 229–30, 232, 247
  Artificial Intelligence Act, proposal for a  243–4, 245–6
  cognitive biases  225
  commercial surveillance  227
  conferral, principle of  227
  conformity with the contract  237–8, 247
  consumer behaviour  222–3, 224–7, 232
    conditioning  226–7
    modification  222–3
  consumer law  221–47
  consumer markets  223–5
  consumer model  238–41
  consumer protection  222–3, 228–43, 246–7
  consumer relations  223–5
  Consumer Rights Directive  241
  Council of Ministers  227
  data nudging  230
  data protection  228
  design  225, 227
  Digital Content Directive (DCD)  149, 236–8, 241–2, 247
  Digital Services Act, proposal for  243–5, 247
  economic surveillance  246
  emotional contagion  225
  ethics  228–9, 238, 245
  European Commission  227, 243
  European Parliament  227
  fundamental rights  222
  harmonisation  222, 227, 232–3
  image of consumers  238–41
  information issues  223, 230
  instrumental rationality of EU acquis  227–9
  integration  222, 227–8
  intermediaries  223
  legal certainty  223, 231–2, 238, 245
  monetisation strategies  224–5
  need-oriented norms, need for  229–30
  neuroscience  225
  no-code platforms  176
  per se prohibitions  244, 245–6
  personal data  228
  privacy  228
  profiling  247
  proportionality  227
  quality standards  90
  reasonableness, standard of  223, 241–3
  regulation  222–3, 227, 243–5, 247
  reputational feedback systems  90, 96
  role of AI  225–7
  scrolling as addictive  225
  sharing/platform economy  82
  social media  236
  social psychology  225–7, 232, 246
  socio-economic transformations  222
  standard terms  229, 233–5, 246–7
  standards  227, 247
  structural asymmetry between consumers and traders  240–1
  subsidiarity  227
  supply of digital services  236–8
  take-it-or-leave-it basis  230
  transaction costs  90
  Unfair Commercial Practices Directive  229, 230–3, 236, 238–42, 246
  Unfair Terms in Consumer Contracts Directive  233–5, 236, 246–7
  whistle-blowing  247
positive law  61
Posner, Richard  176
Potgieter, Petrus H  268, 271–2
precedent-based reasoning  14–15
pre-contractual duties  21, 26–31, 39, 142–3
predictions  83, 147, 226, 271–5
pricing
  algorithms  22, 258–9
  collusion  250–2, 255–62, 264–6
  damages  28
  discrimination  22, 27–8, 36, 39
  dynamic pricing  28, 68, 250
  efficiencies  259–60
  matching  261
  monitoring  257
  monopolies  22
  oligopolies  255
  personalised prices  22, 27, 207
  reduction  37, 146, 148
  software  250, 265
privacy  82–3, 86
  chatbots  120–1, 122, 127
  e-Privacy Directive  121–2, 127
  ethics  18
  platform services  228
probabilistic computing  9–10, 213
Probabilistic Graphical Models  10
product liability  211–12, 219
profiling  21–2, 24–5, 122, 247
programming languages  155–80
  declarative  163–4, 168, 174
  Extensible Markup Languages (XML)  177–8, 180
  interpretation  35
  procedural  163–4
programs see software
Prolog  8–9, 163–4, 168
proportionality  153, 154, 227
prosecutor's fallacy  15–16
PROSPECTOR  8
psychology
  capacity  60
  consent, defects in  206
  folk psychology  61, 69, 74
  social psychology  225–7, 232, 246
public-private partnerships (PPPs)  128, 133
Python  163–4
Q-learning algorithms  251
Qualified Trust Service Provider (QTSP)  105–6
quality  90, 140–1, 145, 184–5, 278
Quattrocolo, S  15
R1  8
R3 Consortium  57–8
reasonably-expectable-quality test  145
reciprocity  61
recommender systems  245
Reed Smith LLP  110
regulation
  collusion  253, 261–2, 265
  design  94–5
  financial intermediaries  91–2
  fragmentation  227
  hyper-autonomous contracting  42–50
  insurance intermediaries  91
  platform services  222–3, 227, 243–5, 247
  remedies  217–19
  smart contracts  41–58
  Unfair Commercial Practices Directive  231
reinforcement learning  250–1
relational contracts  43, 50–6, 58
remedies for AI  201–20 see also damages; specific performance
  advanced AI  205
  agency  219
  algorithms  204–5, 214–19
  autonomy  202, 203–5, 219–20
  blockchain  269
  choice  146
  civil law systems  202–3, 206
  coding law and ethics in AI algorithms  216–19
  common law systems  202–3, 207
  conformity criteria  146
  consent, defects in  205–7, 219
  data protection  214–15, 218–19
  decision-making  214–16
  Digital Content Directive  135, 140, 143, 145–6, 148–9, 154
  efficiency  203–4, 208
  EU law  214–15, 218–20
  explainability, quest for  214–16
  force majeure and frustration  207–9
  formation of contracts  202
  functional approach to contract law  201–2
  hyper-automation  208, 219–20
  law as a remedial institution  57–8
  perfect contracts, generation of  204
  price reduction  146, 148
  problems created by use of AI  203–5
  product liability  211–12, 219
  regulation  217–19
  repair  37, 148
  rigidification of contracts  207–8
  self-driving contracts  205
  self-execution  203–4, 210
  standards  217–18
  termination of contracts  146, 148
remuneration  150
repair  37, 148
reputational feedback systems  90, 96
research  18, 128–32, 275
reviews  86–7, 100, 103–4
  simultaneous collaboration  103–4
  tracking changes  104
  versioning problem  103–4
revolution of contract through legal technologies  81–96
  analysis of contracts  83, 90, 92
  asymmetries and clauses evaluation  95–6
  changes to contracting  81–5, 88
  classification of activities  83
  coding  92–3
  compliance and regtech through contract software  92–4
  Contract Lifecycle Management (CLM)  83
  data, use of  83–4
  data to manage, conclude and analyse contracts, use of  81
  data-driven AI tools  84
  design regulation  94–5
  drafting contracts  83, 84–7, 90–1, 96
  e-discovery  83, 87
  ERP software  87
  independence of legal tech services  91–2
  industrial approach  86–7
  intermediary, legal tech software as new  88–92
  lawyers as legal tech integrators  85–8
  legal as a product  85–8
  liability  91
  litigation  86–7
  management of the business  87
  marketisation and commodification of legal practice  85–7
  modelling  94
  performance of work  87
  platforms  86, 89–90
  privacy policies  82–3, 86
  professionalism, impact on  86–7
  review and monitoring  86–7
  rules-driven AI  84
  services, digitisation of  82
  servitisation or productification/mercification of professional performance  81, 85
  smart contracts  84–5, 96
  software  92–6
  supervisory bodies  91
  transparency  91–2
  understanding and use of contracts, enhancing  86
Reynen Court  111
RightNow  25
rigidification of contracts  207–8
Ripple  57–8
robotics  204, 279
Rocket Lawyer  89, 101
Ross Intelligence  209, 275
ROUGE metric  185
rule of law  51–2
Russell, Stuart  63
safe harbour privileges  149
Sale of Goods Directive  37
Samuel, Geoffrey  156
Santosuosso, A  269
SAP  87
Schrepel, T  264, 266
Schwalbe, U  264
scrolling as addictive  225
Searle, John  6
seasons of AI  5, 6–11
  spring  5, 7–8, 9–10
  statistical methods  10–11
  winter  5, 7, 9
security
  blockchain  44, 47–8
  data security technologies  83
  machine learning  281
  standards  142
self-driving contracts  172, 205, 268–72
self-driving corporations  276
self-driving vehicles  41–2
self-execution  43–4, 203–4, 210, 270
self-help  25
self-learning algorithms  23, 32, 251, 258
self-reliance  228–9
semantics  157, 159, 167, 171–3, 192
sentiment analysis  274
shared intentionality thesis  59–80
  agency  62–9, 75, 79–80
  ascription of intentionality to AI  61
  attribution of intentionality  60–1, 69–75
  belief-desire attribution  74
  capacity to contract  61, 66, 77–8
  cognitive science  60–1
  collective intentionality  77–8
  consent  60, 78–9
  core of contractual obligation, as  75–9
  cultural social practices  77–8
  definition  60
  evolutionary anthropology  77
  folk psychology  61, 69, 74
  formation of contracts  60–1, 66, 70, 75, 79
  future  62, 64, 69
  future directed intention  75–6
  general theory of contract  78–9
  GOFAI (good old-fashioned AI) contracts  65–7
  'I' intentionality  77
  inductive reasoning  65
  intention  59–60
  joint intentionality  77
  machine learning  59, 62, 67–9
  neural networks  59
  objective intent  60, 69–75, 79
  philosophy of contract law  59–61, 68, 72–3, 79–80
  practical reasoning  75–6
  promise  60, 78
  psychological capacity  60
  reciprocity  61
  strong AI  79
  subjectivity  79
  Turing test intentionality in common law  60, 70–2, 79
  'we' intentionality  77
  weak AI  79
Siciliani, P  261
signatures see e-Signing
Simon, Herbert  7
Singapore  30–2
SkipGram model  190
smart contracts  41–58, 84–5, 96
  algorithms  66, 270
  autonomous contracting, recontextualisation of  55–8
  blockchain  20, 41–52, 56–8, 84, 202, 266, 268–70, 272
  change of circumstances, inability to deal with  45
  code  155–80
    code is law, idea of  41–2
    creation in code  42
    rule of code  43, 52–8
    vulnerabilities in the code  51
  decentralised autonomous organisations (DAOs)  42–3, 45–7, 48
  definition  43–4
  Digital Content Directive  137
  hyper-autonomous contracting, regulation of  42–50, 52, 56–8
  non-contractual mechanisms in contracting practices  52–5, 56–8
  opportunism  44–5, 270
  recontextualisation of autonomous contracting  55–8
  regulation  41–58
  relational contract scholarship  43, 50–6, 58
  rule of code  43, 52–8
  self-execution  43–4
  smarter, making smart contracts  47–50
  social context  56, 58
  social resources, as  57
  specific performance  210
  super-smart contracts  43–50
  symbolic AI  65–6
  trust  45, 50–2, 55
Smith, HE  177
social and referral networks  83
social media  236
social psychology  225–7, 232, 246
social resources, contracts as  54
social settings of real-world contracting  54
software
  agents, using software  34
  asymmetries  95–6
  automation software  96
  chatbots  116, 117
  compliance and regtech through contract software  92–4
  Computer Programs Directive  116
  custom build solutions  110
  Digital Content Directive  138
  pricing  250, 265
  programming languages  155–80
    declarative  163–4, 168, 174
    procedural  163–4
  reputational feedback systems  96
  source code  46
spam filters  10
specific performance  203–4, 209–11
  algorithms  210–11
  automated systems  210–11
  blockchain  210–11
  civil law systems  203, 209
  contract generators  209
  damages  209
  good faith  211
  hyper-automation  210–11
  self-execution  210
  smart contracts  210
SpeedLegal  107
Spotify  81
standard terms see also boilerplate clauses
  battle of forms  34
  Digital Content Directive  148–9
  efficiency  233
  exclusion of liability  148–9
  fairness  36, 229
  incorporation  33–4
  limitation of liability  148–9
  multilateral autonomous contracts  34
  platform services  229, 233–5, 246–7
  reasonable notice test  34
  software agents, using  34
  Unfair Contract Terms Directive  36, 233–5, 246–7
  user of standard terms  33–4
start-ups  47–8
statistical methods  10–11, 67
storage  100, 107–8
  chatbots  121–2
  cloud systems  107
  Contract Lifecycle Management Systems (CLMs)  107–8
  metadata  107
  static archiving  107
strict liability  37
strong AI  6, 8, 79
Stucke, A  252, 260
subliminal techniques  244
subscriptions  142
subsidiarity  227
summaries see multilingual documents, summarising
Support Vector Machines (SVM)  10
Surden, Harry  14, 17
surveillance  227, 246
Susskind, R  85, 88–9
symbolism  13, 65–7, 159–60
syntax  162–3, 171
Szabo, Nick  20, 44, 268
tax disputes  273
templates
  Digital Content Directive  137
  drafting  100, 101, 103, 162–3, 178
  E-Commerce Directive  151–2
  selection rules  101
termination of contracts  37, 146, 148
text and data mining (TDM) exception  116, 127, 129–30, 132–3
text editors  100
Thomas, S  261
Thomson Reuters  102
ThoughtRiver  104
timeline summarisation  185–6
Toga System  273
Tomasello, Michael  77–8
training
  analysis of AI  109
  chatbots  116, 118–21, 123–4, 126–8, 133
  data protection  116, 126
  interactive training  118
  models  116, 118–21, 123–4, 126–8, 133, 274
  static  250–1
Tran, VD  197
translation  156–9, 164–71, 174–6, 180
  algorithms  156
  argument bites  158
  code  156–9, 164–71, 174–6, 180
  context  157–8
  ELIZA effect  157
  'if, then' statements  156
  legal formalism  159
  machine translation  188
  nesting  158
  semantics  157
  understanding, requirement for  157
transparency  91–2
  blockchain  42
  collusion  252–4, 256, 262, 265
  dispute resolution  281–2
  ethics  278–9
  failure transparency  18
  Framework for an Algorithmic Reliability and Transparency Authority (European Parliament)  279
  judicial transparency  18
  platform services  244–5
  Unfair Commercial Practices Directive  239
  Unfair Terms in Consumer Contracts Directive  242
trust  45, 48, 50–3, 55, 57
Tucker, C  264, 266
Turing test  6, 60, 70–2, 79
Uber  82, 89, 151
UN Convention on the Use of Electronic Communications in International Contracts 2005  30
UNCITRAL Arbitration Rules  277
UNCITRAL Model Law on International Commercial Arbitration  276
understanding and use of contracts, enhancing  86
undue influence  28
Unfair Commercial Practices Directive
  average consumer  238–41
  blacklist  232
  business-to-consumer (B2C) commercial practices  231, 246
  codes of conduct  242–3
  harmonisation  232–3, 239
  misleading and aggressive commercial practices  231, 246
  Modernisation Directive  232, 243
  personalisation of the average consumer  240–1
  platform services  229, 230–3, 236, 238–42, 246
  pre-contractual duties  26–7
  professional diligence, definition of  242
  regulation  231
  structural asymmetry between consumers and traders  240–1
  transparency  239
  Unfair Contract Terms Directive  242
  vulnerable consumers  239–41
Unfair Contract Terms Directive
  algorithms  234
  automated advertising delivery  235
  boilerplate clauses  234
  business-to-consumer (B2C) contracts  233–4
  core terms  234
  damages  148–9
  Digital Content Directive  148–9
  disclosure  234–5
  exclusion or limitation of liability  148–9
  free and informed consent  234
  good faith  234
  harmonisation  233–4
  indicative and non-exhaustive list of terms  148–9
  legal profession  149
  need-oriented instruments  247
  platform services  233–5, 236, 246–7
  significant imbalances  234
  standard terms  36, 233–5, 246–7
  transparency  234–5, 246–7
  Unfair Commercial Practices Directive  242
United Kingdom
  collusion  252, 257–8, 263, 266
  Competition and Markets Authority (CMA)  252, 257–8, 263
United States  96, 272–3
  collusion  254, 257, 266
  dispute resolution  272–3
  Federal Trade Commission (FTC)  132
  Restatement (Second) of Contracts  71–2
  Sherman Act  254
updating  142, 143, 186–7, 198
user control principle  278–9
vague or open-ended terms, intentional use of  54
values see ethics and values
Van Cleynenbreugel, P  264
Vestager, Margrethe  253, 265
vicarious liability  39
von der Leyen, Ursula  243
vulnerable consumers  239–41
watchdogs  25
weak AI  6–7, 79
Wendell Holmes Jr, Oliver  71, 159
Werbach, K  45, 50–1
Wheeler, Sally  56
whistle-blowing  247, 261
Willett, Chris  228–30
Windows Printer Trouble-shooters  10
Wittgenstein, Ludwig  173
Word2Vec  190–1
Wright, A  49–52, 270
Wu, H  261
XML (Extensible Markup Languages)  177–8, 180
Y Combinator Series Term Sheet Template  178–9
Zheng, G  261