Mark A Geistfeld, Ernst Karner, Bernhard A Koch, Christiane Wendehorst (eds)
Civil Liability for Artificial Intelligence and Software

Tort and Insurance Law

Edited by the European Centre of Tort and Insurance Law together with the Institute for European Tort Law of the Austrian Academy of Sciences and the University of Graz

Volume 37

Civil Liability for Artificial Intelligence and Software

Mark A Geistfeld, Ernst Karner, Bernhard A Koch, Christiane Wendehorst (eds)

European Centre of Tort and Insurance Law
Reichsratsstraße 17/2
1010 Vienna
Austria
Tel.: +43 1 4277 29650
Fax: +43 1 4277 29670
E-Mail: [email protected]

Institute for European Tort Law of the Austrian Academy of Sciences and the University of Graz
Reichsratsstraße 17/2
1010 Vienna
Austria
Tel.: +43 1 4277 29651
Fax: +43 1 4277 29670
E-Mail: [email protected]

The studies contained in this book were produced under service contracts with the European Commission. The content of these studies represents the views of the authors and is their sole responsibility; it can in no way be taken to reflect the views of the European Commission or any other body of the European Union. The European Commission does not guarantee the accuracy of the data included in these studies, nor does it accept responsibility for any use thereof made by third parties. The studies and the Expert Group Report are being reproduced under a Creative Commons Attribution 4.0 (CC-BY 4.0) licence. This means that reuse is allowed provided that appropriate credit is given and changes are indicated.

ISBN 978-3-11-077534-1
e-ISBN (PDF) 978-3-11-077540-2
e-ISBN (EPUB) 978-3-11-077545-7
Library of Congress Control Number: 2022944925

Bibliographic information published by the Deutsche Nationalbibliothek
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2023 Walter de Gruyter GmbH, Berlin/Boston
Typesetting: jürgen ullrich typosatz, Nördlingen
Printing and binding: CPI books GmbH, Leck
www.degruyter.com

Preface

This volume publishes in print two studies on liability for new technologies, both commissioned by the European Commission. Both manuscripts were completed in November 2020. An Annex adds the Final Report of the New Technologies Formation of the European Commission's Expert Group on Liability and New Technologies, which is quoted repeatedly throughout the two studies. Ernst Karner, Bernhard A Koch and Christiane Wendehorst were members of this Formation. The studies as well as the Report of the Expert Group have been published online by the European Commission.* They are reproduced here with only very minor editorial adjustments.

Donna Stockenhuber took care of proofreading the entire text as perfectly as always. Kathrin Karner-Strobach checked the citations and style for uniformity, and Katarzyna Ludwichowska-Redo coordinated the whole project. We would like to express our sincere gratitude to the three of them for their support in bringing about this volume.

Mark A Geistfeld
Ernst Karner
Bernhard A Koch
Christiane Wendehorst

* The first study can be downloaded at ; the second study is available at and the Expert Group report can be downloaded at .

https://doi.org/10.1515/9783110775402-202

Table of Contents

Mark A Geistfeld, Ernst Karner and Bernhard A Koch
Comparative Law Study on Civil Liability for Artificial Intelligence 1

Christiane Wendehorst and Yannic Duller
Safety- and Liability-Related Aspects of Software 185

Annex
Expert Group on Liability and New Technologies – New Technologies Formation
Liability for Artificial Intelligence and Other Emerging Digital Technologies 321

Mark A Geistfeld, Ernst Karner and Bernhard A Koch

Comparative Law Study on Civil Liability for Artificial Intelligence

https://doi.org/10.1515/9783110775402-001

Table of Contents

Introduction 7
Executive Summary 9
Civil Liability for Artificial Intelligence. A Comparative Overview of Current Tort Laws in Europe 21
A. Introduction 21
B. Current Tort Law Regimes and Artificial Intelligence 23
  I. Overview 23
  II. Causation 26
    1. Procedural starting points 28
      (a) Standard of proof 28
      (b) Procedural alleviations of the burden of proof 30
      (c) Administrative law measures assisting those charged with the burden of proof 33
    2. Basis of liability 35
    3. Burden of proving causation 37
    4. Causal uncertainty 41
  III. Fault liability 44
    1. Overview 44
    2. The varieties of fault liability in Europe 45
      (a) Differences regarding the recognition of wrongfulness as a separate element 45
      (b) Differences regarding the benchmark for assessing the harmful conduct 46
      (c) Differences regarding the applicable fault standard 47
      (d) Differences regarding the burden of proving fault 48
        (1) Jurisdictions where the burden of proving fault primarily lies on the claimant 48
        (2) Jurisdictions where the burden of proving fault is generally shifted upon the defendant 51
        (3) Jurisdictions where the burden of proving fault is placed ad hoc 51
      (e) Differences regarding other deviations from the standard rules of fault liability 51
      (f) Differences regarding the impact of fault on the extent of compensation 52
      (g) Interim result 53
    3. The application of fault-based liability to AI systems 53
      (a) Conduct requirements for the deployment of AI systems 54
      (b) The burden of proving misconduct triggering liability in the case of autonomous systems 56
      (c) The significance of fault-based liability for AI systems 59
  IV. Vicarious liability 61
    1. Overview 61
    2. The varieties of vicarious liability in Europe 62
      (a) Differences regarding the classification of vicarious liability 62
      (b) Differences regarding the trigger of vicarious liability 62
      (c) Differences regarding the implications of the differing scope of liability 64
      (d) Differences regarding the range of persons for whose conduct someone else may be held liable 65
      (e) Differences regarding the context of the harmful conduct 66
      (f) Differences regarding the availability of direct claims against the auxiliary 67
      (g) Differences regarding the extent of the liability of legal persons for their representatives 68
    3. Analogous application of vicarious liability to AI systems? 69
  V. Strict (risk-based) liability 69
    1. Overview 69
    2. The varieties of strict liability in Europe 70
      (a) Differences regarding the liable person 70
      (b) Differences regarding the range and scope of strict liabilities 71
        (1) Singular instances of strict liability 71
        (2) General clause of strict liability 72
        (3) Strict liability for things 73
      (c) Differences regarding the possibility to extend strict liability by analogy 74
      (d) Differences regarding the availability of defences 75
      (e) Differences regarding the extent of compensation 77
      (f) Differences regarding the possibility to file concurrent fault-based claims 77
C. Use Cases 77
  I. Autonomous vehicles 78
    1. The system of traffic liability 78
      (a) Fault-based and risk-based liability 78
      (b) Liability insurance 80
    2. Consequences of automatisation for road traffic liability 81
    3. Differences between current motor vehicle liability regimes 84
      (a) Subject of liability 84
      (b) Risks covered 86
      (c) Protection of the driver/user? 89
      (d) Exclusion of liability in case of a technical defect? 90
      (e) Relevance of contributory negligence 92
      (f) Joyriders and hacking 93
    4. Excursus: A specific insurance solution for autonomous motor vehicles 93
    5. Findings of this use case 95
  II. Autonomous lawn mowers and harvesters 96
    1. Autonomous lawnmowers 96
      (a) Fault liability 96
      (b) Strict liability 98
      (c) Summary 100
    2. Autonomous combine harvesters 100
      (a) Fault liability 100
      (b) Strict motor vehicle liability 101
      (c) Strict liability for things 107
      (d) General risk-based liability 107
    3. Findings of this use case 108
  III. Autonomous drones 109
    1. Preliminary remarks 109
    2. Three main damage scenarios 110
    3. Current legal landscape in Europe 112
      (a) International and EU law 112
        (1) Rome Convention 1952 112
        (2) Regulation (EC) No 785/2004 113
        (3) The new EU drones regime 114
      (b) Domestic law 115
        (1) Specific (strict) liability for drones 115
        (2) Strict liability for aircraft also potentially applicable to some drones 115
        (3) Strict liability for means of transport also potentially applicable to some drones 120
        (4) General strict liability also potentially applicable to some drones 121
        (5) No specific regime applicable at all 123
    4. Findings of this use case 124
    5. Case hypotheticals 124
D. Conclusions 127
Regulation of Artificial Intelligence in General and Autonomous Vehicles in Particular in the US 133
A. Status of Artificial Intelligence Regulation in the United States 133
  1. Scope of this Report 133
  2. Statutes as a Source of Liability 134
  3. Express Federal Regulation of Artificial Intelligence 135
  4. Express State Regulation of Artificial Intelligence 135
  5. Application of Existing Statutes to Artificial Intelligence 140
  6. Application of State Tort Law to Artificial Intelligence 145
  7. Conclusion 147
B. Status of Autonomous Vehicle Regulation in the United States 147
  1. Current Regulatory Framework Governing Liability and Insurance Issues for Motor Vehicle Crashes 147
  2. Federal Regulation of Autonomous Vehicles 149
  3. State Legislation Expressly Addressing Autonomous Vehicles 150
    (a) Operating Autonomous Vehicles Without a Human Operator 151
    (b) Operating Autonomous Vehicles Only with a Human Operator 151
    (c) Operating Autonomous Vehicle on Public Roads 151
    (d) Determining Liability in the Event of a Crash 152
    (e) Regulating Duty in the Event of a Crash 153
    (f) Requiring Insurance 153
  4. Application of State Tort Law to Autonomous Vehicles 153
  5. Conclusion 155
C. Annexes 156
Annex I: State Regulations of Autonomous Vehicles 156
  1. State Legislation Allowing Autonomous Vehicles to Operate without a Human Operator in the Vehicle 156
  2. State Legislation Allowing Automated Driving only with a Human Operator 158
  3. State Legislation Allowing Autonomous Vehicles on Public Roads 159
  4. State Legislation Addressing Liability in the Event of a Crash 160
  5. State Legislation Regulating Duty in the Event of a Crash 161
  6. State Legislation Requiring Insurance 164
  7. State Legislation Requiring Minimal Risk Condition 166
  8. State Legislation Granting Special Privileges 168
  9. State Legislation Requiring Further Research 170
Annex II: State Regulation that Indirectly References AI 172
  1. Appropriating Money/Expanding Research 172
  2. Investigative Commissions 175
  3. Outlining Administrative Goals 180
  4. Other State Efforts 180
Annex III: Federal Regulation that Indirectly References AI 180
  1. Appropriating Money/Expanding Research 180
  2. Outlining Administrative Goals 182
  3. Regulatory Interest 183
  4. U.S. Trade Agreements 184

Introduction

This study was commissioned from the authors by the Directorate-General for Justice and Consumers of the European Commission in spring 2020 and completed in November of that same year. The task was to provide an overview of the existing tort law regimes within the EU that would at present apply to liability for artificial intelligence (AI), while leaving aside the national implementations of the Product Liability Directive. Instead of a comprehensive presentation of the laws of all Member States, only key aspects were to be highlighted, illustrated by selected jurisdictions. Three use cases were agreed upon as examples. Furthermore, having regard to the pre-eminent position of the US in the field of AI technologies, information on the US legal regimes governing AI was added, again limited to key aspects and selected states. The latter task was tackled by Mark Geistfeld in the second part of this study, whereas the first part, comparing tort laws in Europe, was drafted by Ernst Karner and Bernhard A Koch.


Executive Summary

Artificial Intelligence (AI) has the potential to increasingly transform our society, fostering substantial innovation in various fields. Implementations of this technology, however, as varied as they may be, may also contribute to causing damage, the likelihood and extent of which depend upon the type of application and the degree of its exposure to third-party interests.

Current Tort Laws in Europe





It is doubtful whether the liability regimes currently in place in all EU Member States provide for an adequate distribution of all such risks, and it is to be feared that at least some victims will not be indemnified at all or will at least remain undercompensated if harmed by the operation of AI technology, even though the principles underlying the tort law regimes in those jurisdictions would speak in favour of remedying their harm. More importantly, the outcome of such cases in the Member States will often not be the same due to peculiar features of these legal systems that may play a decisive role especially in cases involving AI.

While claims for compensation invariably require that the victim incurred some harm, the range of compensable losses and the recognised heads of damage will not be different in AI cases than in any other tort scenario. Without going into detail in this study, however, it is important to bear in mind that there are important differences throughout Europe when it comes to recognising which damage triggers tort claims in the first place (specifically evident in the case of pure economic loss). Furthermore, jurisdictions differ with regard to which consequences of an initial harm will be indemnified at all. The range and extent of remedies available are equally divergent, in particular (though not only) with regard to the extent of compensation for immaterial harm.

It is equally universally acknowledged throughout Europe that a duty to compensate the loss of another requires that its cause at least possibly lay within the sphere of the addressee of the claim. However, identifying such a cause and convincing the court of its impact on the turn of events is particularly challenging if it is an AI system that is suspected to have at least contributed to damaging the victim. This is due to the very nature of AI systems and their particular features, such as complexity, opacity, limited predictability, and openness. Just think of a collision of autonomous vehicles, which may have been triggered by some flaws of the hardware, as in an accident involving traditional cars, but it may also have been caused instead by the internal software of either vehicle, which in turn may or may not have been altered by some over-the-air update that did not necessarily originate from the car manufacturer. The vehicle's 'decision' ultimately leading to the collision may have been the result of flawed data (collected either by the vehicle itself or by some external provider) or of some errors in processing that information. Even the question of what constitutes an 'error' of the AI will be difficult to answer.

The success of proving something in court depends upon the applicable standard of proof in place in the respective jurisdiction, ie the degree of conviction that the judges must have in order to be satisfied that the burden of proof has been met. There are significant differences throughout European jurisdictions with respect to this procedural threshold. Some are already satisfied with a mere preponderance of the evidence, with the result that the person charged with proving something succeeds if it is more likely than not that her allegations are true. In other jurisdictions, the degree to which the fact finder must be persuaded is much higher, making it correspondingly much more difficult to prove something. This difference directly impacts upon the outcome of a case: if, for example, the claimant has to prove that an AI system caused her loss, and the evidence only supports a 51 % likelihood that this was indeed the case, she will win the case in full (subject to the other requirements of liability) in the first group of jurisdictions and lose it completely in the remainder, collecting full compensation in the former countries and getting nothing at all in the others. In the latter jurisdictions, courts may at some stage consider lowering the threshold for victims of AI. Any such alleviation of the position of only a select group of claimants necessarily triggers the problem of equal treatment of tort law victims as a whole. It may be justified by the particular features of AI as a potential source of harm, but whether courts throughout the EU will acknowledge that is as yet unpredictable.

Those charged with the burden of proof can sometimes benefit from certain alleviations along the way, for example if the courts are satisfied with prima facie evidence. It is hard to foresee whether and to what extent this will also be available in cases of harm caused by AI systems. At least initially, it would be difficult to apply, considering that it requires as a starting point a firmly established body of experience about a typical sequence of events, which will at first be lacking for novel technologies.

Another approach helping claimants is to shift the burden of proof to the opposing party. Apart from express legislative presumptions (which so far seem to be lacking in AI cases), some courts charge the defendant rather than the claimant with proving certain facts if it is the former who may be in control of crucial evidence needed to establish (or disprove) such facts. Failure to submit such evidence may then reverse the burden of proving whatever it could have corroborated. In the AI context, this may apply, for example, to log files or the like produced or at least stored within the sphere of the defendant. To what extent this would be truly beneficial for victims of AI systems not only depends upon the existence of logging devices in the first place, but also on what kind of information is to be expected from such files that could contribute to clarifying the cause of the damage.

How difficult it will be for those injured by an AI system to prove that it contributed to causing their harm will also depend on which tort theory they can rely on in order to pursue their claim. If the only available claim is based on fault liability, triggered by some blameworthy conduct attributable to the defendant, claimants need to show that their damage was indeed caused by such behaviour. If the applicable law provides for strict liability instead, holding someone liable for a risk within her sphere, it is not any specific flawed conduct that claimants have to show, but merely that this risk attributable to the defendant materialised, without necessarily having to prove how. In the case of harm possibly caused by the operation of an AI system, at least that still needs to be established, which may not be self-evident in all cases, particularly in light of the connectivity and openness of such technology.

In the case of fault liability, the greatest challenge for a victim whose damage was caused involving an AI system is to identify human conduct which significantly impacted upon the course of events at all. Unlike other technology, where a person will be actively involved in operating it and whose behaviour will influence the use of the technology as such, AI is often used to replace that human factor in the operation entirely or at least to a large extent, the best example being self-driving cars, as their colloquial name already indicates. Therefore, at least in the immediate events preceding the infliction of harm, there will often be no human actor involved at all who played a decisive role, unless the user or operator failed to update or monitor the system as required (a duty the extent of which is itself as yet hard to predict). The victim will hence often have to go further back in the timeline in order to identify conduct that may have at least contributed to the causal chain of events, and then find evidence supporting the conclusion that such conduct was faulty. Identifying harmful conduct will be all the more difficult the more independently the AI system is designed to behave, or – figuratively speaking – the blacker the box is. After all, while human conduct is literally visible and can be witnessed, identifying the processes within an AI system and persuading the court thereof seems much more challenging, even if the defendant's conduct relating to the AI system can be traced back with convincing evidence.

While fault liability is a basis for tortious liability in all European legal systems, there are important differences between these jurisdictions when it comes to the details, which will impact significantly upon the outcome of any future case involving AI systems. This includes, in particular, the question of who has to prove fault in practice. Some jurisdictions apply the default rule, requiring the claimant to prove this prerequisite of liability (though subject to exceptions – where further differences lie); others generally charge the defendant with proving that she was not at fault. These differences come in addition to the aforementioned variations regarding the standard of proof, which apply correspondingly to proving fault. To the extent a legislator should decide to impose concrete rules of conduct, by introducing, for example, monitoring or updating obligations, any disregard of such duties may be treated distinctly by the courts, as some jurisdictions alleviate or shift the burden of proving fault if the defendant or someone in the latter's sphere violated such express rules of conduct designed inter alia to prevent harm. To what extent we will see such regulation in this field, and whether it will happen on a purely domestic or also on the EU level, remains to be seen.

Apart from differences regarding the burden of proving fault, jurisdictions in the EU also differ with respect to the applicable standard of care and the way it is determined. This is decisive inasmuch as it defines the threshold for determining whether causal conduct is sufficiently blameworthy to justify holding the defendant liable. Often, a comparison is drawn to a 'reasonable' or similarly ideal person, but in the absence of experience with novel case scenarios, it is for the courts to define what should and could have been done under the circumstances.

The reduced factual relevance of human conduct to be expected in future tort cases involving AI systems, at least in the immediate environment of their operations, will also impact upon the significance of vicarious liability, ie the attribution of someone else's conduct to the liable person. This variant of liability is still being discussed in the context of liability for AI as a potential basis for analogy, based on the (convincing) argument that if someone is to be held liable for the conduct of another to whom the former had outsourced tasks of her own, the same should apply correspondingly if that auxiliary is not a human, but an AI system instead. However, this should not distract us from the fact that vicarious liability is a rather diverse concept in a European comparison. Some jurisdictions are very restrictive in tort law and only in rather exceptional cases attribute the conduct of an auxiliary to her principal, whereas other countries are much more generous in that respect. Further differences are evident with respect to the expected relationship between the auxiliary and the principal (such as employment), or the actual context in which harm was caused by the former.

The aforementioned problems with respect to liability triggered by human conduct do not arise if the reason for holding someone liable is not some flawed behaviour, but rather a risk within the defendant's sphere whose source is in her control and from which she benefits. While all European jurisdictions have at least some instances of strict liability in addition to and alongside fault liability, there are tremendous differences both with respect to the range of risks covered by such no-fault regimes and with respect to the details of how such liabilities are designed. This is best illustrated by the example of traffic accidents: most, but not all, EU jurisdictions provide for strict liability for the motor vehicles involved. Where they do, their liability regimes nevertheless differ substantially, for example with respect to which victims are covered by them and which can only resort to traditional fault liability instead. Some countries put a cap on such liability, others do not. Some allow alternative paths towards compensation at the victim's choosing, others do not. Some allow a wider range of defences, others are extremely restrictive. Yet others, as mentioned, have no strict liability for motor vehicles at all and apply only fault liability instead.

Also when it comes to drones, the current legal landscape is quite diverse: while many jurisdictions have at least some strict liability regime in place for ground damage, for example, its scope and details vary considerably throughout Europe. Such differences are evident already with regard to what kind of drones are subject to such a special liability regime, but also with respect to the range of defences available to the defendant, the degree to which contributory conduct by the claimant is considered, and whether and to what extent the amount of possible compensation is capped. In those jurisdictions where all or at least some drones are not covered by strict liability, fault liability applies instead, subject to the general variations already mentioned above.

The catalogue of risks subjected to strict liability generally differs from country to country. All address some more or less specific risks (eg for motor vehicles of a certain kind or for means of transport more generally), which may also apply if such sources of risk include AI technology, such as self-driving cars, to the extent they match the definitions foreseen in such legislation. Some jurisdictions have introduced broader instances of strict liability, either instead of or in addition to strict liabilities limited to specific risk sources.







Broader strict liability regimes may apply, for example, to dangerous substances, things, or activities without further specifications. While those broader variations may also extend to novel technologies not specifically addressed in other tort legislation, whether or not courts will be prepared to do so in the case of an AI system is yet unclear, as they would first have to qualify it as ‘dangerous’ within the meaning of such general clauses. After all, due to the absence of experience with such systems at least at first, and in light of the expectations towards AI systems promoted to be safer than traditional technology, it is difficult to foresee whether and where courts will be prepared to classify them as required by such general clauses. One must also bear in mind, though, that not all AI systems will be equally likely to cause frequent and/or severe harm, so the justification for subjecting them to strict liability will already for that reason differ depending upon the technology. Some jurisdictions hold persons liable for ‘things’ in their control. Whether or not this requires such ‘things’ to be defective, or whether this liability is designed as a true strict liability or just as a regime of – rebuttably or irrebuttably – presumed fault differs from country to country. This will also impact upon the likelihood of victims of an AI system in such countries to successfully pursue claims for compensation on that basis. Summarising the current situation in the EU, one can observe that, while there are at least some strict liabilities in place in all European jurisdictions, it is clear that at present many AI systems do not fall under such risk-based regimes, leaving victims with the sole option of pursuing their claims for compensation via fault liability. However, the latter is triggered by human (mis-) conduct and thereby not only requires victims to identify such behaviour as the cause of their harm, but also to convince the court that it was blameworthy. Due to the nature of AI systems, at least their immediate operation and therefore the activities directly preceding the damaging event will typically not be marked by human control. Connecting the last input of someone within the defendant’s sphere with the ultimate damage will therefore be more challenging than in traditional cases of fault liability. The procedural and substantive hurdles along the way of proving causation, coupled with the difficulties of identifying the proper yardstick to assess the human conduct complained of as faulty, may make it very hard for victims of an AI system to obtain compensation in tort law as it stands.


Regulation of Artificial Intelligence in the US







In the US, a patchwork of federal and state laws expressly addresses various issues involving artificial intelligence (AI). With the notable exception of autonomous vehicles, this developing body of law has only limited relevance for the liability and insurance questions that will arise when AI causes bodily injury or property damage.

No federal statute of this type expressly governs AI. Nevertheless, some existing statutory schemes can be applied to the safety performance of AI. For example, the US Food and Drug Administration has already used its existing statutory authority to regulate the use of AI for some types of medical-support software and medical devices. In January 2021, the agency announced a five-part Action Plan moving towards implementation of a regulatory framework, first proposed in 2019, for treating AI/machine-learning software as a 'medical device' that the agency can regulate under its existing statutory authority.

In addition to bodily injury or property damage, AI can cause other types of harms governed by existing laws. In some cases, for example, plaintiffs have alleged that the defendant trained the AI with datasets that violate federal or state privacy laws. Other cases involve algorithms that allegedly violate antidiscrimination laws or those governing fair trade practices. Yet others involve allegations that AI inaccurately characterised citizens as filing fraudulent claims for benefits. Cases like these further illustrate the extent to which existing laws can be applied to a variety of harms caused by AI. A comprehensive analysis of these forms of liability is outside the scope of this study, which instead focuses on the safety performance of AI and the associated liability and insurance issues.

In addition to the medical issues described above, federal legislators and regulators have paid the most attention to autonomous vehicles. In Congress, there has been considerable bipartisan support for federal legislation that would create a comprehensive framework for regulating autonomous vehicles. This legislation, however, has stalled because of the pandemic and the 2020 presidential election. Meanwhile, federal regulators have only provided soft guidance to the industry – recommendations for best practices involving the development of this emergent technology.

In the absence of federal law that resolves an issue of liability or insurance, state law will govern these issues. As of June 2020, seven states have adopted statutes that expressly regulate the use of AI. Each statute is narrowly focused, ranging from the use of AI in interview settings to real-estate appraisals. Two states have adopted statutes permitting optometrists to use AI in eye assessments, although optometrists must continue to perform their duties as if they had performed the assessment in person. They must also maintain liability insurance in an amount adequate to cover claims made by individuals examined, diagnosed, or treated with the AI. For liability purposes, the statutes hold optometrists to the same standard of care as those in traditional in-person clinical settings.

One state, Mississippi, has incorporated AI into its regulations of the medical field. The statute holds licensed medical practitioners to the same standard of care as would otherwise be in place if they had not used an innovative medical treatment. The Mississippi statute illustrates some of the legal uncertainties that will arise when AI causes bodily injury. For liability purposes, physicians are held to the standard of care defined by customary practices for the procedure in question. Adherence to a customary practice does not necessarily guarantee positive health outcomes. Suppose that instead of following such a customary procedure, a physician used AI and did not obtain a positive health outcome. The adverse health outcome or injury would not conclusively establish malpractice liability – the same outcome could have occurred if the physician had followed customary procedures. In cases like this, what does it mean to hold physicians to the same standard of care as if they had not used AI?

The state governments' primary emphasis with respect to the safety performance of AI has involved autonomous vehicles. As of June 1, 2020, thirty-five states and the District of Columbia have enacted statutes expressly regulating autonomous vehicles. Several state statutes require autonomous vehicle testing entities to procure insurance for their vehicles. These provisions differ both in the type of required insurance (liability or indemnity) and in the amount (ranging from $ 1.5 million to $ 5 million).

Three states have adopted statutes that directly address liability in the event of a crash, and only one of them (Tennessee) has comprehensively addressed the liability and insurance questions. Under this statute, liability in the event that an autonomous vehicle crashes is determined 'by product liability law, common law, or other applicable federal or state law' – the same laws that currently determine liability for the crash of a conventional vehicle. Similarly, a Louisiana statute makes clear that its provisions permitting the use of autonomous vehicles on public roads do not 'repeal, modify, or pre-empt any liability that may be incurred pursuant to existing law applicable to a vehicle owner, operator, manufacturer, component part supplier, or retailer.'

The Tennessee statute deems an autonomous vehicle that is 'fully engaged, operated reasonably and in compliance with manufacturer instructions and warnings' to be the 'driver or operator' for purposes of determining the liability of the vehicle owner or lessee. Ownership liability for motor vehicles ordinarily involves the owner's obligation to procure liability insurance covering crashes caused by anyone who is driving the vehicle (discussed below). The statute requires the owner or operator to be 'covered by primary automobile liability insurance in at least five million dollars ($ 5,000,000) per incident for death, bodily injury, and property damage,' an amount substantially higher than the insurance typically required for conventional motor vehicles (also discussed below).

In the absence of either federal or state statutes that dictate otherwise, state tort law will govern cases in which AI technologies cause injury. Indeed, both the Louisiana and Tennessee statutes described above expressly contemplate this outcome. In general, someone who has been physically harmed by AI will have four potential tort claims:
(1) If the operator negligently deployed the AI, then the victim can recover based on proof of fault, causation, and damages.
(2) If the AI determines the physical performance of a product, the victim can recover from the manufacturer under strict products liability by proving that the product contained a defect that caused the injury. Alternatively, if the AI only provides a service that was otherwise reasonably used by the operator, then the victim must prove that the AI performed in an unreasonably dangerous manner because of negligence by the manufacturer or other suppliers of the AI.
(3) The owner can be subject to negligence liability for negligent entrustment of the AI to the party who negligently caused the victim's injury, or under limited conditions, the owner can be vicariously liable for the user's negligence.
(4) Finally, the victim can recover from the operator and potentially the owner under the rule of strict liability that would apply only if the AI technology is abnormally dangerous despite the exercise of reasonable care.
A plaintiff can pursue any or all of these claims when available. If there are multiple tortfeasors, the liabilities will be apportioned among them based on the rules of comparative responsibility that virtually all states have adopted. Apportionment might be quite difficult when the components of an AI system come from various suppliers and it is unclear which component caused the AI to perform in the manner that caused injury.

Plaintiffs in all states bear the burden of proving negligence, which will be hard to do in many cases of AI-caused injury. For example, negligent deployment could be at issue when injured patients claim that medical professionals committed malpractice by using AI. Medical malpractice in the US involves a departure from customary procedures, so there might be uncertainty about how physicians should reasonably use AI technologies before customary practices have been established.

These problems of proof might seem to be ameliorated in the realm of defective products governed by strict liability, but plaintiffs in almost all states must prove that the product is defective. Proof of defect is often functionally equivalent to proof of negligence and could be problematic in the AI context. For example, whether the operating system of an autonomous vehicle is defectively designed depends on safety performance measures that are likely to be highly contestable, unless there are regulatory measures that conclusively resolve the matter. For other types of products or services, the 'black box' nature of machine-learning algorithms can pose similar difficulties of proving that the AI is defective in some respect or otherwise deployed in an unreasonably dangerous manner. These problems have generated a lively debate among US legal scholars about how existing liability rules might apply to AI-caused injuries, and whether fundamental reforms are required to adequately resolve these matters.

The transition to autonomous vehicles will reduce the incidence of crashes caused by driver error, shifting the emphasis from third-party liability insurance procured by the owner to first-party insurance (also procured by the owner) covering the vehicle and its occupants. A greater proportion of injury costs will probably be shifted to manufacturers, because the autonomous vehicle executes the dynamic driving task instead of a human driver. Under existing laws, manufacturers will be liable for crashes caused by defects in the operating system, in addition to those caused by defects in the hardware of the vehicle – the only source of liability they now face for conventional motor vehicle crashes.

Comparative Observations on the EU and US Systems of Extra-Contractual Liability Rules

While fault liability is the common default and therefore backup liability regime in all European legal systems as well as in the US, strict liability plays a larger – albeit very varied – role in Europe. Whereas victims can rely more extensively on strict liability (either based on specific, individual statutory provisions or on a more far-reaching general clause) in continental European legal systems, common law jurisdictions like the US, Ireland and Malta adopt a highly restrictive attitude towards such risk-based liability. In particular, those latter systems do not have a no-fault, risk-based liability for motor vehicles, although a number of states across the US have adopted legislative schemes of no-fault insurance that come closer to the continental European approach.


Among those European jurisdictions that have adopted strict liability, very significant differences exist as to how comprehensive these regimes are. The US and European legal systems share the general principle (subject to various exceptions and nuances) that plaintiffs bear the burden of proving the facts allowing the court to conclude that the conditions of the claim are fulfilled. Given the specific characteristics of AI, such as notably the 'black box' nature (opacity) of some types of AI systems as well as their increasingly autonomous behaviour, victims may face significant difficulties under both the US and the European legal systems in meeting that burden. This challenge seems particularly relevant with respect to fault-based claims under both US and European tort laws, which require the victim to establish the wrongdoer's fault and the causal link between that fault and the damage allegedly caused by an AI system.


Ernst Karner and Bernhard A Koch

Civil Liability for Artificial Intelligence
A Comparative Overview of Current Tort Laws in Europe*

A. Introduction1

This study will analyse how damage caused by artificial intelligence (AI) systems2 is allocated by the rules of tortious liability currently in place in the EU, and whether – and if so to what extent – the national tort law regimes differ in that respect.3 In light of the peculiar challenges of AI systems presented by their characteristics, including connectivity, autonomy, data dependency, complexity, openness, and opacity,4 we will also examine possible gaps in the laws as they are when it comes to the protection of those harmed. Solutions to be expected on the basis of current European tort laws, also with respect to damage caused by AI, are assessed after weighing the conflicting interests in such cases, which impacts upon the application of liability rules. On that basis, questions in respect of which the interpretation of such rules appears uncertain, and potential inconsistencies in the solutions to be expected, are identified. After all, such problems may reduce the effectiveness or contribute to a fragmentation of the existing liability regimes.

As commissioned, the study focuses only on selected problem constellations that serve to highlight key aspects of civil liability. The presentation is therefore limited to a core outline of the key features of applicable liability rules that distinguish the tort laws of Europe. It does not attempt to strive for an even partially comprehensive review. Significant disparities are shown on the basis of illustrative examples from selected European legal systems representing different legal families, with an obvious starting point in Germanic jurisdictions due to the authors' background. In accordance with the task assigned, the following provides a descriptive report of the diversity of tort laws in the EU as evidenced in particular by cases involving AI systems and will neither attempt to demonstrate potential solutions to overcome such differences nor make recommendations as to whether that is desirable at all. As members of the New Technologies Formation of the Expert Group on Liability and New Technologies,5 we would nevertheless point to its Final Report,6 whose results we obviously continue to endorse.

* The authors are very grateful to many colleagues who kindly provided guidance on their respective jurisdictions, in particular to Marko Baretić, Elena Bargelli, Søren Bergenser, Agris Bitans, Lucian Bojin, Giannino Caruana Demajo, Simona Drukteinienė, Jiří Hrádek, Anne Keirse, Taivo Liivak, Peter Loser, Barbara Novak, Ronen Perry, Eoin Quill, Albert Ruda González, and Wolfgang Wurmnest. The authors further wish to thank Katarzyna Ludwichowska-Redo, Julian Pehm, and David Messner for their valuable research assistance, in particular in preparation of Use Cases I and II, as well as Andrew J Bell for his input on the Automated and Electric Vehicles Act 2018. The latter are or were staff members of the Austrian Academy of Sciences' Institute for European Tort Law and of the University of Graz.
1 Unless indicated otherwise, translations of domestic liability rules in the following are quoted from E Karner/K Oliphant/BC Steininger (eds), European Tort Law: Basic Texts (2nd edn 2018). Websites cited in the following were last accessed on 1 September 2020.
2 This term is extremely broad and encompasses a wide variety of technologies and their implementations. See in particular High-Level Expert Group on Artificial Intelligence, A definition of AI: Main capabilities and scientific disciplines (2019, ), and the definition proposed on p 9: 'Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.'
3 Due to the broad and varied meaning of 'AI system' (fn 2), it is of course impossible to cover all such systems in a one-size-fits-all manner. However, when we address damage scenarios in the following, we proceed from the assumption that there are at least some technologies falling within that definition that are capable and likely to contribute to such cases as envisaged. The challenges we describe will not necessarily apply in all such cases to the same degree, of course, which we will also try to illustrate at just one set of examples specifically below (see infra C.II). In order to meet the task of highlighting the most important issues only, we will disregard purely digital AI systems such as decision support systems in the following, even though these may also cause harm that inter alia will have to be assessed on the basis of tort law rules. Many of the problems addressed in the following will also arise in such cases, though.
4 See, eg, the Commission Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics, COM(2020) 64 final, 2 ff.
5 See .
6 Expert Group on Liability and New Technologies – New Technologies Formation, Liability for Artificial Intelligence and other digital technologies (2019) ; in the following: NTF Report. While the text of this report is reproduced below at 321 ff, the citations in the following will nevertheless refer to the page numbering of the original download.




B. Current Tort Law Regimes and Artificial Intelligence I. Overview In the following, we will examine how current tort law regimes in the EU would apply to AI systems. We will try to highlight existing solutions and challenges if applied to harm caused by such novel technologies. As with other potential sources of harm, essentially four different liability tracks are to be considered: fault liability, vicarious liability, strict (risk-based) liability,7 and product liability under the implementations of the Product Liability Directive (PLD)8 in the Member States. In accordance with the task assigned, the following analysis concentrates solely on (tortious) fault-based, vicarious, and risk-based liability. In light of the ongoing reform of the PLD, its regime will only be addressed to the extent necessary to highlight the interplay between the various delictual tracks of liability. The study also excludes contractual liabilities, which can generally compete with delictual liability claims (eg in Austria, Germany, the Czech Republic, Italy, Greece, the Netherlands, and Poland).9 By virtue of the non-cumul principle, however, the latter is neither true for French law10 in particular, nor for Romania (art 1350 para 3 Romanian Civil Code, Codul civil),11 nor Hungary (§ 6:145 Hungarian Civil Code, Polgári törvénykönyv, Ptk).12 In these jurisdictions, contractual claims exclude any alternative claims arising in tort as a matter of principle.13 However, there are nevertheless also certain exceptions to

7 On the categorisation of vicarious and strict liability, see infra B.IV.2(a). 8 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products [1985] Official Journal (OJ) L 210/29, as amended by Directive 1999/34/EC of the European Parliament and of the Council of 10 May 1999 [1999] OJ L 141/20. 9 See M Martín-Casals, Comparative Report, in M Martín-Casals (ed), The Borderlines of Tort Law: Interactions with Contract Law (2019) 787 ff (no 188 ff), and the country reports contained in that volume. 10 On Belgium, see P Wéry, Droit des obligations I: Théorie générale du contrat (2010) 545 ff. 11 Art 1350 para 3 Codul civil states: ‘Unless otherwise stated by law, neither party may replace the application of the rules concerning contractual liability by reference to other rules that would be more favourable.’ 12 See A Fuglinszky, Risks and Side Effects: Five Questions on the ‘New’ Hungarian Tort Law, ELTE Law Journal 2/2014, 200 (216 ff). 13 For French law, see further J-S Borghetti, France, in M Martín-Casals (ed), The Borderlines of Tort Law: Interactions with Contract Law (2019) 156 ff (no 72 ff).  










the non-cumul principle, especially in France for road traffic accidents and product liability, where the strict tort liability regime that they establish applies regardless of the existence of a contractual relationship between the parties.14 While not necessarily expressed in statutory language,15 tort laws throughout (not just) Europe proceed from the basic rule that casum sentit dominus – whoever suffers harm has to cope with it herself unless there is a justification recognised by law to shift that loss at least in part onto somebody else.16 To begin with, this means that tort law only provides for reparation if someone incurred a loss that is recognised by the applicable legal system as compensable.17 Furthermore, it is universally acknowledged that the victim can only seek indemnification from another who has to account for at least a partial cause of the victim’s loss. Finally, that cause within the other party’s sphere must be evaluated in order to examine whether it falls within one of the bases of liability recognised by the legal system. Originally, this was limited to blameworthy conduct of the addressee of the victim’s claim, but was gradually expanded by adding more or less alternative claims in tort law triggered by the materialisation of a certain risk without the need to identify some misconduct of the person in charge of that risk. While the former (fault-based) variant of delictual liability is universally acknowledged throughout all legal systems (though not necessarily regulated in the same way), the latter (risk-based) liability has not yet been equally met with universal recognition. A third basis of liability stands in between: while it is also triggered by some reproachable conduct, it was not the addressee of the claim herself that misbehaved, but some third person linked to the addressee. This is why some legal systems deem vicarious liability as more closely related to fault-based liability

14 See Borghetti (fn 13) 157 (no 74). It is yet unclear whether the planned reversal of the noncumul rule in personal injury cases as suggested by the current project of a reform of French tort law will indeed be adopted: see art 1233-1 para 1 of the Projet de réforme de la responsabilité civile 2017 (available at ); cf also the recent alternative proposal by the Senate, according to which a victim physically injured could choose between contractual and non-contractual liability (art 1233 para 2 of the Senate’s proposal, ). 15 But see, eg, § 1311 of the Austrian Civil Code (Allgemeines Bürgerliches Gesetzbuch, ABGB): ‘Pure misfortune is to be borne by whomever it is to whose patrimony or person it occurs. …’ 16 § 1311 ABGB as cited in the previous footnote continues: ‘… If, however, someone caused the misfortune culpably, if he violated a legal norm aimed at the prevention of harm by misfortune, or if he intervened in another’s affairs without necessity, he is liable for all harm which would not otherwise have occurred.’ See also H Koziol, Basic Questions of Tort Law from a Germanic Perspective (2012) 1 ff. 17 In the following, we will thereby disregard preventive relief and so-called ‘damage per se’ (as suggested, eg, by Art VI-6:204 DCFR).  


(due to the focus on misconduct), whereas others emphasise the absence of personal (directly) causal conduct of the liable person and therefore qualify it as strict (no-fault) liability.18 Harm caused by or in the course of the operation of some AI system will not be different in kind from any other damage relevant in tort law – the victim may be bodily harmed or even die (in which case survivors will seek compensation), there may be damage to property, or any other type of loss. We will therefore not address the question of compensable damage in this paper.19 However, we would still like to point out that there are indeed significant differences throughout Europe when it comes to the question of what kind of damage tort law will remedy, the most prominent example being pure economic loss, which some jurisdictions are more reluctant to compensate than others.20 Also, the various heads of damage that will be indemnified as a consequence of a certain harm differ,21 not to mention the extent of compensation awarded for such losses.22 Still, we believe

18 See infra B.IV.2(a). 19 See, eg, BA Koch/H Koziol (eds), Compensation for Personal Injury in a Comparative Perspective (2003) with reports from 10 European jurisdictions (Austria, Belgium, England, France, Germany, Italy, the Netherlands, Spain, Sweden, Switzerland). An extensive comparison with reports from 26 European jurisdictions is offered by B Winiger/H Koziol/BA Koch/R Zimmermann (eds), Digest of European Tort Law II: Essential Cases on Damage (2011). 20 See, eg, WH van Boom/H Koziol/CA Witting (eds), Pure Economic Loss (2004). 21 Personal injuries, for example, do not allow secondary victims such as close relatives to claim for their own ensuing immaterial harm, even though damages for bereavement (if the primary victim is killed) are now deemed compensable in most European legal systems, though still not all, and the requirements differ substantially. In Austria, for example, surviving relatives are only compensated for bereavement if the primary victim was killed with at least gross negligence by the tortfeasor, but not in cases of strict liability, and there is no compensation at all for family members who endure emotional harm because a close person is severely injured (but survives). See also – as another example – the wide differences regarding the compensability of the loss of housekeeping capacity as a consequence of personal injury: E Karner/K Oliphant (eds), Loss of Housekeeping Capacity (2012). 22 Just consider, eg, the dramatic differences in the amounts of compensation for immaterial harm such as pain and suffering. See, eg, the comparison of the monetary value of the various heads of damages resulting from bodily injury offered by Swiss Re at . Immaterial harm resulting from bodily injuries in particular is indemnified with considerable sums in some countries and very nominal amounts in others, with Malta not compensating at all for pain and suffering. See also BA Koch, 15 Years of Tort Law in Europe – 15 Years of European Tort Law? in E Karner/BC Steininger (eds), European Tort Law 2015 (2016) 704 (714 ff, in particular the comparative charts at 715, 716, and 718); E Karner, Quantification of Moral Damages in Personal Injury Cases in a Comparative View, in H Koziol/U Magnus (eds), Essays in Honour of Jaap Spier (2016) 117.  

that the involvement of AI in causing damage will not (and should not) alter the fundamental attitude of legal systems regarding its compensability as such. Therefore, we will only highlight the remaining key elements of liability in the following, starting with causation. We will then address the three main bases of liability (fault, strict and vicarious liability), but – in line with the task assigned for this project to highlight key differences only – leave aside other aspects such as contributory negligence23 or prescription24 despite their undeniable importance in practice. We will also not cover alternative or complementary systems providing relief to victims of accidents, such as insurance or fund solutions. However, it is important to note upfront that the immediate relevance of whatever we say on tort law in the following may be reduced or at least altered in those jurisdictions where such alternative regimes play an important role in practice, often superseding the relevance of tort law as pursued before courts, even if such regimes are founded upon tort law concepts. This is true, for example, for the systems of traffic or patient insurance in the Nordic countries.

II. Causation

All jurisdictions require that the tortfeasor account for at least partially causing the harm complained of, whether by individual conduct, by conduct of another attributable to the liable person, or via the materialisation of some risk that emerges from within the tortfeasor’s sphere.25 As a general rule (though subject to exceptions), it is typically the victim who has to trace her loss back to the defendant. This is particularly challenging, however, if the suspected cause of the victim’s loss is related to the operation of an AI system, due to its very nature and its

23 See, eg, U Magnus/M Martín-Casals (eds), Unification of Tort Law: Contributory Negligence (2004). 24 The most recent comparative study on this aspect is I Gilead/B Askeland (eds), Prescription in Tort Law: Analytical and Comparative Perspectives (2020). 25 J Spier/O Haazen, Comparative Conclusions on Causation, in J Spier (ed), Unification of Tort Law: Causation (2000) 127: ‘All jurisdictions recognise causation as a requirement of tortious liability …’ See also Comment A to Art VI-4:101 DCFR: ‘Notwithstanding the many questions connected with the concept of causation, it is an undisputed cornerstone of all European legal systems of liability – including Community law – that the legally relevant damage must have been “caused” either by the liable person or by another person or a material source of danger for which that person bears responsibility.’

peculiar features such as complexity, opacity, limited predictability and openness.26 These problems arise (at least) on four levels:
– First and foremost, the AI itself may have malfunctioned. However – and this distinguishes AI systems from other technologies – it may have worked as intended, but nevertheless produced an undesired outcome due to its self-learning abilities and the capability of adapting its behaviour to new challenges. These may stem in particular from incoming data, either derived from its own sensors or from external sources, which are processed internally on the basis of algorithms. The latter themselves may be adjusted in the course of the operation. Linking an undesired outcome produced by an AI system to human misconduct becomes more difficult the more such misconduct is detached from the outcome by the system’s nature and operation. Such misconduct may include flawed initial design of the algorithm and/or its interplay with sensors or external data, or failure to pre-define boundaries for the self-development of the AI system. Also, the user may have been negligent in applying the AI system, eg by operating it in an improper environment or under unsuitable conditions.
– Second, AI systems embedded in hardware may cause harm due to flaws of the hardware itself, or due to its AI features, or because the latter caused the former to fail. Identifying what exactly initiated the process ultimately leading to the victim’s loss is thereby complicated even further. While the hardware failure itself may be easy to detect, identifying what caused it (and thereby who may ultimately be responsible for the ensuing loss) may not. If liability vis-à-vis the victim is based upon a defect within the combination of AI and hardware as such, these problems may obviously arise at the recourse level as well.
– Third, the operation of AI systems is influenced by human conduct, be it by their users, by content providers, by their manufacturers, by software or hardware component producers, or by third parties. This adds a potential further layer of complexity when trying to analyse ex post what exactly caused the victim’s harm. The more players are involved, and the more each of them may have influenced the ultimate ‘behaviour’ of the AI system, the more complex identifying a liable person may be. While the involvement of multiple persons is not an exclusive feature of AI systems, of course, the aforementioned particular characteristics of AI may make it especially challenging to attribute an AI system’s ‘behaviour’ to any specific person.

26 See already supra at fn 4 and the reference therein.

– Fourth, implementations of AI systems often significantly rely on interaction with other technologies, often themselves AI-based. Particularly in the latter case, the difficulties in identifying what ultimately caused the victim’s harm are multiplied exponentially. Connectivity also brings about vulnerability to malicious interference, which may further distort the quest for the true cause of someone’s loss.27

In the following, we will therefore look at various aspects of proving causation with a particular focus on the peculiar challenges of establishing that an AI system at least contributed to causing damage.

1. Procedural starting points

(a) Standard of proof

Succeeding in proving causation depends inter alia on the applicable standard of proof, ie the minimum degree of probability that an alleged fact really occurred which must be established for the court to accept that fact as given (the degree to which the court must be persuaded that an allegation is true). While some countries (such as Cyprus, Ireland, Malta, or – subject to qualifications – the Nordic countries28) are satisfied with a likelihood (at least in theory) just exceeding 50 % (‘more likely than not’, the so-called ‘balance of probabilities’, the ‘preponderance of the evidence’ or överviktsprincip),

27 Without addressing the problem of hacking further in this study, it is worth noting that this may contribute to distortions of tort law of a different kind. As with all criminal behaviour where the offender herself cannot be identified, brought to court, or has insufficient funds to indemnify her victims, the latter may seek redress from other players involved, such as those who should have shielded them from such attacks. The more remote such third-party addressees of compensation claims are, however, the less traditional notions of tort law work in practice, as experienced in the area of compensation for acts of terrorism. See, eg, BA Koch (ed), Terrorism, Tort Law, and Insurance (2004). 28 This varies from country to country, though: ‘In Scandinavian law …, the starting point is the “balance of probabilities” test. In Norwegian law, the court cannot impose liability if the court finds that the probability of the fact which is disputed is 50 % or less. The fact must seem more likely than not to the court. If it seems to the court that it is a 50/50 situation, the alleged tortfeasor must be acquitted. In Swedish law, as a starting point, “clear probability” is required in case law. Also under Danish law, the standard of proof is generally higher than a balance of probabilities. However, the applicable standard of proof may vary from one area of tort law to another.’ V Ulfbeck/M-L Holle, Tort Law and Burden of Proof – Comparative Aspects. A Special Case for Enterprise Liability? in H Koziol/BC Steininger (eds), European Tort Law 2008 (2009) 26 (29). See also E Karner, The Function of the Burden of Proof in Tort Law, in H Koziol/BC Steininger (eds), European Tort Law 2008 (2009) 68 (71 f).  

most other procedural laws in (continental) Europe expect much more. In Austria, for example, it used to be almost certainty (though not necessarily expressed in percentages),29 but this has meanwhile been reduced to a mere (but still) ‘high degree of probability’ or ‘substantial likelihood’, requiring the judge to be fully convinced, but without pinpointing it to exact percentages of probability.30 Yet other jurisdictions merely emphasise the discretion of the court to come to a conclusion without defining a specified standard of proof.31 Which theory applies has a decisive impact on the outcome of tort cases where causation is disputed:32 In common law or Nordic jurisdictions, the claimant (at least in theory) only needs to prove that her version of the story is more likely than that told by the defendant. If all other requirements for tortious liability are given, this means that she wins the case (and is therefore indemnified in full) if there is a mere 51 % likelihood that her loss was caused by the defendant in

29 Such a high standard still seems to apply – at least in theory – for example in the Czech Republic, cf L Tichý/J Hrádek, Causal Uncertainty and Proportional Liability in the Czech Republic, in I Gilead/M Green/BA Koch (eds), Proportional Liability: Analytical and Comparative Perspectives (2013, in the following: Gilead/Green/Koch, Proportional Liability) 99 (107). 30 See, eg, W Rechberger in H Fasching/A Konecny (eds), Kommentar zu den Zivilprozessgesetzen (3rd ed 2017) vol III/1, Vor § 266 ZPO, no 11 f: While, as a rule, high demands must be made on the probability required for proof, this is not an objective quantity and therefore a certain range is inherent in such a standard of proof rule. From a comparative perspective, see, eg, M Brinkmann, Das Beweismaß im Zivilprozess aus rechtsvergleichender Sicht (2005). 31 Cf, eg, sec 232 para 1 of the Estonian Code of Civil Procedure (Tsiviilkohtumenetluse seadustik): ‘The court evaluates all evidence pursuant to law from all perspectives, thoroughly and objectively and decides, according to the conscience of the court, whether or not an argument presented by a participant in a proceeding is proven considering, among other, any agreements between the parties concerning the provision of evidence.’ (transl at ). Similarly sec 97 para 1 of the Latvian Civil Procedure Law (Civilprocesa likums, ). See also G Comandé/L Nonno, Proportional Liability in Uncertain Settings: Is it Precautionary? Italian Insights and Comparative Policy Considerations, in Gilead/Green/Koch, Proportional Liability (fn 29) 199 (202), describing a ‘passage from the burden of proof based on the unwritten rule of certainty (request to prove facts beyond any doubt) to one based on “the preponderance of evidence” (a judgment on plausibility)’ in Italy. 32 However, it is difficult to predict the exact impact of such doctrinal differences on actual court practice. See, eg, K Clermont/E Sherwin, A Comparative View of Standards of Proof, American Journal of Comparative Law (AJCL) 50 (2002) 243 (sharp contrast); M Brinkmann (fn 30) 72 ff (practically no difference provable); S Steel, Proof of Causation in Tort Law (2015) 55 (plausible evidence for a practical difference); M Schweizer, The civil standard of proof – what is it, actually? International Journal of Evidence & Proof 20 (2016) 217 (attempting to prove the point empirically, with little difference shown).

comparison to any alternative explanation. The mirror image of that outcome is equally true, of course – she collects no compensation at all if the probability is just below or exactly 50 %. If the threshold is much higher, as in most continental European systems, the claimant will not win the causation argument as easily – presuming for the sake of the argument that the ‘high probability’ amounts to around 80 %, the victim loses her case against the defendant even though she can successfully convince the court of a 79 % probability that the cause of her loss can be traced into the defendant’s sphere.33 While the former theory endorsing a lower threshold obviously aids the victim, the latter protects the tortfeasor. From the defendant’s perspective, the likelihood of being held liable even without actually having caused the loss is much higher in jurisdictions applying the ‘more likely than not’ standard. Either way, however, once the standard of proof is satisfied, the victim collects full compensation (and is compensated for 100 % of her loss, even if the court is convinced only to a degree of just above 50 % or 80 % respectively that it was caused by the defendant). Hence, a 51 % likelihood that an AI-based application caused harm may lead to full liability of the person to whom it is attributable in one country and fully exonerate that same person in another. In tort cases involving AI-driven technologies, where proving causation may be more difficult than in other scenarios, the applicable standard of proof is therefore a crucial factor determining the outcome of cases, giving claimants in common law and Nordic jurisdictions a higher chance of succeeding at least on the causation argument than in other countries. If experts, for example, cannot establish with more than a 55 % probability that the developer’s failure to adjust the light sensors of a cleaning robot (which was blinded by a reflection and therefore made a wrong turn) was the reason for the ensuing loss, the developer’s omission will be deemed the cause of the harm in the former jurisdictions, but not in the latter.

(b) Procedural alleviations of the burden of proof

The higher the standard of proof, the more claimants will benefit from any alleviations that the applicable laws of procedure provide.34 The range of options and

33 As said though, in practice the threshold is not expressed in numerical terms; judges merely conclude that there was a ‘high’ probability or that they were ‘convinced of the truth of the allegations’ without specifying to which degree. 34 Cf V Ulfbeck/M-L Holle (fn 28) 27: ‘At the very general level it can be observed that the higher the standard of proof, the greater the effect of rules allocating the burden of proof from one person to another. Likewise, the lower the standard of proof, the less may be the need for proof allocating rules in favour of the person bearing the burden of proof.’

the likelihood that courts are willing to offer them to claimants differ throughout Europe, though. Courts may already be satisfied that the cause lay within the defendant’s sphere, for example, if the claimant can prove an event attributable to the defendant that typically leads to losses of the kind incurred by the claimant, even though the latter cannot demonstrate that the necessary sequence of occurrences connecting the proven event with the loss really followed.35 Apart from this Germanic ‘Anscheinsbeweis’ (or prima facie evidence), the common law res ipsa loquitur doctrine may also be cited in this context.36 The latter in its core infers negligence from the nature of the victim’s injuries even though any direct evidence of the defendant’s wrongdoing is lacking: ‘Usually pedestrians do not get struck by falling barrels of flour plummeting from a second floor storage facility.’37 In the case alluded to, the accident that ‘spoke for itself’ led the court to conclude that the defendant’s servants had failed to properly store these barrels in her warehouse even though there was no evidence proving any misconduct in her sphere.38 This classic case may indicate a very modern application, though: after all, usually pedestrians also do not get struck by drones or their cargo falling out of the sky. Related thereto is the practice eg of Austrian courts to infer causation from a violation of a so-called ‘Schutzgesetz’ (protective norm): if the claimant can prove that the defendant infringed a provision prescribing specific conduct that was introduced (inter alia) in order to prevent harm, and the claimant suffered the kind of harm that should have been avoided, this also serves as prima facie evidence of a causal link between the defendant’s conduct and the damage.39 Any such factual presumption linking one proven fact to another does not alter the burden of proof, however: ‘It only denotes that for the time being that bur-

35 But see V Ulfbeck/M-L Holle (fn 28) 31: ‘Scandinavian law does not know the German “Anscheinsbeweis” as a concept.’ 36 Applicable eg in Ireland and Cyprus. 37 I Giesen, The Burden of Proof and other Procedural Devices in Tort Law, in H Koziol/B Steininger (eds), European Tort Law 2008 (2009) 49 (57), referring to the landmark English case Byrne v Boadle, 159 English Reports (Eng Rep) 299. 38 As Pollock J said in that case: ‘A barrel could not roll out of a warehouse without some negligence, and to say that the plaintiff who is injured by it must call witnesses from the warehouse to prove negligence seems to me preposterous. … I think that those whose duty it was to put it in the right place are prima facie responsible, and if there is any state of facts to rebut the presumption of negligence, they must prove them.’ 39 See, eg, E Karner in H Koziol/P Bydlinski/R Bollenberger (eds), KBB – Kurzkommentar zum ABGB (6th ed 2020) § 1311 no 6. On similar practices in Germany and Denmark, see V Ulfbeck/M-L Holle (fn 28) 39.

den has been discharged’,40 at least on that point. After all, the claimant still needs to prove the initial trigger of the factual presumption with the applicable standard of proof, and the defendant can produce evidence questioning that the presumption actually applies in her case, thereby rebutting it. In some instances, courts in those jurisdictions requiring a higher standard of proof were willing to lower that bar in scenarios where victims for systemic reasons typically, or at least very often, fail to reach such a high threshold. This was evidenced, for example, in medical malpractice cases.41 Whether or not these courts will also come to the rescue of claimants harmed by AI technology (and if so which) is as yet unclear and unpredictable, as this lowering of the standard of proof emerged gradually in case law, shaped by the judges’ impression of the peculiar cases before them. It is equally not foreseeable whether courts throughout Europe will develop particular detours for victims in complex case scenarios such as the one at the centre of the dispute in the French vaccination case before the CJEU.42 What seems more likely, though, is that courts at least in some jurisdictions may – also in AI cases – require the defendant to further substantiate her denial of the claimant’s allegations with evidence in the defendant’s exclusive control, and/or sanction failure to produce such evidence:43 If, for example, an AI-driven

40 I Giesen (fn 37) 56. 41 See, eg, BA Koch, Causal Uncertainty and Proportional Liability in Austria, in Gilead/Green/ Koch, Proportional Liability (fn 29) 77 (78 f). See also for the Netherlands I Giesen (fn 37) 62: ‘The information deficit a patient usually encounters when suing a medical practitioner can be balanced by imposing on the doctor the duty to come forward with certain information at his disposal, thus levelling the “playing field” between both parties to some extent.’ 42 CJEU 21.6.2017 C-621/15, N W et al v Sanofi Pasteur, ECLI:EU:C:2017:484: French courts had already before that case mitigated the difficulties of victims of vaccines in proving causation by accepting ‘on the basis of a set of evidence the seriousness, specificity and consistency of which allows it to consider, with a sufficiently high degree of probability, that such a conclusion corresponds to the reality of the situation’. The CJEU was called to determine (and confirmed) that the French special treatment of the proof of causation was compatible with the PLD. The court insisted that ‘under the principle of procedural autonomy and subject to the principles of equivalency and effectiveness, it is for the national legal order of each Member State to establish the ways in which evidence is to be elicited, what evidence is to be admissible before the appropriate national court, or the principles governing that court’s assessment of the probative value of the evidence adduced before it and also the level of proof required’ (para 25). 43 This could also be seen in medical malpractice cases, although not in all jurisdictions (eg not in the Czech Republic), see BA Koch, Medical Liability in Europe: Comparative Analysis, in BA Koch (ed), Medical Liability in Europe (2011) 611 (633 f with further references). Cf I Giesen (fn 37) 59 ff on the so-called ‘sekundäre Behauptungslast’ and related concepts. Giesen also argues (at 66) that this would be a preferable rule on a European scale, avoiding the harsh consequences of leaving the burden of proof where it is or shifting it: requiring also the opposing party to submit evidence to the court that is in his exclusive control (and beyond reach for the claimant) may ‘pro 

machine had a built-in logging device, but the keeper of the machine refuses to produce the log files in the case of an accident, courts may well be willing to conclude that whatever could have been proven by such files in favour of the keeper was not in fact given, or go even further and presume that the allegations of the victim are true, thereby now burdening the keeper with proving the opposite. Whether or not this would be truly beneficial for victims of AI systems not only depends upon the existence of devices that should have logged relevant data, but also on what kind of information is to be expected from such files that could indeed contribute to clarifying the cause of harm. If, for example, a black box only records the collection of internal and external data (eg incoming GPS information and the data collected by the LIDAR44) as well as the system’s reaction thereto, the processing of the data within the system as such would not be part of the log file (ie why the combined information led the vehicle to make that left turn). If this analysis is where the flaw is suspected, the log file would not help the claimant’s story in the first place.
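
What such a log file might and might not reveal can be sketched as follows. This is a purely illustrative, hypothetical example (all field names and values are invented for this purpose and are not taken from any actual system or standard); it merely shows that inputs received and the reaction taken can be recorded, while the internal processing linking the two is not:

import json
from datetime import datetime, timezone

# Hypothetical single log entry: the inputs received and the reaction taken are
# recorded, but nothing captures *why* the combined inputs produced a left turn.
log_entry = {
    "timestamp": datetime(2020, 11, 5, 14, 30, 2, tzinfo=timezone.utc).isoformat(),
    "inputs": {
        "gps": {"lat": 48.2082, "lon": 16.3738},   # incoming GPS information
        "lidar_nearest_obstacle_m": 4.2,           # data collected by the LIDAR
    },
    "reaction": {"steering": "left", "speed_kmh": 12},
}

print(json.dumps(log_entry, indent=2))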

(c) Administrative law measures assisting those charged with the burden of proof

While not a matter of procedural, but rather of administrative law, requirements to produce and/or secure evidence obviously contribute to assisting those burdened with proving causation. This includes, in particular, duties to install logging or recording devices.45 While such duties will typically (but not necessarily)

vide an efficient means to level, if not equal, the procedural chances between the litigants in the proceedings without at the same time opening Pandora’s box in the sense of allowing a vast amount of added, new and/or frivolous tort claims to pop up. If this duty were to be accepted on a more general scale in European (tort law) systems, this would mean that one party, usually the defendant, would be obliged to provide certain information, thereby allowing the opposing party (usually the plaintiff) to use that information to strengthen its own case or at least to make it easier to provide the proof demanded for. It would do so without going so far as to completely reverse the chances of both parties as the typical reversal of the burden of proof would entail. Hence, the possible fear for opening floodgates to frivolous claims can be put to rest.’ 44 LIDAR is short for ‘light detection and ranging’, which is one of the technologies used, eg, in autonomous vehicles to identify obstacles in the path of the car. Cf, eg, . 45 See (in particular in relation to autonomous medical systems) E Karner, Liability for Medical Robots and Autonomous Medical Devices, in E Karner/U Magnus/J Spier/P Widmer (eds), Essays in Honour of Helmut Koziol (2020) 57 (63 ff). Such a logging duty has also been explicitly recommended by the NTF Report (fn 6) 7, 49 f, Key Finding 20: ‘There should be a duty on producers to equip technology with means of recording information about the operation of the technology (logging by design), if such information is typically essential for establishing whether a risk of the  



be directed at the producers of AI systems, they will have the effect of aiding their users should things go wrong in the operation of such systems. Such obligations may, however, also be imposed on the users themselves, even if only a duty to activate (or not de-activate) such recording devices, or ancillary duties to regularly monitor their functionality, the battery status (if self-powered), or the capacity of the storage unit.46 In various areas of automation, corresponding mechanisms are already provided for; they are intended primarily to prevent future harm, but in part also to alleviate the evidentiary burden.47 In Germany, for example, the 2017 reform of the Road Traffic Act (Straßenverkehrsgesetz, StVG) introduced a duty to record and store the position and time of any change in vehicle control from the driver to the highly or fully automated vehicle or vice versa as determined by a satellite navigation system (§ 63a para 1 StVG),48 leaving details such as the addressee of such a duty49 for the competent federal ministry to regulate

technology materialised, and if logging is appropriate and proportionate, taking into account, in particular, the technical feasibility and the costs of logging, the availability of alternative means of gathering such information, the type and magnitude of the risks posed by the technology, and any adverse implications logging may have on the rights of others.’ 46 There may also be duties related to such logging functions which have a different purpose; cf, eg, the German § 63a para 4 StVG, which requires the keeper of a highly or fully automated vehicle to delete all logged data after six months (unless there has been an accident in the meantime, in which case such data have to be saved for three years). 47 For air traffic, see Chapter 6.3.1 ICAO Annex 6 (‘Operation of Aircraft – Flight data recorders and aircraft data recording systems’). On the development of accident data recording for motor vehicles at the European level, cf RR Schmidt-Cotta, Event-Data-Recording – Fluch oder Segen, in E Hilgendorf/S Hötitzsch/LS Lutz (eds), Rechtliche Aspekte automatisierter Fahrzeuge (2015) 80 ff. 48 See, eg, T Hoeren/M Böckers, § 63a StVG und der Umgang mit Fahrzeugdaten beim hoch- bzw. vollautomatisierten Fahren, JurPC-Web-Dok 0021/2020 (). It is interesting to note that this recording obligation was introduced specifically to prevent the user of the vehicle from sweepingly blaming its automated functions for an accident, and only as the second reason does the explanatory memorandum refer to improving the user’s chances to rebut the presumption of fault: ‘Mit § 63a StVG (neu) wird sichergestellt, dass der Fahrzeugführer sich nicht pauschal auf ein Versagen des automatisierten Systems berufen kann. Gleichzeitig ermöglicht die Regelung es dem Fahrzeugführer, einen gegen ihn erhobenen Schuldvorwurf sogar positiv zu entkräften, sollte zB ein Unfall ausschließlich auf ein Systemversagen zurückzuführen sein.’ Bundestags-Drucksache 18/1300, 24. 49 This also depends on how and where such data will be stored. Apart from a black box in the vehicle itself, there may be some online logging stored with the backend service to which the vehicle is linked, in which case of course the operator of such a backend service will be at the centre of attention, who will not necessarily be the manufacturer (and of course also not the user), so a third party with regard to the accident.

(§ 63b StVG).50 At EU level, art 6 para 1 lit g of the new General Safety Regulation51 inter alia requires (not just autonomous) motor vehicles to be equipped with event data recorders, which are defined as systems ‘with the only purpose of recording and storing critical crash-related parameters and information shortly before, during and immediately after a collision’ (art 3 para 13 leg cit).52 The data collected by such systems may also be relevant as evidence in analysing specific accidents.

2. Basis of liability

Irrespective of the procedural framework in each jurisdiction and its impact on the outcomes of individual cases, the likelihood of success in establishing causation obviously primarily depends upon what exactly the claimant needs to prove, ie which (and what kind of) cause she claims to have triggered liability. In those jurisdictions where strict liability plays a lesser role, it is primarily human conduct and aspects related thereto that stand at the centre of the court’s and therefore also of the claimant’s attention. While the latter need not prove fault as such (which is a legal assessment by the court), she still needs to come forward with evidence of conduct attributable to the defendant that can be qualified as faulty. If she has to trace back the cause of her loss to some conduct influencing or at least initiating the operation of some AI-driven system, she will typically also need to prove how exactly the conduct at the beginning of the chain of causation impacted upon the operation in a way that led to her harm. Unless courts are willing to accept prima facie evidence (which will be tricky at least initially where there is no experience yet with what ‘typically’ results from AI operations),

50 Such a regulation has not yet been issued, though, in anticipation of measures on the EU level. Cf also § 1g of the proposed further amendments to the StVG, now introducing rules for (fully) autonomous vehicles: . 51 Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27.11.2019 on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users, [2019] OJ L 325/1, applicable from 6.7.2022 onwards. 52 Art 6 para 4 leg cit gives further specifications for such event data recorders, such as the capability to record and store ‘shortly before, during and immediately after a collision … the vehicle’s speed, braking, position and tilt of the vehicle on the road, the state and rate of activation of all its safety systems, …’; lit b also expressly prohibits the possibility to deactivate such devices.

the claimant will not only have to prove the conduct of (or attributable to) the defendant as such, but also how exactly such conduct subsequently led the AI system to inflict harm upon the claimant, which appears to be a major challenge in such a black box situation. While human conduct is literally visible and can be witnessed (even the abstention from action), identifying the processes within an AI system and convincing the court thereof seems much more challenging, even if the defendant’s conduct relating to the AI system can be traced back with convincing evidence. Even if a claimant can prove, for example, that someone entered false data into an AI system, this typically does not yet suffice in and of itself to prove that this error led the AI system to ultimately cause the harm at stake if such data is only part of the overall collection of information on which the ‘behaviour’ of the system is based, particularly if the system includes methods to validate individual information with big data. When it comes to strict liability, on the other hand, the weight of the burden of proving causation depends upon how the trigger of such a basis of liability is defined in the legislation. Typically, liability irrespective of fault is linked to the materialisation of a certain risk, either as such or as one linked to a certain object or activity. If it is the latter, the claimant merely needs to prove that the activity or the operation of the object caused her loss without going further into explaining why she was harmed. If the keeper of an animal is strictly liable for any harm it causes, the victim only has to prove that she was injured by the animal, but not that the keeper did anything wrong (such as failing to keep it under control). The hurdle for proving causation is therefore much lower than in a fault liability setting. Similarly, if it is the operation of a motor vehicle that triggers strict liability, again the victim only needs to submit evidence that she suffered loss in the course of the vehicle’s operation, but neither that it was defective nor that it was improperly conducted, maintained or controlled. If there were strict liability for some AI-driven system, presumably proof of harm resulting from its mere operation would already trigger its keeper’s (or operator’s) duty to indemnify the victim (unless some defence applies). The victim would therefore not have to explain to the court what exactly went wrong within the AI system and why, and how this can be traced back to the operator, developer, or programmer, and she also need not show that it was the software rather than the hardware that was flawed. If strict liability is attached to a more generic risk or ‘dangerous activity’, proving causation is more demanding, as such a rule does not statutorily define a specific object or activity as dangerous and proceed from such a presumption, but requires the claimant to first prove that whatever caused her loss was indeed as ‘dangerous’ as required by the general clause. The less experience there is with a novel AI system, the more (or less) difficult it will be to clear this first hurdle, depending upon the court’s perception of such technology. This inter alia also depends upon the

degree of trust that can be established when introducing such AI systems onto the market. After all, autonomous systems, for example, are promoted as safer than systems under human control – are they, then, still so ‘abnormally’ or ‘extraordinarily’ dangerous as to justify that they fall within the scope of the general clause? Only if this question is answered in the affirmative (which will at least initially require expert evidence on the inner workings of the AI system) can the victim proceed to prove that the operation of the AI system as such caused her loss, which may even have been seen with the naked eye and without technical expertise.

3. Burden of proving causation

For the reasons set out above, tracing the victim’s loss back to the defendant may be particularly challenging if the suspected cause of that loss is related to the operation of an AI system. As explained, this is due to the increasingly autonomous nature of some AI systems and their peculiar features such as complexity, opacity, limited predictability and openness. However, the varying challenges of establishing causation obviously only affect the victim if she is charged with the burden of proof in the first place. If that burden were shifted to the defendant, the problems of convincing the court that the AI system (or, under a fault-based regime, the conduct of designing, producing, initiating, controlling or operating it) did not cause the claimant’s loss are at least similarly substantial, if not more severe: after all, if the claimant has to prove that she was harmed by the AI system, she can focus her analysis of the facts on that, whereas the defendant will typically have to investigate alternative causes as well in order to show that these were the true causes of the harm instead. However, the latter endeavour will typically be at the centre of the defence strategy either way, as proving possible alternative causes will already serve to weaken the evidence submitted by the claimant if she remains burdened with proving causation. Shifting the burden of proving causation is a clear intervention in favour of the victim and therefore openly prefers one party over the other.53 As mentioned above, courts in some jurisdictions have taken that step, for example, if evidence in the defendant’s exclusive control was not produced by the latter (which is typically

53 Cf the Norwegian solution described by B Askeland in B Winiger/H Koziol/BA Koch/R Zimmermann (eds), Digest of European Tort Law I: Essential Cases on Natural Causation (2007, in the following: Digest I) 1/16 no 8: ‘The Norwegian Supreme Court has, through these cases, established a rule of shifting the burden of proof: A defendant who claims that there is no causal connection between his faulty behaviour and the damage has to prove that the harm would have occurred even if he had acted prudently. Otherwise the defendant will be deemed to be the cause of the harm.’

a procedural way of shifting the burden of proving at least those aspects which such evidence could have shown).54 A rather unique approach was taken by the German courts in medical malpractice cases, where they concluded that if the defendant is proven to have committed some grave fault, the causal link between such misconduct and the harm will be presumed. This has meanwhile been codified by § 630h para 5 of the German Civil Code (Bürgerliches Gesetzbuch, BGB). Other jurisdictions, too, are prepared to link conclusions regarding causation to the degree of fault.55 In light of the mostly lacking relevance of human conduct in AI scenarios, though, this may not apply there. Another noteworthy domestic specialty is the Dutch omkeringsregel, introduced by courts in the mid-1970s. The rule essentially provides for a rebuttable presumption of causation if ‘a wrongful act creates or increases a certain risk of damage and that specific risk actually materialises’. After a temporary expansion of the rule in practice, it is currently presumed to apply mostly to the breach of traffic or safety rules.56 Its relevance in AI cases may therefore be limited to technologies specifically regulated by law, including certain specific rules of conduct (such as update obligations or the like). Shifting the burden of proving causation in substantive law is otherwise – at least in civil law jurisdictions57 and at least in theory – for the legislator to decide. The legislator may choose to do so for various reasons, for example in order to promote the introduction of certain new technology by coupling it with a facilitated path for potential victims to obtain compensation.58 The European legislator, on the other hand, decided to expressly leave the burden of proving causation

54 Supra B.II.1(b). 55 V Ulfbeck/M-L Holle (fn 28) 38, citing Denmark and Norway as examples. 56 I Giesen (fn 37) 57. Cf the above-mentioned practices of Austrian and German courts in cases of a violation of protective norms (supra at fn 39). 57 But see Best v Wellcome Foundation Ltd and Others, [1993] 3 IR 421, where the Irish Supreme Court per Finlay CJ said: ‘The function which a court can and must perform in the trial of a case in order to achieve a just result is to apply common sense and a careful understanding of the logic and likelihood of events to conflicting opinions and conflicting theories concerning a matter of this kind.’ The Court therefore held that a manufacturer who distributed a batch of vaccines with known indications of toxicity shall be liable despite conflicting expert opinions as to whether the claimant’s condition was really caused by that vaccine. Under the circumstances and in light of the proven negligence of the defendant, the Court was satisfied with a close temporal proximity between the administration of the vaccine and the onset of symptoms. 58 This was done, eg, by the Austrian legislator when introducing strict liability for genetically modified organisms into §§ 79a ff Gentechnikgesetz (with § 79d providing a rebuttable presumption of causation if the GMO was ‘under the circumstances able to cause the harm’) in response to a highly successful referendum demanding a ban on genetic engineering in agriculture.

in product liability with the victim (art 4 PLD) in order to achieve ‘a fair apportionment of risk between the injured person and the producer’ (Recitals 2 and 7 of the PLD).59 A unique solution was recently introduced in Belgium: Art 8.4 of the new Civil Code allows the judge ‘in exceptional circumstances’ and as a matter of last resort to determine ad hoc who should bear the burden of proof if the standard rules seem ‘manifestly unreasonable’. Such a step may only be taken, though, if the toolbox of the law of evidence does not offer adequate solutions, eg by ordering the opposing party to produce evidence known to be in her hands.60 Instead of shifting the burden of proving causation entirely, such a shift could be combined with a trigger such as the ‘inherent risk’, or with a requirement that the harm complained of be of a kind that typically results from the technology under the circumstances.61 This means that the claimant first has to overcome at least that threshold (even though it may be lower). Any deviation from the (otherwise universally acknowledged62) rule that whoever claims something must prove it, however, requires sufficient justification ‘because the implications of a shift in the burden of proof are quite serious and not to be taken lightly’.63 One such justification64 often found when it comes to proving causation is a systemic disadvantage for the claimant to succeed on

59 See the CJEU ruling in Sanofi Pasteur (supra fn 42). 60 Art 8.4 para 4 (that entered into force on 1.11.2020) provides: ‘In exceptional circumstances, the judge may determine, by a special reasoned judgment, who bears the burden of proof where the application of the rules set out in the preceding paragraphs would be manifestly unreasonable. The judge may make use of this power only if he has ordered all the necessary investigative measures and has ensured that the parties cooperate in the taking of evidence, without obtaining sufficient evidence.’ 61 Cf fn 58. Cf also § 630h para 1 BGB: ‘An error on the part of the person providing treatment is presumed if a general treatment risk has materialised which was fully controllable by the person providing treatment and which has led to injury to the life, body or health of the patient.’ 62 Cf I Giesen (fn 37) 50: ‘The general, worldwide accepted rule regarding the (division of the) burden of the proof is that each party to civil proceedings (both the claimant and the defendant) is required to prove those facts that form the minimally required factual content of the legal rule upon which the claim or defence is based.’ 63 I Giesen (fn 37) 51 f. 64 ‘Other reasons used are the idea that he who benefits from a certain activity should also bear the extra burdens related to that activity (profit theory), the idea of channelling liability in a certain direction, the idea of promoting the preventive effects of (a harsher form of) liability, the need to protect fundamental rights at stake, the wish to decrease the dependence of one party, the need to decrease the imbalance in information between the litigants, the existence of insurance coverage, or to serve the goal of being able to invoke a substantive rule despite evidential difficulties.’ I Giesen (fn 37) 52.  

that point: insisting that the victim prove exactly what caused her loss ‘would put that victim in unreasonable difficulties due to, for instance, the technical or organisational complexity of the defendant’s activities’.65 The less insight the victim has into the processes that led to her damage (even with expert help), the more unsatisfactory it may appear to leave her entirely with the burden of proving causation, bearing in mind the ultimate goal of achieving a level playing field for both parties to the case. This may also be true for some loss scenarios involving AI systems. The NTF Report, for example, tried to honour the basic rule that it is for the victim to prove what caused her loss, but nevertheless foresaw exceptions in light of certain arguments justifying such deviations.66 Shifting the burden of proving causation does not mean, however, that this topic is no longer disputed in court. After all, it is not a presumption of causation (or if it is expressed in such a way, it is a merely rebuttable presumption), since the defendant might still succeed in proving that the AI system was indeed not the cause of the damage.67 A more radical solution, obviously, would be to introduce a non-rebuttable presumption of causation. However, this can hardly ever be justified: even if the AI system is known or at least presumed to cause substantial harm (leaving aside the question as to whether it could nevertheless be legally operated), there is no need for a non-rebuttable presumption of causation, as the defendant will often not be able to rebut it anyhow (and if she does, this should be permissible, since no-one should be liable for something that did not contribute to causing harm). This also depends upon the range of available defences, of course – the more alternative causes such as third-party conduct or contributory negligence by the victim are valid arguments against liability, the more likely it is that the defendant will have an at least residual chance of rebutting the presumption of causation. From a victim’s perspective, the need for reversing the burden of proving causation is higher in those jurisdictions where the standard of proof is close to certainty, but not so high in jurisdictions satisfied with a mere preponderance of the

65 I Giesen (fn 37) 52. Cf the Irish case Hanrahan v Merck, Sharp and Dohme (Ireland) Ltd, [1988] ILRM 629: ‘The rationale behind the shifting of the onus of proof to the defendant in such cases would appear to lie in the fact that it would be palpably unfair to require a plaintiff to prove something which is beyond his reach and which is peculiarly within the range of the defendant’s capacity of proof.’ (no 20). 66 See KF 25 f of the NTF Report (fn 6). 67 Again, there may be variations imaginable, such as requiring the claimant to prove a mere abstract likelihood of causation, triggering a presumption only rebuttable if the defendant convinces the court with ‘utmost certainty’ (or any higher standard of proof) that in fact another cause triggered the loss.

evidence.68 This has to be taken into account when considering the imposition of a uniform reversal of the burden of proof irrespective of the applicable standard of proof.69 One also has to bear in mind that shifting the burden of proving causation, particularly in a strict liability regime where the remaining requirements of liability are easier to establish, effectively reverses the outcome of the litigation in cases where the true cause cannot be identified with the necessary degree of certainty, from a full victory for the defendant (zero liability) to a full victory for the claimant (full liability of the defendant). After all, any rule allocating the burden of proof determines the outcome of the case in a non liquet situation, ie if factual uncertainties cannot be resolved.70 This is particularly questionable in cases of causal uncertainty, as shall be addressed now.

4. Causal uncertainty

Depending upon the applicable liability regime and the respective standard of proof, it is more or less challenging for a claimant to convince the courts in the various Member States that an AI system caused her loss. Even if a particular jurisdiction foresees strict liability triggered by the operation of the AI system, with the operation as such being comparatively easy to prove, there may still be other potential causes that may at least also have contributed to causing harm, either alternatively or cumulatively, be they causes attributable to another or risks within the claimant’s own sphere. Collisions of AI vehicles are the most evident example, with perhaps more than the obvious possible causes involved: after all, the crash may have occurred due to problems of one (or both) of the vehicles themselves, but further external influences may also have played a role, such as the communication network, the road infrastructure, or further participants in traffic such as pedestrians whom one or both cars tried to avoid running into, to name just a few. While a pedestrian hit by a drone may seem to face a fairly low barrier when it comes to proving causation at first sight, the drone operator may introduce evidence that some third party had interfered with the flight, for example. While it is

68 V Ulfbeck/M-L Holle (fn 28) 31 f: ‘A legal system which proceeds from the requirement of certainty … may find it necessary to look for ways of lowering the burden of proof for instance through the operation of presumptions and a reversal of the burden of proof. … A legal system which sets a lower bar in relation to the standard of proof may not have the same need to operate rules on presumptions and the reversal of the burden of proof.’ 69 Supra B.II.1(a). 70 I Giesen (fn 37) 53.

not possible to describe all possible variations and combinations in this context, just a few examples shall illustrate that outcomes may vary throughout Europe due to that very complication of the case alone. If more than one event may have caused the claimant’s loss, each of which would have sufficed to trigger the harm independently and without the respective others’ influence, but only one of them in fact did, the court faces the problem of so-called alternative causation. Just think of two external sources that both provide certain specific information needed for the operation of an AI system. Both transmit flawed data, and the AI system subsequently causes harm to another, but it remains unclear from which of the two sources the AI system did in fact receive the wrong data, as it can no longer be resolved whether it was logged into the one or the other source at the time. This problem is resolved surprisingly differently by European jurisdictions. The majority solution is to hold all potential tortfeasors jointly and severally liable for the claimant’s loss,71 which means that if the claimant sues only one of them, the latter has to indemnify the former in full, but subsequently may seek recourse from all others for their respective share. However, some jurisdictions, such as Denmark, France, Italy, or Spain, require that all such merely potential tortfeasors had also acted jointly, with the harm ensuing from such concerted conduct (even though such joint action is sometimes merely presumed in order to overcome problems of individual actors).72 Yet other jurisdictions, such as Belgium, rather circumvent the problem by striving for a cause essentielle.73 In countries strictly adhering to a preponderance of the evidence standard, the claimant may lose the case, however, as neither of the alternative causes will exceed the 50 % threshold.74 Therefore, if the claimant cannot convince the court that it was the AI system attributable to the particular defendant that was the sole (or at least predominant) cause of her loss in light of at least one further possible explanation not linked to the defendant, the outcome of the case will vary throughout Europe, even though in cases where the possible causes are alike (such as two or more AI  

71 This is also true in Ireland, for example, by way of a specific legislative provision: sec 11(3) of the Civil Liability Act 1961. 72 See the comparative overview in I Gilead/M Green/BA Koch, General Report: Causal Uncertainty and Proportional Liability: Analytical and Comparative Report, in Gilead/Green/Koch, Proportional Liability (fn 29) 1 (25 f) and the contributions to Category 6a of Digest I (fn 53, 353 ff). In France, apart from the action commune, courts may also presume a garde collective in case one or more objects that would trigger liability under art 1242 al 1 Code civil were involved; see O Moréteau/C Pellerin-Rugliano in Digest I (fn 53) 6a/6 no 5 ff. 73 I Durant in Digest I (fn 53) 6a/7 no 3 f, 7 ff. 74 For Sweden, see B Bengtsson/H Andersson in Digest I (fn 53) 6a/17 no 1 ff.  

systems), joint and several liability of all operators may eventually be the majority outcome. The landscape is even more diverse if one of the potential causes of the victim’s damage is within her own sphere,75 including the so-called risks of life such as predispositions or existing illnesses that may have deteriorated even without external influence.76 Those jurisdictions which have adopted the French concept of the loss of a chance (perte d’une chance)77 will typically use this approach to address such cases: if a medical AI system misdiagnoses a patient, for example, whose cancer is therefore treated too late, this theory does not focus on the consequences of the illness as a whole, but instead on the likelihood that the cancer could have been cured if diagnosed in time. The patient’s compensable harm is therefore not a percentage of her full loss, but a newly defined distinct damage – the lost chance of recovering from cancer – which as such will be indemnified in full. An alternative way of looking at the same problem would be to regard it as a problem of causation instead, arguing that the misdiagnosis did not cause the deterioration of the patient’s health alone, but only in part. Seen from that angle, the patient would not receive full compensation for her lost chance as in France, but partial compensation of her full actual loss, both typically calculated according to the same probabilities, though.78 In Austria, the latter (causation) view prevails, leading to proportional liability of the defendant.79 Other jurisdictions adhere to the all-or-nothing solution resulting from a strict application of the respective standard of proof, either indemnifying the claimant in full80 or leaving

75 See the comparative overview by H Koziol in Digest I (fn 53) 6b/29 and 10/29 (the latter on the loss of a chance theory). 76 This is not to be confused with cases of contributory negligence which require that the victim already succeeded in convincing the court of an (at least partial) contribution of the defendant to her harm, whereas here it remains unclear whether the true cause of the loss originated from the claimant’s or from the defendant’s side, as it could have been either the one or the other alone. 77 This includes eg (at least in some scenarios such as medical liability cases) Belgium, Italy, Poland, Portugal, Spain, in part also in the Netherlands. 78 See I Gilead/M Green/BA Koch (fn 72) 39 ff, particularly at 40: ‘The “lost chance” doctrine is often identified with cases where D’s negligence reduces the probability that P will avoid harm caused in the natural course of events, such as illness. One might denote the actual harm “primary” harm, and the “lost chance” a “derivative” harm. The difference between the two conceptions is that the “lost chance” doctrine asks whether P should be fully compensated for the derivative harm, while the “straightforward” CPL doctrine asks whether P should be partially compensated for the primary harm.’ 79 See, eg, BA Koch, Proportional liability for causal uncertainty, in M Martín-Casals/D Papayannis (eds), Uncertain Causation in Tort Law (2016) 67. 80 At least in cases where the claimant incurred personal injuries, courts in Ireland seem to be sympathetic to the victims and appear more likely to identify the potential cause attributable to  


her uncompensated entirely if she fails to convince the court that the defendant’s activity was the cause of her loss with at least the percentage required by procedural law.81 In such cases of causal uncertainty – presumably not a rare scenario in AI cases due to the specificities of the technology82 – the outcomes of the cases will therefore vary substantially throughout Europe, again primarily influenced by the applicable standard of proof. The lower that standard is, the more likely courts will be prepared to just barely tip the scale above the threshold, with the – not necessarily satisfactory – result that the outcome will be reversed entirely: the claimant is awarded 100 % of her loss instead of zero (or vice versa). The harshness of this all-or-nothing result is increasingly criticised in academia, promoting a shift towards proportional liability instead.83
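A minimal numerical sketch may illustrate how the loss-of-a-chance and proportional-causation perspectives described above typically converge on the same amount; the figures are purely hypothetical and not drawn from any reported case. Assume the patient’s full loss amounts to €200,000 and that a timely diagnosis would have offered a 40 % chance of a cure:

\[
\textit{perte d'une chance:}\quad 0.4 \times 200{,}000\ \text{\euro} = 80{,}000\ \text{\euro}\ \text{(the lost chance as a distinct damage, compensated in full)}
\]
\[
\textit{proportional causation:}\quad 0.4 \times 200{,}000\ \text{\euro} = 80{,}000\ \text{\euro}\ \text{(40 \% of the full actual loss)}
\]

On these assumptions, both routes lead to the same award; they differ only in how the compensable damage is defined, not in the probabilities used to quantify it.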

III. Fault liability

1. Overview

All European legal systems recognise (tortious) fault-based liability. The basis for attributing liability is some alleged misconduct by the damaging party. Fault-based liability not only serves to ensure that the victim of reproached conduct is indemnified ex post, but it is also an instrument guiding conduct to prevent potential damage ex ante.84 Not only is fault liability common to all European legal systems, it is also the default and therefore backup liability regime should there be no alternative in

the defendant as slightly more probable, thereby tipping the scale of the preponderance of the evidence, resulting in full liability of the defendant; cf E Quill in Digest I (fn 53) 6b/14 no 7. This appears to be the solution of choice also in Sweden; cf the cases presented by B Bengtsson/H Andersson in Digest I (fn 53) 6b/17 no 1 ff. 81 This is the case, eg, in the Czech Republic, Germany or Greece. 82 See NTF Report (fn 6) 32 f. 83 See Gilead/Green/Koch, Proportional Liability (fn 29) passim. Cf also the draft of the new Belgian Civil Code (), which promotes proportional liability in cases of alternative causation in art 5.169 (and expressly providing for the causation theory of the perte d’une chance in proposed art 5.168, again leading to proportional liability). 84 See H Koziol, Basic Questions of Tort Law from a Germanic Perspective (2012) no 3/4 ff; U Magnus, Comparative Report on the Law of Damages, in U Magnus (ed), Unification of Tort Law: Damages (2001) 185 f; E Karner, Prevention, Deterrence, Punishment, in H Koziol (ed), The Aims of Tort Law – Chinese and European Perspectives (2017) 53 f.  










place, therefore in the absence, eg, of some strict liability regime. This means that any given case where someone incurs a loss is already covered by the tort laws of all European jurisdictions, including – but not limited to – cases involving AI systems or other novel fact scenarios, so strictly speaking, there is no lacuna in the lex lata. However, due to the differences between these jurisdictions when it comes to the details of fault liability, the outcome of such cases will not necessarily be the same. If that outcome means no compensation for those damaged, this may not always seem just, fair and reasonable in light of all the circumstances and competing interests, thereby evidencing gaps in the protection of those that deserve it. For that reason, the dissatisfaction with the outcome of applying traditional fault liability in certain groups of cases has already triggered the development of strict liabilities or at least of variations of the classic fault regime in the past.

2. The varieties of fault liability in Europe

(a) Differences regarding the recognition of wrongfulness as a separate element

Conceptually, and in accordance with the various legal traditions, fault-based liabilities are framed very differently across Europe.85 In the Germanic legal systems, a distinction is drawn between wrongfulness and fault.86 The same is also true, for example, in Poland.87 The Romance systems, by contrast, predominantly start from a unitary concept: in France88 and Belgium, for example, a singular notion of faute is employed.89 In Italy, on the other hand, a stronger distinction is drawn between wrongful damage (danno ingiusto) and fault (culpa or dolo, art 2043 Codice civile, CC).90

85 See comprehensively B Winiger/E Karner/K Oliphant (eds), Digest of European Tort Law, Volume 3: Essential Cases on Misconduct (2018, in the following: Digest III), with reports from 28 European jurisdictions and comparative reports building from them; see further H Koziol (ed), Unification of Tort Law: Wrongfulness (1998); P Widmer (ed), Unification of Tort Law: Fault (2005). 86 Germany: § 823 BGB; Austria: § 1294 ABGB; Switzerland: art 41 Obligationenrecht (Code of Obligations, OR). 87 This is the dominant view in the case law and literature, although the legislature originally intended to follow the unitary concept of French law, see E Bagińska/I Adrych-Brzezińska, Poland, in Digest III (fn 85) 75 f (1/23 no 2). 88 Art 1240 [formerly art 1382] Code civil. 89 See J-S Borghetti/M Séjean, France, in Digest III (fn 85) 34 f (1/6 no 6) and B Dubuisson/ I Durant/T Malengreau, Belgium, in Digest III (fn 85) 36 f (1/7 no 3 ff). 90 See F Busnelli/G Comandé/M Gagliardi, Fault under Italian Law, in Widmer (ed), Fault (fn 85) 155 f; N Coggiola/B Gardella Tedeschi/M Graziadei, Italy, in Digest III (fn 85) 40 (1/9 no 1 ff).  




Then again, the Scandinavian systems adhere to a unitary conceptual understanding instead, where the culpa rule is applied (Denmark, Finland, Norway, Sweden).91 A distinction between wrongfulness and fault is also generally foreign to the common law world, where the most common basis of liability is negligence, which is triggered by the breach of some duty of care recognised by law. However, there are also other torts with a more limited scope, such as those requiring some intentional harmful conduct.92 When comparing tort laws in more detail, these differences are of course even more significant. Thus, in relation to wrongfulness (where recognised separately), a distinction has to be drawn between result-based wrongfulness (Erfolgsunrecht, as in Germany in the case of direct interferences with absolute rights,93 or in Switzerland94), where the harmful outcome indicates that the conduct causing it was wrongful, and conduct-centred wrongfulness (Verhaltensunrecht, as in Austria95), where the focus instead lies on the conduct leading to the harm rather than on that result in itself.96

(b) Differences regarding the benchmark for assessing the harmful conduct

Irrespective of whether wrongfulness is recognised as a distinct category and separate element, all jurisdictions require some misconduct of the tortfeasor as a prerequisite of fault liability. The benchmark for identifying which conduct a legal system disapproves of is set by the legal system as a whole, and it is for the court to decide in light of the circumstances and the competing interests of the parties whether the actual behaviour of the defendant at stake was flawed. This assessment, often by way of comparison to what a ‘reasonable person’ would have done under the circumstances at stake, is naturally made only after damage has already been sustained, which often makes the outcome of a case involving conduct that has not yet been evaluated unpredictable. This ex post assessment unavoidably also fosters a tendency to expand the duties of care.97

91 See V Ulfbeck/A Ehlers/K Siig, Denmark, in Digest III (fn 85) 58 (1/16 no 1, 4); AM Frøseth/B Askeland, Norway, in Digest III (fn 85) 60 (1/17 no 1 f); H Andersson, Sweden, in Digest III (fn 85) 63 (1/18 no 1). 92 See K Oliphant/V Wilcox, England and Wales, in Digest III (fn 85) 48 f (1/12 no 1 ff). 93 U Magnus, Germany, in Digest III (fn 85) 112 f, 114 (2/2 no 3, 7). 94 B Winiger/A Campi/C Duret/J Retamozo, Switzerland, in Digest III (fn 85) 119, 120 (2/4 no 15, 22). 95 E Karner, Austria, in Digest III (fn 85) 115 f (2/3 no 3). 96 See E Karner/A Bell, Comparative Report, in Digest III (fn 85) 193 f, 195 ff (2/31 no 1 f, 4 ff). 97 Cf P Widmer, Comparative Report on Fault as a Basis of Liability and Criterion of Imputation (Attribution), in Widmer (ed), Fault (fn 85) 331 (349): ‘This apparent differentiation is however not able to veil the fact that, in practice, courts (following more or less consciously the rule “post hoc,  












This process of assessing the defendant’s conduct is easier for the court (and at the same time more predictable) if the legal system foresees specific rules of conduct that were evidently breached in the case at hand.98 If these rules were at least in part introduced to prevent harm, they may be deemed so-called protective statutes (Schutzgesetze).99 This is true, for example, in many jurisdictions in the area of traffic regulations: traffic law sometimes foresees very specific rules of conduct, such as speed limits or the like, infringements of which are easy to identify for the court. This notably reduces the difficulties facing judges in establishing the relevant liability threshold (within the framework of balancing the applicable interests) that exists in the absence of protective norms. The more concretely a legal system prescribes (or forbids) certain conduct, the easier it is for someone harmed by another to demonstrate that the latter did not meet these expectations. Administrative law therefore often has quite a decisive (and often underestimated) influence on tort law.

(c) Differences regarding the applicable fault standard

Almost all European legal systems apply an objective standard when assessing fault.100 This is necessarily true for all those jurisdictions which do not distinguish between wrongfulness and fault, but also for Germany, the Czech Republic, Croatia, Denmark, Lithuania and Poland.101 Austria, on the other hand, proceeds from a generally subjective standard, allotting more relevance to the individual knowledge and capabilities of the damager.102 Nevertheless, even in Austria, the notion

ergo propter hoc“) have a clear tendency to strengthen retrospectively the requirements of “due care”, precisely in such a way that, would they have been met, the damage would not have occurred (“After the event, even a fool is wise”).’ 98 There are also select provisions that expressly declare certain conduct as permissible; cf, eg, art 2 para 2 of the Irish Animals Act 1985: ‘Where damage is caused by an animal straying from unfenced land on to a public road, a person who placed the animal on the land shall not be regarded as having committed a breach of the duty to take care by reason only of placing it there if (i) the land is situated in an area where fencing is not customary, and (ii) he had a right to place the animal on that land.’ 99 On the relevance of statutory norms for establishing misconduct to trigger liability, see Karner/Bell (fn 96) 698 (4/31 no 2). 100 See Karner/Bell (fn 96) 197 (2/21 no 7); Widmer (fn 97) 347 ff; H Koziol, Comparative Conclusions, in H Koziol (ed), Basic Questions of Tort Law from a Comparative Perspective (2015) no 8/229 ff. 101 On Poland, see K Ludwichowska-Redo, Basic Questions of Tort law from a Polish Perspective, in H Koziol (ed), Basic Questions of Tort Law from a Comparative Perspective (2015) no 3/118; E Bagińska/I Adrych-Brzezińska, Poland, in Digest III (fn 85) 76 (1/23 no 5). 102 Karner (fn 95) 26 (1/3 no 5).  




of fault is not as subjective after all: first, there is a rebuttable presumption in the Austrian Civil Code that everybody can live up to an average standard (§ 1297 Allgemeines Bürgerliches Gesetzbuch, ABGB)103, and second, any person undertaking an activity that requires particular expertise – including, for example, driving a car104 – thereby indicates to the outside world that she is capable of mastering such special skills, which is why an objective fault standard is applied to such ‘experts’ (Sachverständige, § 1299 ABGB).105

(d) Differences regarding the burden of proving fault106

Manifold differences also exist in relation to the question of who must prove misconduct (fault).107 Before looking at various jurisdictions in comparison, it is important to bear in mind upfront that significant differences exist not only in relation to the allocation of the burden of proving fault, but also as regards the relevant standard of proof108 by which fault must be proven, similar to what has been said above with regard to causation.109 This adds an additional layer of complexity to the overview in the following.

(1) Jurisdictions where the burden of proving fault primarily lies on the claimant

Under the general burden of proof rule, it is, in principle, for the injured party to establish the wrongdoer’s fault (eg in Austria, Belgium, France, Germany, the

103 § 1297 ABGB reads: ‘It is also presumed that every person of sound mind is capable of exerting the degree of diligence and care that can be applied by a normally competent person. Whosoever fails to attain this degree of diligence and care in the course of acts causing prejudice to another person’s rights is guilty of negligence.’ 104 Karner (fn 95) 766 (6/3 no 4). 105 Karner (fn 95) 26 (1/3 no 5), 765 f (6/3 no 3 f). § 1299 ABGB reads: ‘Whosoever publicly professes an office, art, business or craft, or voluntarily and without necessity undertakes a task the execution of which requires particular expertise or extraordinary diligence, thereby holds himself out as capable of the necessary diligence and as having the extraordinary expertise required; he is therefore liable for their absence. If, however, the person assigning the task to him knew about his inexperience, or ought to have known about it had he exercised proper care, the latter is also negligent.’ On the special role of experts in the fault analysis, see Widmer (fn 97) 349. 106 See also infra B.III.3(b) on the consequences for AI systems. 107 See also I Giesen (fn 37); Karner (fn 28) 68. 108 See supra B.II.1(a). 109 Whoever is charged with proving fault must convince the court of facts that allow the court to conclude that the conduct was faulty (or not), since this is a legal rather than a factual assessment.


Netherlands, Portugal, and Sweden).110 In various practically relevant constellations, however, the burden of proof can be shifted to the damaging party under judicially developed or codified rules (reversal of the burden of proving fault; liability for presumed fault), for example if a protective statute (Schutzgesetz) was breached (eg in Germany, Austria, or Portugal).111 Furthermore, a reversal of the burden of proving fault (failure to exercise objective care) is often foreseen for cases involving so-called ‘sources of increased danger’.112
– A historic example thereof that can still be found in many European jurisdictions is the tightened liability of the owner of a structure: if it collapses, specific statutory provisions rebuttably presume that the owner failed to exercise all due care to avert that danger.113
– The same is true, eg, in Germany or Greece for domesticated animals (§ 833 2nd sentence BGB, art 924 para 2 Greek Civil Code),114 and in Austria (§ 1320 ABGB) or Poland (art 431 Kodeks cywilny, KC) for animals in general: the keeper of such animals115 is liable for damage they caused if she cannot prove that she provided for the necessary safe-keeping and supervision.116

110 See the corresponding answers to question 11 in Widmer (ed), Fault (fn 85): 21 (Austria), 45 (Belgium), 97 (France), 114 (Germany), 174 (Netherlands), 195 (Portugal), 273 (Sweden). 111 For Germany, see U Magnus/G Seher, Fault under German Law, in Widmer (ed), Fault (fn 85) 115; G Wagner in Münchener Kommentar zum BGB VII (8th ed 2020) § 823 no 617; for Austria Karner (fn 39) § 1298 no 4 with further references; for Portugal J Monteiro/MM Veloso, Fault under Portuguese Law, in Widmer (ed), Fault (fn 85) 184 (no 29). 112 Karner (fn 28) 76 ff; on further legal systems, see the reports in Widmer (ed), Fault (fn 85): J Monteiro/MM Veloso (fn 111) no 116 ff (buildings and animals); M Martín-Casals/J Solé Feliu, Spain, no 12 f (risks in general); WH van Boom, the Netherlands no 22 (traffic accidents). 113 Cf, eg, the respective provisions in Austria (§ 1319 ABGB), Germany (§ 836 BGB), Greece (art 925 Greek Civil Code), or Portugal (art 492 Código civil). 114 The rule covers any ‘domesticated animal that is intended to support the keeper’s profession, work or subsistence’. Other animals are subject to strict liability: § 833 1st sentence BGB, art 924 para 1 Greek Civil Code. 115 According to art 493 para 1 of the Portuguese Código civil, such a presumption applies to the person charged with supervising an animal (encargo da vigilância). The keeper of an animal (the person who uses it in her own interest) is strictly liable if its dangerousness materialises (art 502 Código civil). 116 Here too, however, it is evident – as with construction sites – that the relevant danger of a particular thing or animal is assessed very differently in the various European jurisdictions. For this reason, diverging levels of strictness also apply to liability for animals across Europe. See BA Koch/H Koziol, Comparative Conclusions, in BA Koch/H Koziol (eds), Unification of Tort Law: Strict Liability (2002) 395 (396 ff).  






Most jurisdictions, however, foresee full-fledged strict liability for at least some groups of animals.117 Some countries such as Italy (art 2050 CC)118 or Portugal (art 493 para 2 Código civil)119 reverse the burden of proving fault generally if harm resulted from some ‘dangerous activity’. Jurisdictions which have a strict liability regime in place for motor vehicles may still allow concurrent claims based on the fault of the driver,120 which then may be presumed, as eg in Germany (§ 18 para 1 StVG).

Such a reversal of the burden of proving fault leads to a distinct tightening of liability, because the defendant may be held liable for merely presumed negligence in non liquet situations. Improving the position of the claimant in such cases is nevertheless deemed justified because the increased dangerousness of a defective building or of an animal, for example, constitutes an additional reason for liability altogether. While the jurisdictions cited do not deem the increased dangerousness in such cases grave enough to justify strict liability in its pure form, they at least decided to tighten liability moderately by reversing the burden of proving fault.121

117 Strict liability for all animals is imposed, eg, by art 1243 of the French Code civil or art 1385 of the Belgian Code civil; art 6:179 of the Dutch Burgerlijk Wetboek (BW); art 2052 of the Italian Codice civile (CC); art 502 of the Portuguese Código civil; art 1905 of the Spanish Código civil; § 1060 of the Estonian Law of Obligations (fn 501); art 6.267 of the Lithuanian Civil Code. See also the German and Greek provisions quoted supra in fn 114 (animals not for professional use); sec 2 para 1 of the Irish Animals Act 1985 (strict liability for dog attacks). Furthermore, under the Irish scienter rule, the owner of an animal whose vicious propensity she is aware of is liable, which is irrebuttably presumed for wild animals. Cf also the proposal of strict liability for both animals and constructions in the DCFR (art VI-3:202 f). 118 See F Busnelli/G Comandé, Italy, in Koch/Koziol (eds), Strict Liability (fn 116) 212 f (no 43 ff). 119 See J Monteiro/MM Veloso (fn 111) 196 f (no 124 ff). 120 Infra B.V.2(f). 121 However, it should again be noted that in the various European jurisdictions the relevant dangerousness of a particular thing or animal is assessed very differently. For this reason, for example, liability for animals or construction sites also takes various more and less strict forms; see above fn 114 f.  






(2) Jurisdictions where the burden of proving fault is generally shifted upon the defendant

By contrast, a general reversal of the burden of proving fault, always requiring the defendant to show that she was not at fault, applies, eg, in Bulgaria,122 Croatia,123 the Czech Republic,124 Estonia,125 Hungary,126 and seemingly also in Romania.127

(3) Jurisdictions where the burden of proving fault is placed ad hoc

Other countries operate more on a case-by-case basis: in Finland, for example, courts are prepared to shift the burden of proving fault in cases of extraordinary danger (such as the sudden release of sulphur-containing soot from a thermal power station or a leakage from an underground petrol tank), even though there is no general statutory backing for such deviations from the fault principle.128

(e) Differences regarding other deviations from the standard rules of fault liability

In numerous particular fields, special rules apply which serve to modify the general fault liability rules. This obviously includes provisions altering the burden of proving fault, as just mentioned. Furthermore, in Austria, for example, fault liability for traffic accidents is effectively made stricter compared to the general rules by expanding liability for auxiliaries, which is otherwise very limited in the law of delict (§ 19 para 2 Railway and Motor Vehicle Liability Act, Eisenbahn- und Kraftfahrzeughaftpflichtgesetz, EKHG).129

122 Art 45 Law on Obligations and Contracts. General Part. II. Sources of Obligation. 4. Tort. 123 Art 1045 Civil Obligations Act: Provisions on Non-Contractual Liability. 124 §§ 2911 ff of the Czech Civil Code: Presumption of Negligence; see J Hradek, Regulation of Liability for Damage in the New Czech Civil Code, ELTE Law Journal 2/2014, 229. 125 § 1050 Law of Obligations Act (LOA, Võlaõigusseadus), RT I 2001, 81, 487 as amended; English translation at . 126 § 6:519 Ptk: ‘Whosoever unlawfully causes damage to another person shall be liable for such damage. He shall be relieved of liability if he proves that his act was not wrongful’. As such, this is a fault-based liability rule with presumed fault; see A Fuglinszky, Risks and Side Effects: Five Questions on the ‘New’ Hungarian Tort Law, ELTE Law Journal 2/2014, 200. 127 See M Józon, Romania, in Digest III (fn 85) 93 (1/28 no 2 f). Unlike the prior legal position, the new Civil Code in force since 2011 no longer explicitly provides for such a reversal of the burden of proof, though, leading to uncertainty on this point. 128 See, eg, the cases cited by B Sandvik, Economic Loss Caused by GMOs in Finland, in BA Koch (ed), Economic Loss Caused by Genetically Modified Organisms (2008) 183 (199). 129 Cf infra B.IV.2(b).


The relevance of fault liability in practice may also be impacted by modifying the general age limit of tortious capacity under specific circumstances. Such is the case in Germany, for example, where the standard age limit of seven years is raised to ten years in road traffic accidents.130

(f) Differences regarding the impact of fault on the extent of compensation131

While statutory strict liability regimes often limit liability, eg by reducing the range of compensable harm or by capping damages,132 fault-based liability in tort generally leads to compensation for both personal injury and property damage. In this regard, the principle of full compensation governs tort laws throughout Europe.133 Nevertheless, significant peculiarities exist when it comes to the various recognised heads of damages resulting from such primary losses, and even more so do we see dramatic differences regarding the amounts awarded in compensation of such harm.134 However, some jurisdictions also make the extent of compensation in the law of delict dependent upon whether fault or strict liability applies, and some even differentiate within the various degrees of fault. As a general rule, for example, (only) victims of intentional wrongdoing will face the least restrictions regarding what is compensated.135 Some jurisdictions only indemnify secondary victims such as relatives of the primary victim for the former’s own immaterial harm if the latter was injured or killed by the fault of the defendant (and in Austria, it is even limited to cases of at least gross negligence). Austria also distinguishes between slight and gross negligence when it comes to indemnifying property losses: lost profit is only compensated in the latter case, but not if the tortfeasor acted with a minor degree of fault.136

130 § 828 BGB reads: ‘(1) Persons below the age of seven are not responsible for loss they inflict on other persons. (2) Persons who have reached the age of seven, but have not yet reached the age of ten, are not liable for loss they inflict on other persons in an accident involving a motor vehicle, a railway or a cableway. This does not apply where the loss was inflicted intentionally. …’ See also infra C.I.3(e) on the (mirror-image) relevance of this provision when assessing the contributory conduct of a child victim of a traffic accident. 131 See also Widmer (fn 97) 353 ff. 132 See below B.V.2(e). 133 Koziol (fn 84) no 8/1 ff. This is otherwise in Austrian law, where §§ 1323, 1324 ABGB stipulate a ‘structured damage concept’. Where harm is caused through slight negligence, only actual damage is compensated, whereas compensation for lost profits requires proof of gross negligence. 134 See already supra at fn 22. 135 Cf, eg, § 1295 para 2, § 1331 ABGB, § 826 BGB. 136 § 1324 ABGB reads: ‘If damage has been caused through malicious intent or conspicuous negligence, the person harmed is entitled to claim full satisfaction; otherwise, he can only claim  




(g) Interim result

By their nature, all of these national peculiarities of tort laws can lead to divergent results in individual cases. Such differences cannot, however, be attributed exclusively to merely conceptual variations. We must also bear in mind that judges, when deciding whether conduct was flawed and therefore triggers liability, often resort to value judgements and to balancing the interests involved. This is co-determined by the relevant tort law tradition,137 and the interests of the wrongdoer and of the injured party are often weighted very differently.138 Setting the standard of liability may depend on whether only fault liability is applicable, or if instead a risk-based strict liability applies as well.139 This is particularly evident in those legal systems which even for (conventional) motor vehicles do not recognise risk-based liability: there, the notion of fault applied in practice comes very close to the strict liability applied elsewhere, since the duties of care to be expected from the driver are set so high that they are difficult to fulfil and almost invariably breached.140 There is a general tendency especially in those jurisdictions with only a limited catalogue of singular strict liability rules to circumvent such limitations by artificially raising the bar of the duty of care in fault liability, thereby increasing the chances of victims succeeding.141

3. The application of fault-based liability to AI systems

In the following, the above-mentioned diversities are applied to accidents triggered by AI systems. However, it should be stated at the outset that these deliberations are not based on any practical experience in the jurisdictions covered, as

actual indemnification. …’ ‘Full satisfaction’ is defined by § 1323 ABGB as compensation that ‘includes lost profit and reparation of the offence suffered’. 137 See K Oliphant (ed), Special Issue: Cultures of Tort Law in Europe, Journal of European Tort Law (JETL) 2012, 147, with contributions from France, Germany, Scandinavia and the United Kingdom. 138 Cf Oliphant, JETL 2012, 156. 139 On the issue of concurrence between fault-based and strict liability, see below B.V.2(f). 140 See the classic English case Nettleship v Weston [1971] 2 QB 691; cf also Latham LJ in Lunt v Khelifa [2002] EWCA Civ 801 at para 20: the Court of Appeal ‘has consistently imposed on the drivers of cars a high burden to reflect the fact that a car is a potentially dangerous weapon.’ See also C van Dam, European Tort Law2 (2013) 413 f. 141 ‘Thus, the liability still falls under the umbrella of fault-based liability but in effect constitutes non-fault-based liability, as the duties of care are stretched to such a degree that they could no longer be fulfilled by any average or even especially careful subject of the law.’ Koziol (fn 16) no 6/145. For Germany, see, eg, Wagner (fn 111) Vor § 823 no 27 f.  




there simply is none yet. Whether or not and to what extent jurisdictions will apply existing tort law rules to such cases or deviate therefrom is a matter of speculation at this point, albeit speculation based on an educated guess.

(a) Conduct requirements for the deployment of AI systems

The rules of fault-based liability presented above – as divergent as they may be in detail – are also applicable to autonomous and automated systems, unless a jurisdiction chose to introduce an exclusive strict liability regime instead.142 With (partially) autonomous motor vehicles, for example, we might thus imagine cases involving the defective maintenance or servicing of the vehicle (eg failure to install software updates, insufficient cleaning of the sensors, or the like), or using automated functions at an inappropriate moment, but also human errors in the design or manufacturing process. Corresponding considerations apply, for example, to automated lawnmowers, combine harvesters, and drones.143 Increasing automation in this context typically leads to adjustments of the standard of care, a fact which can be demonstrated with the example of (partially) autonomous cars.144 If individual driving tasks are shifted from the driver to the vehicle itself, the former’s duties of active manual control are replaced by mere monitoring obligations. Reference can be made in this context to the 2017 reform of the German Road Traffic Act (StVG), which explicitly provides that the user of a car with automated driving functions is allowed to ‘turn away’ from the road and from steering the vehicle as long as she remains ‘perceptive enough’ to identify a need to retake control of the steering functions at any given time (§ 1b StVG). However, given the rather indeterminate character of this provision, this rule has received quite some criticism from practitioners and academics alike.145 It is therefore not entirely predictable to what extent a driver must remain in ‘standby’ alert, and how deviations therefrom will be sanctioned in tort law.146

142 See also below C.I. 143 See the use cases infra C. 144 See J Pehm, Systeme der Unfallhaftung beim automatisierten Verkehr, Zeitschrift für Internationales Wirtschaftsrecht (IWRZ) 2018, 262 f. 145 See J-E Schirmer, Augen auf beim automatisierten Fahren! Die StVG-Novelle ist ein Montagsstück, Neue Zeitschrift für Verkehrsrecht (NZV) 2017, 254 ff; U Magnus, Autonomously driving cars and the law in Germany, Wiadomości Ubezpieczeniowe 4/2019, 13 (); W Wurmnest/M Gömann, Germany, in E Karner/BC Steininger (eds), European Tort Law 2017 (2018) 212 f. 146 Cf also § 1f of the proposed further amendments to the StVG (supra fn 50), foreseeing duties for the keeper of a fully autonomous vehicle such as ‘regular maintenance of the systems required for the autonomous driving functions’.  






It is also important to consider that increasing automation in autonomous systems will at the same time lead to an expansion of monitoring functions assumed by that system itself, often reducing possibilities for individual (human) apprehension and intervention. This, too, will impact upon the extent of monitoring duties imposed upon the human user or operator.147 The design of these systems, in particular how users are meant to interact with them, will have a significant impact on their duties of care and thus on the effectiveness of fault-based liability.148 It may therefore generally be said that the relevant standards of conduct correlate to the state of development of the automation. The more autonomous the system is, the lower the requirements on human users to take care, at least as a general rule of thumb.149 This does not resolve the uncertainties mentioned above, though. It is hardly possible at present to predict which concrete duties European courts will impose ex post (!) upon users of autonomous systems should these be involved in an accident. Considering further that the national regimes governing the use of such systems will also differ, it is to be expected that the outcome of such cases will vary despite comparable fact settings. Even if the legislator should not intervene, establishing ad hoc duties of conduct for autonomous systems in court practice is nothing new and not peculiar to such technology, but part of the jurisprudential task of adapting the law to changing social circumstances.150 Such uncertainty, at least initially, cannot be buffered entirely by imposing specific duties through legislation ex ante (if this should be feasible at all), as overburdening the user with such duties may be counter-intuitive for a technology which is marketed on the assumption that the automated system should take over and relieve the user of such duties in the first place.151 Moreover, this would effectively contribute to shifting the liability regime closer towards one of strict liability, but covertly and without disclosing the relevant value judgements made.152 Nevertheless, legislative safety provisions will undoubtedly also impact upon the practice of fault-based liability.

147 See (on the example of autonomous vehicles) Pehm, IWRZ 2018, 262 f. 148 See Pehm, IWRZ 2018, 263 f. 149 See also J Pehm, Haftung für den Betrieb autonomer Fahrzeuge: Eine komparative Sicht auf die Herausforderungen der Automatisierung, in V Zoufalý (ed), Tagungsband XXVI. Karlsbader Juristentage 2018 (2018) 128. 150 This applies to autonomous medical systems correspondingly, see Karner (fn 45) 59 ff. 151 Pehm, IWRZ 2018, 262 f. 152 See Pehm, IWRZ 2018, 263 and, generally, Widmer (fn 97) 357 f. This is evident, for example, in English law, where (conventional) vehicles are only governed by fault liability, but one which is strictly applied; see already above at fn 242.  










The NTF Report therefore proposes that non-compliance with such ‘safety rules’ should lead to a reversal of the burden of proving causation, fault, and/or the presence of a product defect.153 Insofar as these provisions fall under the concept of protective statutes, in relation to the burden of proving fault (and in part also in relation to causation)154 this proposal corresponds to the prevailing court practice in Germany and Austria.155 While the manifold variations of fault liability regimes throughout Europe will undoubtedly lead to different results also with respect to harm caused by AI systems, adjusting fault liability in this particular field alone may require specific justification, as it is not self-evident why victims of one specific source of harm should be treated more favourably than victims incurring the same harm through another source. Lowering or reversing the burden of proving fault, for example, is a measure that requires careful consideration and thorough cost-benefit analyses in light of alternative scenarios. Someone knocked over by a conventional car will not easily understand without further explanation why a victim who sustained the same injuries from an autonomous vehicle should be better off in tort law if both base their claims on the misconduct of the tortfeasor. This is particularly relevant if a more traditional technology is gradually being superseded by an AI replacement, with an extended period of coexistence of both technologies in the meantime.

(b) The burden of proving misconduct triggering liability in the case of autonomous systems

If harm is allegedly caused by AI systems, establishing misconduct can be more challenging than proving fault in general.156 This is due to the particularities of such systems, especially in light of their complexity, opacity, limited predictability, and openness, as mentioned before.157 One of the peculiar risks of AI is that the behavioural patterns of autonomous systems are often only predictable to a limited extent and hence more difficult to

153 NTF Report (fn 6) 7, 48 f, Key Finding 24: ‘Where the damage is of a kind that safety rules were meant to avoid, failure to comply with such safety rules, including rules on cybersecurity, should lead to a reversal of the burden of proving (a) causation, and/or (b) fault, and/or (c) the existence of a defect.’ 154 For Austria, see eg Karner (fn 39) § 1311 no 6 with further references. 155 See – in relation to fault – for Germany Magnus/Seher (fn 111) 115; Wagner (fn 111) § 823 no 617; for Austria Karner (fn 39) § 1298 no 4 with further references. 156 Cf supra B.III.2(d). 157 See NTF Report (fn 6) 5 (Key Finding 1), 32 f. Cf further – regarding autonomous medical systems – Karner (fn 45) 63 ff.  






control ex ante as compared to conventional technologies. Moreover, the ability to explain the operation of such systems is often also limited from an ex post perspective.158 It may therefore be very difficult to establish whether some conduct of the user (eg the latter’s monitoring or maintenance), which ultimately led to an accident, was, indeed, sufficiently flawed to qualify as fault, as any such human behaviour will be accompanied and followed by built-in processes of the AI system that in turn depend upon a variety of internal and external factors independent of such conduct.159 The reaction of an AI system to human input or commands may be less predictable than with conventional technology, making it harder not only to identify the appropriate standard of care as the applicable benchmark, but also any deviation therefrom. In light of the divergent rules allocating the burden of proving fault in Europe outlined above160 (ranging from placing the burden on the injured party to various forms of relaxation, up to charging the tortfeasor instead with proving the absence of fault),161 we may see different outcomes, coupled with legal uncertainty at least initially, particularly in those jurisdictions where the courts alter the burden of proving fault ad hoc. Modifying the burden of proving fault is often justified by the increased dangerousness of the activity concerned, as already mentioned.162 Historically, discontent with the evidentiary situation in previously unknown accident scenarios was often the starting point for such adjustments of liability for new technologies.163 While some suggest applying this concept merely to isolated sources of risk on a case-by-case analysis,164 others promote expanding it more generally: the Principles of European Tort Law (PETL),165 for example, propose a blanket

158 See further H Zech, Künstliche Intelligenz und Haftungsfragen, Zeitschrift für die gesamte Privatrechtswissenschaft (ZfPW) 2019, 200 ff. 159 See, in particular in relation to product liability in German tort law, G Wagner, Verantwortlichkeit im Zeichen digitaler Technik, VersR 2020, 729. 160 Supra B.III.2(d). 161 See also the overview by I Giesen (fn 37) no 8 ff; Karner (fn 28) 73 ff. 162 Supra B.III.2(d). 163 See the contributions to M Martín-Casals (ed), The Development of Liability in Relation to Technological Change (2010), illustrating the manifold responses of England and Wales, France, Germany, Italy, and Spain, to (at their respective time) new technologies such as railways, steam boilers, or asbestos. 164 Cf the presentation of the state of opinion in Germany in Borges, Rechtliche Rahmenbedingungen für autonome Systeme, NJW 2018, 981 f; contra Wagner, VersR 2020, 730 f. 165 European Group on Tort Law, Principles of European Tort Law. Text and Commentary (2005). The black-letter text of the PETL can also be found (in multiple language versions) at .






clause in art 4:201 para 1: ‘The burden of proving fault may be reversed in light of the gravity of the danger presented by the activity.’ In the view of the European Group on Tort Law (EGTL) who authored the Principles, the danger required for a reversal of the burden of proving fault is one of intermediate intensity, between the ‘normal’ risk, which is inherent to any human activity, and the extraordinary or ‘abnormally’ high risk, which triggers strict liability. In Austria (unlike Germany), such a result could already be sought on the basis of a comprehensive analogy to §§ 1319, 1320 ABGB, which the Austrian Supreme Court may be willing to apply. This regime would extend to those autonomous systems which are not dangerous to a degree that would justify full-fledged strict liability. With regard to the use cases below, this could apply to autonomous lawnmowers, for example.166 However, if one were to promote this concept more broadly on a European level, thereby suggesting a general reversal of the burden of proving fault in cases of increased risk, this might lead to residual uncertainty for the victims depending upon how precisely such ‘increased risk’ were to be defined, at least for those bringing a specific risk to court first. On the other hand, this would allow courts to respond more quickly to novel applications of AI, for example, where a legislative intervention on a technology-by-technology basis will necessarily lag behind. Other arguments promoting a reversal of the burden of proving fault167 are systemic difficulties of proof168 as well as the fact that the party who is in control of critical evidence169 should be charged with the burden of producing it.170 An example thereof is the German practice of so-called producers’ liability, which is a fault-based regime applied alongside the strict liability regime harmonised by the PLD. While it is therefore based on the fault of the producer, practice charges her with proving the absence of fault since all the evidence regarding the manufacturing process is in her hands,171 which may be even more true if such production is not a hands-on procedure with (literally) visible steps, but the development of software code. Once the claimant can prove that she was harmed by a product defect, which may be difficult with respect to digital products, though,172 it is

166 See infra C.II. 167 See also I Giesen (fn 37) 52. 168 Cf supra at fn 41 and 65 with respect to the burden of proving causation. 169 Cf supra at fn 43 and 54 with respect to the burden of proving causation. 170 See eg M Martín-Casals/J Solé Feliu, Spain, in Widmer (ed), Fault (fn 85) 227 (229 ff); See also Karner (fn 28) 72 f with further references. 171 See Wagner (fn 111) § 823 no 923, 1014 ff. 172 Wagner, VersR 2020, 729.  




therefore for the producer to prove that such defect is not the result of fault within her sphere. Finally, as has already been said above,173 it should be recalled that the NTF Report proposed that a failure to comply with legislative ‘safety rules’ should lead to a reversal of the burden of proving causation, fault, and/or the presence of a product defect.174 This may include, for example, explicit statutory requirements to update the firmware or software of AI systems regularly, duties to regularly run some built-in self-test, or the prohibition of tampering with the software or its pre-settings. Insofar as such provisions fall under the concept of a protective statute/norm (a classification typically for the courts to make, though, and therefore not entirely predictable), the proposal is in line with prevailing court practice in Germany and Austria regarding the burden of proving fault (and in part also causation).175

(c) The significance of fault-based liability for AI systems

The significance of fault-based liability for AI systems depends considerably on whether other, potentially more advantageous, claims are also available to the injured party, in particular any strict (risk-based) liabilities and/or the harmonised European no-fault product liability regime (the latter being disregarded in the following, though). This is made especially clear in the context of road traffic. The significance of fault-based liability is extensively reduced in traffic accident cases in those jurisdictions where the injured party can – as in Germany and Austria – rely on a comprehensive, no-fault, risk-based liability for motor vehicles.176 This encompasses motorised and also non-motorised accident victims and provides compensation for both personal injury and property damage. However, it is important to bear in mind that such a comprehensive risk-based liability is not common to all of Europe. Instead, the relevant risk-based provisions may protect only non-motorised actors (eg in the Netherlands177), may be wholly or at least partially excluded in collisions between vehicles (eg in Greece, Poland, and the Netherlands), or foresee compensation for personal injuries only, eg in Spain.178 To the extent these

173 Supra at fn 39 and in B.III.2(d). 174 Supra fn 153. 175 See already above at fn 111. 176 This is only true, however, as long as the overall damage caused remains within the monetary limits on liability foreseen in these systems. 177 Art 185 Wegenverkeerswet (WVW, Road Traffic Act). 178 See infra at fn 343.


strict liability regimes do not apply – especially for motorised actors and property damage – an injured party is therefore again left with a fault-based claim even after a traffic accident.179 Given that risk-based strict liabilities are often regulated by stand-alone statutes, and that an analogous application of such rules to other risks is strictly excluded (with the exception of Austria),180 only fault-based liability will apply to many AI systems that fall outside the immediate scope of any given statutory regime (leaving aside harmonised product liability as mentioned).181 This can lead to considerable gaps in protection, as will be shown in the use cases below.182 As regards errors in the production of AI systems, fault-based liability has limited significance by virtue of the harmonised, strict liability for products introduced by the PLD across Europe within the latter’s sphere of application. However, it is still important to keep in mind that the harmonised regime only blocks alternative claims in strict liability, but leaves concurrent claims in fault (or contractual) liability intact. This is why the above-mentioned (fault-based) regime of producers’ liability in Germany183 or the Austrian equivalent (there based on the concept of contracts with protective effect for the benefit of third parties)184 coexist alongside the strict liability regime. Fault-based liability thus proves to be a ‘universally applicable, catch-all regime’.185 However, the quest to identify misconduct in a case involving AI systems in order to establish liability may often be unsuccessful in light of the complexity of such systems, the multitude of actors involved, and – more importantly – the systemic irrelevance of human behaviour in the operation of AI systems altogether.186

179 See in further detail below C.I. 180 See immediately below in the text. 181 This also applies, for example, for autonomous medical systems, see Karner (fn 45) 73 ff. 182 Infra C. 183 See Wagner (fn 111) § 823 no 916 ff and supra at fn 171. 184 See Karner (fn 39) § 1295 no 19, 20; H Koziol/P Apathy/BA Koch, Österreichisches Haftpflichtrecht III (3rd ed 2014) no B/4 f. 185 G Wagner, Verantwortlichkeit im Zeichen digitaler Techniken, VersR 2020, 725. 186 For German law, see Wagner, VersR 2020, 725 ff, 734. Cf also G Wagner, Produkthaftung für autonome Systeme, Archiv für die civilistische Praxis (AcP) 217 (2017) 707 (708); Karner (fn 45) 62.  








IV. Vicarious liability

1. Overview

While the term ‘vicarious liability’ may have different connotations in various legal systems,187 it essentially encompasses the liability of one person for the conduct of another, without the need for the victim to prove any personal wrongdoing of the former.188 We will thereby disregard the special case of liability for minors or other persons with limited capacity due to its lack of immediate relevance for our topic. All legal systems foresee at least some instances of vicarious liability, thereby attributing the harmful conduct189 of one person to another, requiring the latter to compensate harm she did not inflict upon the victim herself. By asking someone else to act on her behalf, she thereby expands the sphere for which she may be held liable. However, there are considerable differences between the various national implementations of this fundamental idea, for example when it comes to the kind of relationship between principal and auxiliary that is required in order to trigger vicarious liability of the former for the latter, or with regard to the conduct by the auxiliary that is attributed to the principal.

187 Cf S Galand-Carval, Comparative Report on Liability for Damage Caused by Others, Part I – General Questions, in J Spier (ed), Unification of Tort Law: Liability for Damage Caused by Others (2003) 289. See also P Giliker, Vicarious Liability or Liability for the Acts of Others in Tort: A Comparative Perspective, 2 JETL (2011) 31: ‘The doctrine of “vicarious liability”, generally termed “liability for the acts of others” by civil lawyers, has long been regarded as controversial in the common law world. The term … is itself ambiguous and fails to distinguish clearly between agency liability, and secondary liability for the original tortfeasor.’ 188 This includes, however, national variations where such personal fault of the liable person is presumed and the latter can rebut this. 189 It is the conduct that counts, not the involvement of another person in general. Strict liability for motor vehicles, for example, requires the keeper to compensate losses resulting from a materialisation of the risks brought about by the operation of the vehicle, even though someone else may have driven the car at the time of the accident (but that conduct in itself is not decisive for the keeper’s liability): cf Galand-Carval (fn 187) 290.


2. The varieties of vicarious liability in Europe

(a) Differences regarding the classification of vicarious liability

Some legal systems perceive vicarious liability to be just another sub-category of strict liability, thereby grouping all instances of liability that do not require personal wrongdoing of the liable person together.190 Others focus on the original trigger of liability, which is some wrongdoing by the auxiliary herself or by her principal (such as flaws in the selection or supervision of the auxiliary), and therefore deem it to be more closely related to fault liability or at least a variant thereof, reducing the notion of ‘strict liability’ to risk-based liability. In this report, the latter type of liability will be dealt with separately below under B.V. This classification does not in itself impact upon the legal consequences, but it is important to bear in mind when talking about ‘strict liability’ in general, as this may have a broader meaning in some countries than in others.

(b) Differences regarding the trigger of vicarious liability

The question of whether to align vicarious liability with strict or fault-based liability is, however, linked to the question on which basis (at least in theory) someone is liable for the conduct of another, and this is where decisive differences lie. Some jurisdictions have opted for liability without personal fault of the liable person. This includes, eg, France, where art 1242 para 5 Code civil holds ‘masters and employers’ strictly liable ‘for the damage caused by their servants and employees in the exercise of the functions in which they are employed’.191 Others start from a presumption of fault by the liable person herself, such as flaws in selecting, instructing, or supervising the auxiliary. The best-known example thereof is Germany,192 where § 831 para 1 BGB states that:

190 This is particularly true for the Romance jurisdictions; on the shift from a fault-centred to strict liability in France, see Giliker, 2 JETL (2011) 34 f. See also cmt A.1 to art VI-3:201 DCFR. 191 Cf art 2049 of the Italian Codice civile: ‘Employers and principals are liable for damage caused by their employees during the performance of their tasks.’ Also in Greece, the principal is strictly liable in a master-servant relationship; see art 922 of the Greek Civil Code: ‘A master or a person who has appointed another person to perform a function is liable towards a third party for damage unlawfully caused to the latter by his servant or the appointee in the execution of the function.’ In Sweden, ch 3 § 1 of the Tort Liability Act (Skadeståndslagen) holds the employer strictly liable for her employees, but distinguishes between the kinds of losses at stake (with vicarious liability for pure economic loss reduced to crimes committed by the employee when on duty, for example). 192 Cf also art 1903 of the Spanish Código civil, whose para 4 determines that ‘the owners or directors of an establishment or enterprise are liable for damage caused by their employees in ser 


‘Whosoever employs a person to perform a task under his directions is liable for the loss inflicted by that person on third parties in the course of the performance of his duties. The obligation to compensate does not arise where the principal has exercised due care in the selection of the agent and – insofar as he has to provide equipment or tools or has to supervise the performance of the duties – has acted with due care in such provision and supervision, or where the loss would have occurred even if such care had been exercised.’

However, if the principal used a helper to perform contractual obligations, he is liable irrespective of personal fault (§ 278 BGB).193 Austria follows a similar (but not identical) distinction: in the absence of a special (though not necessarily contractual) relationship between the principal and the victim, or if the auxiliary is not performing any task at least linked to such a relationship, vicarious liability in tort law is extremely limited and applies in only two scenarios according to § 1315 ABGB,194 both of which in their core relate to the increase of risks by flawed choices of the principal:195 either the auxiliary was habitually unfit for the tasks assigned, or the principal knew that she employed a ‘dangerous person’.196 In contrast to Germany, therefore, § 1315 ABGB is even narrower, but at the same time stricter inasmuch as the principal cannot escape liability by proving the exercise of proper care.197

vice of the branch in which they were employed, or on the occasion of the exercise of their duties’, while para 6 adds that ‘[l]iability according to the present provision will cease to apply when the persons that it mentions prove that they used all the diligence of a reasonable person to prevent damage from occurring.’ See further eg for Hungary § 6:542 para 1 Ptk: ‘A principal shall be jointly and severally liable with his agent for any damage caused to a third party by the agent in that capacity. The principal shall be relieved of liability if he proves that he was not negligent in choosing, instructing, and supervising the agent.’ § 6:540 para 1 Ptk, on the other hand, provides for strict liability of an employer for damage caused by an employee ‘in the course of his employment’. 193 This provision reads: ‘The debtor is responsible for the fault of his legal representative and of the persons he engages for the performance of his obligations as if it were his own. …’ 194 § 1315 ABGB states that ‘[w]hosoever, for the conduct of his affairs, avails himself either of an unfit person, or knowingly of a dangerous person, is liable for the harm such a person causes to another in that capacity.’ 195 H Koziol, Österreichisches Haftpflichtrecht II (3rd ed 2018) no D/3/3, highlighting the underlying principle that someone who creates a risk and profits thereby should also bear the harmful consequences thereof. 196 Actual knowledge is required. The danger relates to bodily or mental characteristics of the auxiliary or to her character and must have materialised in the case at hand: Karner (fn 39) § 1315 no 4. 197 In the case of the ‘dangerous person’, however, the principal is not liable if she had no actual knowledge of the dangerousness, as mentioned. Her liability is therefore based upon the increased risk she posed to others by using such an auxiliary, but not upon an assessment of her selecting or supervising the auxiliary. She is therefore also liable, for example, if she knew of the


§ 1313a ABGB holds the principal strictly liable for the conduct of any auxiliary whom the former engaged to assist in performing an obligation the principal has vis-à-vis the subsequent victim, and if the damage arises in the course of executing such tasks.198 The special case of auxiliaries assisting in the performance of a contract199 as evidenced by § 1313a ABGB, § 278 BGB, or art 334 of the Greek Civil Code points to another facet of vicarious liability: since the principal owes a contractual duty to the (subsequent) victim, the former’s liability may be triggered by a guarantee of a successful performance, at least in theory. Therefore, if something goes wrong in the performance of a contractual obligation, the reason for holding the person obliged liable (even if she did not act herself) is in its core based on the breach of contract.200 Apart from general rules of vicarious liability in tort law, some jurisdictions may also have special rules for specific situations (typically marked by a particular risk),201 which makes the overall comparative picture even more diverse.

(c) Differences regarding the implications of the differing scope of liability In those jurisdictions with very limited vicarious liability in the law of delict such as Austria or Germany, courts have developed ways to circumvent such restrictions in practice, mostly by (artificially) shifting the case into the realm of contract law where more generous rules of attributing the conduct of auxiliaries to their principals apply.202 This was often achieved by extensive use of constructions

dangerousness but trusted that the auxiliary will nevertheless not act upon it: Koziol (fn 195) no D/3/18 ff. 198 § 1313a ABGB reads: ‘Whosoever is under an obligation of performance to another is liable to that person for the fault of his legal representative, and of persons he engages to perform the obligation, as if it were his own fault.’ 199 See also art 334 of the Greek Civil Code. 200 Galand-Carval (fn 187) 291. 201 Cf for Austria, eg, § 1319a ABGB (liability of keepers of public roads are liable for all their staff, irrespective of the narrow conditions of § 1315) or § 19 para 2 EKHG (keeper of a train or motor vehicle liable for any person participating in the operation of such means of transport even outside the scope of strict liability), or for Germany, eg, § 701 f BGB (inclusion of the staff of restaurant owners or innkeepers into the latter’s sphere) or § 501 Commercial Code (Handelsgesetzbuch, HGB: liability of the consignor/freight agent for all his staff), to name just a few. 202 On this ‘escape into contract’ in German law, see J Bell/A Janssen (eds), Markesinis’s German Law of Torts: A Comparative Treatise (5th ed 2019) 128 ff. In Germany, yet another way of escaping the straightjacket of § 831 para 1 BGB was to make it more difficult for principals to rebut the statutory presumption of fault: Giliker, 2 JETL (2011) 36 f. Alternatively (or cumulatively), cases of vicarious liability in their core were reinterpreted as a problem of flaws in the organisation of the  






designed for contract law such as a culpa in contrahendo (fault in a pre-contractual relationship) or a contract with protective purpose vis-à-vis third parties (Vertrag mit Schutzwirkung zugunsten Dritter).203 The effect thereof is, however, that full-fledged tort cases are sometimes forcefully pushed into the realm of contractual liability, at the expense of systematic clarity.204 All this is not necessary, though, in those jurisdictions which already have a rather far-reaching vicarious liability in place in tort law proper.

(d) Differences regarding the range of persons for whose conduct someone else may be held liable A further key question answered differently throughout Europe is the link between the actual wrongdoer and the person liable for the former. Some jurisdictions, such as those appertaining to the common law world, the Netherlands, or Poland, require a relationship of employment between the auxiliary and the principal, with the latter paying for services rendered by the former under the supervision and control of the employer. Other jurisdictions such as Austria, France, Germany, Greece, or Italy205 are more generous and do not insist on a traditional employment contract, but also, for example, include services rendered by the auxiliary for free. Either way, jurisdictions tend to insist on at least a theoretical relationship of subordination, with the auxiliary acting under the instruction and control of the principal.206

employer (Organisationsverschulden). See also Wagner (fn 111) § 831 no 2, pointing to various ways how German court practice has circumvented § 831 BGB, thereby ‘pulling its teeth’. 203 This also has the side-effect that in those jurisdictions product liability as harmonised by the PLD can be circumvented by using an alternative claim for compensation, as the CJEU stated expressly that while the PLD regime is the sole regime of strict liability for product defects in the EU, it ‘does not preclude the application of other systems of contractual or non-contractual liability based on other grounds, such as fault …’. See, eg, ECJ 25.4.2002 C-183/00 González Sánchez v Medicina Asturiana SA, ECLI:EU:C:2002:255 (para 31). 204 Also criticised by Wagner (fn 111) § 831 no 2: ‘Der Preis dafür ist allerdings eine Inflation der Vertragshaftung …’. 205 See the references by Galand-Carval (fn 187) 300. 206 See Giliker, 2 JETL (2011) 39 ff on the shifts in European jurisdictions with regard to the traditionally required element of control due to changes in the working environment, highlighting inter alia that ‘an obvious tension exists between the English focus on the contract of employment and the civilian preference for the broader terms of commettant/préposé and Geschäftsherr/Verrichtungsgehilfe (rather than the narrower and more precise employeur/employé and Arbeitgeber/ Arbeitnehmer). … This renders it far easier for civilian systems to respond to changes in employment practices and include agency staff, where appropriate, within the definition of préposé and  


(e) Differences regarding the context of the harmful conduct A further distinction can be found when examining whether the harmful conduct by the auxiliary must have been part of the latter’s exercise of those services for which she was hired, or whether it suffices that the victim was injured merely at the occasion of the auxiliary’s services for the latter’s principal, even though the harmful conduct was not part of rendering the services assigned to the auxiliary.207 While many jurisdictions (such as Germany, Poland, or Spain) exclude vicarious liability of the principal for deeds committed by the auxiliary merely at the occasion of what the latter was supposed to do, others (such as France,208 Italy, or the Netherlands209) are more generous towards the victim and merely require a sufficiently ‘objective link’ between the harmful conduct and the auxiliary’s tasks. ‘This leads to imposing liability on employers for torts, which, in almost all the other countries, would be regarded as outside the scope of employment.’210

Verrichtungsgehilfe. The common law courts’ ongoing preference for a general definition of ‘employee’, applicable in all areas of law, has quite correctly been criticised as unrealistic …’ (47). 207 Due to the peculiar provision of § 1315 ABGB in Austria (supra fn 194 ff), both alternatives apply correspondingly; cf H Koziol/K Vogel, Liability for Damage Caused by Others under Austrian Law, in J Spier (ed), Unification of Tort Law: Liability for Damage Caused by Others (2003) 11 (20): if the helper is ‘unfit’ (within the meaning of § 1315 ABGB 1st alternative), the principal is only liable for conduct assigned to that person; if the auxiliary is ‘dangerous’ instead (§ 1315 ABGB 2nd alternative), the principal is also liable for whatever the helper commits on the occasion of performing the task. 208 S Galand-Carval, Liability for Damage caused by Others under French Law, in J Spier (ed), Unification of Tort Law: Liability for Damage Caused by Others (2003) 85 (93 f): a French employer can only escape liability for her employees if three conditions are met (cumulatively): ‘the employee must have acted outside his functions, without authorization and in furtherance of a purpose alien to his attributions. It is therefore not enough for the employer to show that the employee acted, without authorization, for a purely personal motive.’ 209 Cf art 6:170 para 1 BW: ‘A person whose duty is fulfilled by a subordinate in his service is liable for damage caused to a third person by the fault of the subordinate if the risk of the fault has been increased by the assignment to fulfil the duty and the person in whose service the subordinate was had control over the conduct which constituted the fault because of the legal relationship between him and the subordinate.’ (emphasis added). But see also art 6:170 para 2 BW, which reduces the scope if ‘the subordinate was in the service of a natural person and when he was not working for the profession or business of that person’: in such cases, the principal is only liable for acts in the performance of the duties assigned to the auxiliary. 210 Galand-Carval (fn 187) 301.  


(f) Differences regarding the availability of direct claims against the auxiliary As a general rule, the fact that the principal is vicariously liable for the conduct of her auxiliaries does not in itself shield the latter from direct claims by the victims.211 However, the requirements for (personal fault-based) liability of these individuals may differ from the assessment of the principal’s vicarious liability, if only because the duties owed by the auxiliary towards the injured person may differ. More importantly, some jurisdictions shield the auxiliary at least to some extent from direct claims by the victims, particularly (but not only) due to labour law considerations. In France, for example, ‘an agent is not personally liable for damage caused through his fault if he was acting within the bounds of the task ascribed to him by his principal, and was not guilty of a criminal offence or of gross negligence: only the principal is liable in such circumstances’.212 Polish labour law shields an employee from personal liability vis-à-vis the victim altogether (art 120 § 1 of the Polish Labour Code, kodeks pracy), irrespective of the degree of fault. This does not mean, however, that the employer who indemnifies the victim cannot seek redress from the employee internally, which is still limited, though, in cases of negligence.213 The Austrian Dienstnehmerhaftpflichtgesetz (Law on the Liability of Employees, DHG) protects employees who cause damage to another in the course of their employment. If the employee indemnifies the victim directly, either due to a court ruling or with the accord of the employer, the employee may have a contribution claim for all or at least part of the amount paid to the victim in light of the degree of negligence and various factors listed in § 2 para 2 DHG (§ 3 para 2 and 3 DHG).214 Similarly in Germany, the auxiliary herself remains fully liable vis-à-vis the victim (though subject to an independent assessment).215 If the auxiliary was an employee of the principal, however, court practice grants a contribution claim

211 Galand-Carval (fn 187) 304: ‘In many countries, the employer’s liability is still seen as an indirect liability, established in order to protect the victim’s interests and not to insulate the wrongful employee from his own, personal liability.’ 212 J-S Borghetti in H Koziol (ed), Comparative Stimulations for Developing Tort Law (2015) 155. This is also expressly foreseen in the draft of the French tort law reform (fn 14, proposed art 1249 para 4). 213 The redress sought from the employee must not exceed three monthly salaries (art 119 kodeks pracy). 214 In case of an ‘excusable misconduct’ (entschuldbare Fehlleistung, the slightest degree of negligence), the contribution claim is in full (§ 3 para 3 DHG). 215 Wagner (fn 111) § 831 no 12.


against the employer on the basis of their (internal) employment relationship subject to the degree of fault.216

(g) Differences regarding the extent of the liability of legal persons for their representatives A separate (though related) question is whether legal persons such as corporations are liable for the acts of those that act as their representatives, such as officers.217 Strictly speaking, this is not an instance of liability for others, as the organs of a legal entity are the only ones acting in the real world, so that their acts are technically not conduct of third parties, but acts of the corporation or other entity itself.218 On the other hand, of course, the actors are separate individuals and can (and have to) be distinguished from the entity on whose behalf they act. Nevertheless, due to their representative function on behalf of the corporation or other legal person, even jurisdictions that have otherwise opted for a much narrower attribution of someone else’s conduct to the liable person (like Austria or Germany) are more generous (and therefore more victim-friendly) when it comes to holding a legal person liable for the acts of their representatives.219 According to § 31 BGB, for example, any legal person220 is ‘liable for the damage to a third party that the board, a member of the board or another constitutionally appointed representative causes through an act committed by it or him in carrying out the business with which it or he is entrusted, where the act gives rise to a liability in damages.’221 Austria goes even further, not least considering its much narrower starting point of § 1315 ABGB mentioned above:222 legal persons are not only liable for their official representatives, but for anybody who acts in a leading

216 Wagner (fn 111) § 831 no 13. 217 This has to be distinguished, of course, from the corporation’s own (and personal) liability for flaws in its organisation as such (Organisationsverschulden), which may include wrong choices in organising management. 218 Galand-Carval (fn 187) 293: ‘The theoretical reason is that organs, unlike employees, are not distinguishable from the corporation itself. Their will is regarded as the corporation’s own will, their acts are deemed to be the corporation’s own acts.’ 219 Galand-Carval (fn 187) 293. 220 While the language of § 31 BGB speaks of associations, practice has long expanded this to corporations and other legal entities; L Leuschner in Münchener Kommentar zum BGB I (8th ed 2018) § 31 BGB no 3 ff. 221 Transl by . 222 Supra at fn 194.  


or supervising function on behalf of the liable entity (so-called Repräsentantenhaftung).223

3. Analogous application of vicarious liability to AI systems?224 The basic notion underlying vicarious liability could be used as a fruitful argument when discussing liability for AI systems. ‘If someone can be held liable for the wrongdoing of some human helper, why should the beneficiary of such support not be equally liable if they outsource their duties to a non-human helper instead, considering that they equally benefit from such delegation?’225 However, due to the striking differences between the European jurisdictions when it comes to even fundamental aspects of vicarious liability as outlined above, such an analogy would first and foremost only work within each jurisdiction, thereby extending these differences within Europe to liability for AI systems. While acknowledging this problem from a comparative perspective,226 the NTF Report nevertheless suggests that, at least on a national level, each jurisdiction should provide for a similar regime of liability if ‘harm is caused by autonomous technology used in a way functionally equivalent to the employment of human auxiliaries’ (KF 18).227 How this might be implemented depends upon the existing liability framework within each jurisdiction, in particular on whether such technology is already subject to strict liability anyway.

V. Strict (risk-based) liability 1. Overview The basis for a risk-based liability independent of fault is not misconduct on the part of some wrongdoer. Instead, it proceeds from the understanding that

223 Karner (fn 39) § 1315 no 7. 224 See also Karner (fn 45) 66 ff. 225 NTF Report (fn 6) 25. 226 NTF Report (fn 6) 46. 227 The NTF Report (fn 6) 45 f further argues that the applicable benchmark for assessing the performance of autonomous technology should be equal to the standard applicable to humans. ‘However, once autonomous technology outperforms human auxiliaries, this will be determined by the performance of comparable available technology which the operator could be expected to use, taking into account the operator’s duties of care.’ (KF 19).  


someone is permitted to use a (particularly) dangerous thing or pursue some risk-prone activity for her own purposes, which is why she should also bear the loss if such risk should materialise. For this reason, it seems preferable to use the term ‘risk-based liability’ instead of the enigmatic term ‘strict liability’.228 The particular dangerousness justifying strict liability is typically understood to be a combination of the likelihood and frequency of potential losses and of the extent of the harm that might result from the dangerous object or activity. Therefore, even if the typical losses that might occur are relatively small, but do so with a rather high probability and/or frequency, or if there may be a tremendous loss, but only under rare circumstances, the underlying risk may still be deemed dangerous enough to justify a shift from fault to strict liability. Due to the wide range of possible applications of AI, it is clear from the outset, though, that not all of them may be deemed sufficiently dangerous to qualify as an obvious candidate for risk-based liability. Like fault-based liability, risk-based liability serves the purpose not only of compensating harm, but also that of preventing its occurrence. With risk-based liability, this is pursued in particular through control of the level of activity undertaken.229 Risk-based liability is typically (but not necessarily) combined with a compulsory liability insurance system.230

2. The varieties of strict liability in Europe231 (a) Differences regarding the liable person Given that risk-based liability draws on the notion that someone who derives benefit from a dangerous object or activity should also compensate the harm that it brings about, it seems appropriate to subject the keeper (Halter, gardien) of the object (or the person pursuing the activity) to liability – as for example in the

228 Viewed comparatively, ‘strict liability’ is sometimes understood to include eg claims between neighbours (rooted in property law in some jurisdictions), or vicarious liability/liability for auxiliaries; see supra B.IV.2(a). 229 See M Adams, Ökonomische Analyse der Gefährdungs- und Verschuldenshaftung (1985) 47 ff; M Faure, Economic Analysis, in Koch/Koziol (eds), Strict Liability (fn 116) 364 ff; Koziol (fn 84) no 3/6; E Karner, Prevention, Deterrence, Punishment, in H Koziol (ed), The Aims of Tort Law – Chinese and European Perspectives (2017) 54 ff. 230 See also infra C.I.1(b) on mandatory insurance cover for traffic liability. 231 ‘In the area of strict liability, European legal systems show much more diversity than in other areas of tort law.’ Koziol (fn 100) no 8/35.  






Germanic legal systems.232 The term ‘keeper’ should be understood to address the person who is in control of the thing (at least in the abstract) and operates it on her own account.233 Therefore, the keeper of a thing is not necessarily its owner.234 Viewed comparatively, of course, this is not universally true, as the focus is sometimes placed on the possessor235 or owner instead, or on the person benefitting from a dangerous activity.236

(b) Differences regarding the range and scope of strict liabilities Risk-based liabilities exist in all European legal systems.237 However, sometimes these are provided for only by way of specific, individual statutory provisions, for example in the Germanic legal systems.238 In other jurisdictions, a more or less far-reaching general clause for risk-based liability is in place, either alongside individual provisions for specific risks or without such complementary regimes. No-fault liability for things in French law and certain other systems239 occupies something of a unique position in this regard inasmuch as such strict liability is typically imposed regardless of the dangerousness of the thing.240

(1) Singular instances of strict liability In contrast to continental European legal systems, common law jurisdictions adopt a highly restrictive attitude towards risk-based liability and acknowledge

232 See eg – in relation to strict liability for motor vehicles – for Austria § 5 EKHG; for Germany § 7 (1) StVG. 233 See also NTF Report (fn 6) 6, 42 f; Key Finding 10: ‘Strict Liability should lie with the person who is in control of the risk connected with the operation of emerging digital technologies and who benefits from their operation (operator).’ 234 Cf, eg, art 1377 of the Romanian Civil Code (Codul civil), which complements its strict liability provisions for animals (art 1375) and things (art 1376) by stating that ‘the custody of the animal or of the object belongs to the owner or to the person who, by virtue of law or a contract, or merely in fact, exercises control and supervision over the animal or object and uses it in his or her own interest’. 235 This is true, for example, of Polish law in relation to strict liability for motor vehicles; see art 436 § 1 KC and infra C.I.3(a). 236 See the range of options at Koch/Koziol (fn 116) 413 ff (no 77 ff). 237 See also the ‘inventory’ of strict liabilities in Koch/Koziol (fn 116) 395 ff. 238 See also the examples of strict liability for animals in fn 117. 239 Infra B.V.2(b)(3). 240 But see the Belgian interpretation infra at fn 259, requiring a defect of the object.  








only a few instances thereof. In particular, English, Irish, or Maltese law do not recognise a no-fault, risk-based liability for motor vehicles.241 It has already been noted, of course, that this gap in protection is filled covertly by means of a tightening of the fault-based liability standard.242 In the Germanic legal systems (Austria, Germany, Liechtenstein, and Switzerland), risk-based liability is regulated exclusively by individual special statutes, each covering a particular dangerous object or activity. Particular mention may be made – for Austria, by way of example – of motor vehicles and railways, airplanes, plumbing and gas installations, nuclear installations, but also genetically modified organisms.243 This method of singular regulation of risk-based liability through individual statutes bears the risk of creating value contradictions, particularly with respect to comparable risks not (yet) addressed by legislation, and gaps in protection, considering the rapid technological development where such legislative measures necessarily lag behind. Selective strict liability solutions – covering only certain specific and expressly enumerated types of risks – thus require frequent updates in the law to keep up with scientific progress. Such problems may at least be softened if courts are prepared to apply existing statutes by way of analogy to similar (in particular novel) risks. This is only true for Austria, however.244

(2) General clause of strict liability In contrast (and often in addition) to such piecemeal solutions, numerous European legal systems provide for a general no-fault liability clause for particularly dangerous things or activities.245 This is the case, in particular, in Croatia,246 the

241 Cf WVH Rogers, England, in Koch/Koziol (eds), Strict Liability (fn 116) 101 (111): ‘The absence of any strict liability for road accidents is perhaps the most marked difference between English law and that of most European countries.’ 242 Supra at fn 140. Cf also the English peculiarity of an insurance-based solution for autonomous vehicles infra at C.I.4. 243 See, eg, BA Koch/H Koziol, Austria, in Koch/Koziol (eds), Strict Liability (fn 116) 13 f. 244 See also infra B.V.2(c). 245 Cf also art VI-3:206 DCFR, suggesting strict liability ‘for damage caused by dangerous substances or emissions’. 246 See art 1045 (3) of the Croatian Civil Obligations Act regarding ‘things or activities representing an increased risk of damage to their surroundings’ and Section 4 – Liability for Damage Caused by a Dangerous Thing or Activity, art 1063 ff COA.  




Czech Republic,247 Hungary,248 Estonia,249 Latvia,250 Lithuania,251 Poland,252 Slovakia,253 and Slovenia.254 Through such general clauses, the aforementioned value contradictions and gaps in protection can be avoided, allowing timely and appropriate responses to the rapid technological developments like those that characterise AI-based systems.255 These general clauses are quite diverse, however, particularly when it comes to the level of dangerousness sufficient to trigger liability. Since the degree of risk is typically not further specified, it is up to the courts to decide whether the general clause applies in each case before them, which at least initially contributes to uncertainty, particularly if a novel technology reaches the court for the very first time.

(3) Strict liability for things A unique type of no-fault liability is the so-called responsabilité du fait des choses, liability for damage caused by things in one’s control. Developed in France through judicial interpretation of art 1242 (formerly art 1384) of the Code civil,256 it has subsequently been adopted by other countries whose private law was influenced by the French Civil Code (Luxembourg, Belgium, Italy, Portugal, Romania, and the Netherlands).257 In its French version, for example, this liability is

247 § 2925 of the Czech Civil Code ‘Damage caused by extraordinarily dangerous operation’. 248 Art 6:535 Ptk: ‘ Liability for extra-hazardous activity’. 249 § 1056 LOA: ‘Liability for damage caused by major source of danger’. 250 Art 2347 of the Latvian Civil Law regarding ‘activities associated with increased risk for other persons’. 251 Art 6.270 of the Lithuanian Civil Code (): ‘Liability arising from the exercise of hazardous activities’. 252 See art 435 KC, establishing the (strict) liability of a person conducting on her own account an enterprise or business set into operation by natural forces. 253 § 432 Slovakian Civil Code regarding ‘damage caused by an extremely dangerous operation’. 254 Art 149 ff Obligacijski zakonik (OZ, Slovenian Code of Obligations): ‘Liability for Damage from Dangerous Object or Dangerous Activities’. 255 On the need for a general rule for risk-based liability, see already E Karner, Liability for Robotics: Current Rules, Challenges, and the Need for Innovative Concepts, in S Lohsse/R Schulze/D Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (2019) 117 (122 f). 256 Art 1242 Code civil reads in relevant part: ‘Everyone is liable not only for the damage caused by his own act, but also for that which is caused … by things which are in his custody. …’ 257 G Wagner, Sachhalterhaftung, Handwörterbuch des Europäischen Privatrechts 2009 () no 3.  


excluded only by force majeure,258 and while it requires that the ‘thing’ played some active role in causing harm, it differs from risk-based liability inasmuch as it does not depend on the ‘dangerousness’ or a safety-relevant defect of a thing. Its Belgian counterpart (art 1384 Code civil), however, despite identical wording, is interpreted by the courts to require such a defect,259 as does Dutch law, where ‘a special danger for persons or things’ is required,260 while force majeure is not a valid defence.261 In Italy262 and Portugal263, liability for damage caused by things in one’s control was introduced into the respective civil code, but there it is formulated as a liability based on presumed fault. This presumption is non-rebuttable in Italy where tortfeasors can only exculpate themselves by proving force majeure.264

(c) Differences regarding the possibility to extend strict liability by analogy Civil law jurisdictions that do not foresee a general risk-based liability clause, but which have nevertheless introduced at least some instances thereof linked to specific, peculiar risks, will invariably face the problem of incompleteness, as already indicated.265 The justifications for offering alternatives to fault-based claims in tort law from a theoretical and legal policy point of view are not limited to one specific dangerous object or one specific risky activity, even if the exceptions to the fault principle were tailored to a group of objects or to a range of activities.

258 Similarly in Romania, where the thing need not have been moving when causing harm: cf Înalta Curte de Casație și Justiție (Romanian Supreme Court) 24.6.2003, no 2725/2003. 259 Wagner (fn 257) no 3; H Cousy/D Droshout, Belgium, in Koch/Koziol (eds), Strict Liability (fn 116) 49 (no 17). 260 Art 6:173 para 1 BW provides: ‘The possessor of a movable thing which is known to constitute a special danger for people or things when it does not meet the standards which may be set for such a thing, is liable if the danger materialises, unless he would not have been liable under the previous section had he known of the danger at the time it ensued.’ 261 Only the fact that the defect was caused at such a time that the possessor, had she known of it, still could not have prevented the damage, works as a defence; see E du Perron/W van Boom, Netherlands, in Koch/Koziol (eds), Strict Liability (fn 116) 227 (233). 262 Art 2051 CC provides: ‘Everyone is liable for damage caused by things in his care, unless he proves a fortuitous event.’ 263 Art 493 para 1 Código civil reads: ‘Whoever has in his possession a thing, moveable or immoveable, and owes a duty to supervise it, as well as anyone who has assumed the obligation to supervise any animal, is liable for damage that the thing or the animal causes, except where he can prove that he fulfilled his duty of vigilance or that the damage would have arisen even if he had fulfilled such duty.’ 264 Wagner (fn 257) no 3; Busnelli/Comandé (fn 118) 211 (no 34 ff). 265 Supra B.V.2(b)(1).  


There will always be another object or activity whose likelihood to cause frequent and/or serious harm is very similar in kind and by nature to those already addressed by legislation, but still does not fit into the language defining the scope of said legislation. Most jurisdictions nevertheless do not allow the application of their strict liability regimes by way of an analogy thereto. This is true, for example, in Germany,266 Portugal,267 or Spain.268 The only exception thereto can be found in Austria, where courts have long accepted not just an analogy to any individual strict liability regime, but also to the entire range of such regimes, thereby acknowledging at least in principle the fundamental ideas supporting the creation of a general clause of risk-based liability.269 However, not only are Austrian courts fairly cautious when it comes to applying a single statutory regime by analogy, but it is also very rare that they accept filling the gaps of tort law by an analogy to all strict liabilities jointly. While the possibility of application by analogy helps to overcome at least some discrepancies in tort law, the reluctance of courts to take that step (even though they sometimes do) results in some uncertainty.270 Still, it may well be that the risks brought about by some applications of AI technology will be deemed so similar to those already addressed by a statutory strict liability regime that courts in Austria will extend the latter by analogy, which is impossible in other European jurisdictions.

(d) Differences regarding the availability of defences It must of course be noted that even risk-based liability is generally not a pure liability for bringing about particular results, but instead typically allows for defences.271 These defences range from an ‘unavoidable event’ – ie where there is no error at all in the sphere of the keeper – to an act of God/vis maior. In that regard, it seems appropriate for the defences to be gradated in line with the

266 For Germany, see J Fedtke/U Magnus, Germany, in Koch/Koziol (eds), Strict Liability (fn 116) 153; Wagner (fn 111) Vor § 823 no 25 f. 267 Cf art 483 para 2 Código civil: ‘There is an obligation to pay compensation independently of fault only where specified by law.’ 268 Cf art 1105 Código civil: ‘With the exception of the cases mentioned by statutory law on an explicit basis and those in which the obligation so provides, no one will be held liable for events which could not have been foreseen, or which, had they been foreseen, would have been unavoidable.’; see M Martín-Casals/J Ribot/J Solé, Spain, in Koch/Koziol (eds), Strict Liability (fn 116) 291 (no 39 f). 269 Koziol/Apathy/Koch (fn 184) no A/10/1 ff; Karner (fn 39) § 1306 no 2. 270 See the case references in Karner (fn 39) § 1306 no 2. 271 See in detail Koch/Koziol (fn 116) 420 ff (no 109 ff).  






level of risk: the higher the risk, the fewer the available excuses.272 Austrian law can again serve as an example here. Whilst an unavoidable event can be a defence in the case of motor vehicles (§ 9 EKHG), only an act of God prevents the application of strict liability for losses caused by electricity or gas (§ 1a Reich Liability Act, Reichshaftpflichtgesetz, RHPflG). No defences at all are available in the case of nuclear installations, given the particularly great danger involved; the operator of a nuclear installation is responsible for all risks (Nuclear Liability Act, Atomhaftungsgesetz, AtomHG). An equally restrictive standard applies to defences regarding liability for aircraft (§§ 146 ff Aviation Act, Luftfahrtgesetz, LFG).273 Naturally, the defences that may be relied upon in the case of risk-based liability vary widely depending on the jurisdiction. In general, strict liability can be avoided in most countries by invoking force majeure (eg Croatia,274 the Czech Republic,275 France,276 and Latvia277). Exceptions may apply in cases of increased risk (see above: eg nuclear installations), and in some countries even in the case of general risk-based liability (cf the loi Badinter in France,278 which does not allow the force majeure defence in the context of motor vehicle liability).279 Moreover, (partial or exclusive) causation by the injured persons themselves or the (unavoidable) action of some third party may constitute a defence in many jurisdictions (eg in Austria, Croatia,280 the Czech Republic,281 and Poland282). However, there are differences in terms of whether fault is required, or what degree of fault, when it comes to the conduct of the injured person (see, for example, Latvia and Lithuania, where only intentional or grossly negligent conduct of the injured person allows for the defence).283  

272 Cf Koziol/Apathy/Koch (fn 184) no A/1/7 ff and Koch/Koziol (fn 116) 424 (no 121). 273 Cf also § 33 German Air Traffic Act (Luftverkehrsgesetz, LuftVG), which does not allow the vis maior defence. 274 Art 1067 para 1 Croatian Civil Obligations Act (part of Title IX, Chapter 1, Section 4: ‘Liability for Damage Caused by a Dangerous Thing or Activity’). 275 Art 2925 Czech Civil Code (‘Damage caused by extraordinarily dangerous operation’). 276 Cf S Galand-Carval, France, in Koch/Koziol (eds), Strict Liability (fn 116) 133 (137). 277 Art 2347 para 2 of the Latvian Civil Law: ‘Activities associated with increased risk for other persons’. 278 Loi no 85-677 du 5 juillet 1985 tendant à l’amélioration de la situation des victimes d’accidents de la circulation et à l’accélération des procédures d’indemnisation. 279 See Galand-Carval (fn 276) 137 (no 24). 280 Art 1067 para Croatian Civil Obligations Act. 281 Art 2925 Czech Civil Code. 282 See art 435 KC, establishing the (strict) liability of a person conducting on her own account an enterprise or business set into operation by natural forces. 283 Cf art 2347 of the Latvian Civil Law and art 6.270 of the Lithuanian Civil Code.


(e) Differences regarding the extent of compensation Whilst in relation to fault-based liability all European legal systems at least in theory abide by the principle of full compensation,284 the extent of harm to be compensated under risk-based liability rules is often limited. Thus, it is not uncommon – even with strict liability rules for motor vehicles – for compensation to be limited to personal injury and to exclude property damage.285 Moreover, in particular in Austria and Germany, monetary caps on liability apply.

(f) Differences regarding the possibility to file concurrent fault-based claims Other differences between European legal systems become apparent inasmuch as some allow victims to choose between pursuing their claims in fault or strict liability, whereas strict liabilities introduced in other jurisdictions are exclusive and cut off alternative claims based on fault liability.286 The former is true, for example, in Germany, Austria, or Poland, whereas the latter is true, for example, for the French loi Badinter. A corresponding position applies in Belgium (art 29bis § 5 Loi du 21 novembre 1989 relative à l’assurance obligatoire de la responsabilité en matière de véhicules automoteurs). The possibility to sue on a fault rather than a risk theory, even where the latter is available, is crucial where the risk-based regime leaves gaps in its field of application, limits compensation to particular categories of harm, or caps the amounts of compensation.

C. Use Cases In the following, three sample groups of AI systems will be used to test the application of existing tort law rules in European jurisdictions on hypothetical loss scenarios in order to show uncertainties regarding the outcome of such cases in the leges latae, and to highlight differences between these legal systems as they stand. The examples selected are advancements of conventional technology that may pose risks to third parties like their non-AI counterparts. Self-driving cars are the obvious first choice, particularly in light of ample case experience with

284 See already supra B.III.2(f). 285 See, eg, infra at C.I.3(b). 286 See the overview by Koch/Koziol, Strict Liability (fn 116) 432.


conventional vehicles and often specific liability regimes in place already. The next group of AI systems used to illustrate the diversity of applicable existing regimes are autonomous lawnmowers and harvesters. These two products were chosen because of the different risk levels attributable to them: lawnmowers are typically limited in size, weight, and range, often used on private land, and therefore less likely to cause massive harm. Harvesters, on the other hand, may cause more substantial damage by their sheer proportions and weight alone, but their risks are amplified by the fact that they will often at least pass public roads or otherwise affect third parties when operated.287 Drones are a technology (literally) on the rise, with a wide range of models – from rather small and light toy drones to heavy lifting equipment. Unlike cars, they are not operated on limited traffic areas on the surface, and may therefore expose a wide range of third parties to serious harm, also due to their velocity. Current tort regimes applicable to drones are often (but not always) combined with liability for aircraft. Existing tort laws throughout Europe will therefore not necessarily provide predictable, let alone similar solutions for the risks associated with autonomous drones.

I. Autonomous vehicles288 1. The system of traffic liability (a) Fault-based and risk-based liability Hypothetical 1. A pedestrian is seriously injured in an accident with a self-driving car.289 Her bicycle, which she had intended to push across the road, is also damaged.290

287 Cf . 288 Cf also T Evas, A common EU approach to liability rules and insurance for connected and autonomous vehicles. European Added Value Assessment accompanying the European Parliament’s legislative own-initiative report (2018, available at ), and the Annex I thereto (38 ff), EFD Engelhard/RW de Bruin, EU Common Approach on the liability rules and insurance related to Connected and Autonomous Vehicles. 289 The following analysis proceeds on the assumption of a highly or fully-automated car at levels 4-5 of the SAE classification, cf SAE International, SAE J3016 (2014), . 290 This is modelled on the ‘Arizona case’, cf (accessed 7 August 2020). The original case involved a test drive, during which the driver was inattentive and failed to intervene when she should have. Insofar as this has been clarified, a fault-based liability can be accepted without difficulty. Monitoring


Hypothetical 2. In a collision between a self-driving car and a conventional car, both of the drivers as well as the passengers are injured. The cars are damaged.

All European legal systems have some form of fault-based liability in place (though quite diverse in detail).291 Traffic accident victims in all jurisdictions can therefore base their claims on misconduct by some wrongdoer (subject to other applicable prerequisites of liability). The most obvious tortfeasors in this context are drivers of motor vehicles or other traffic participants – after all, more than 90 % of all traffic accidents today originate in some wrongdoing by such parties.292 In practice, however, fault-based liability often plays a merely subordinate role. The high number of accidents and the relatively large potential for harm in this area have led to the development of more efficient forms of loss distribution in many legal systems.293 A particularly noteworthy example, widespread across Europe, is strict, risk-based liability, which provides for a distribution of risk that is more advantageous for traffic accident victims and promises more efficient compensation.294 Most Member States that have introduced such a strict traffic liability regime so far also allow concurrent or alternative claims based on the traditional fault theory instead. Bearing in mind that risk-based liability tends to be beneficial for the claimant, it is obviously by far the more attractive option in practice for victims of a traffic accident to seek compensation. If that route is available, the role of fault-based liability is often merely a secondary one, unless the strict liability regime is limited and the victim pursues her claims on a fault theory instead in order to bypass such limitations. Otherwise, fault liability is often limited to recourse claims that might be brought by the liability insurer of the vehicle’s keeper against the driver, for example,295 or comes into play indirectly in  

duties are codified, eg, in the German Road Traffic Act (Straßenverkehrsgesetz), see already above at fn 145. 291 On the differences and commonalities, see already above B.III.2. On the differences in the amounts awarded in particular, see the report submitted in 2009 to DG GROW on the compensation of cross-border victims in the EU, . 292 Statistisches Bundesamt, Unfallentwicklung auf deutschen Straßen 2017. Begleitmaterial zur Pressekonferenz am 12. Juli 2018 in Berlin, 2018, 11 (). 293 See W Ernst, General introduction: legal change? Railway and car accidents and how the law coped with them, in W Ernst (ed), The Development of Traffic Liability (2010) 1 ff. 294 See Pehm, IWRZ 2018, 260 f. 295 On liability insurance, see infra C.I.1(b).




the decision-making process when contributory negligence by the claimant is at stake.296 In some legal systems, risk-based liability even replaces the fault-based alternative within its sphere of application; this is true, in particular, for the French loi Badinter.297 However, liability rules in Europe show very significant differences even in relation to risk-based liability.298 In some of the Member States, such as in Austria, Germany, and France, risk-based liability for motor vehicles is rather comprehensive. Such a broad strict liability for motor vehicles is by no means universally acknowledged throughout Europe, however.299 Other Member States provide for sometimes very significant exceptions to risk-based liability for motor vehicles, and, in particular, limit its sphere of application or the range of compensable losses.300 Some Member States – such as Ireland, Malta, or Cyprus – do not even foresee strict (risk-based) liability at all for traffic accidents.301 In all of these jurisdictions, though, injured parties can nevertheless at least resort to traditional fault liability if flawed conduct contributed to causing their harm (or pursue a product liability theory in case the source of their damage was some defective product). However, significant gaps in protection still remain, as will be discussed in more detail below. Precisely in view of future increases in automation in the traffic sector, such gaps pose serious difficulties for the allocation of the risks of motor traffic.302 The problems of applying the concepts of fault liability in AI scenarios have already been addressed earlier.303

(b) Liability insurance304 Particularly important for the handling of liability claims and the compensation of traffic accident victims is the compulsory vehicle liability insurance regime as

296 See below C.I.3(e). 297 Supra fn 278. 298 Supra B.V.2. 299 See already supra B.V.2(b)(1) 300 See further below C.I.3. 301 Cf A Renda/L Schrefler, Compensation of Victims of Cross-Border Road Traffic Accidents in the EU: Assessment of Selected Options (Briefing Note for EP’s Committee on Legal Affairs, 2017, available at ) 6. 302 See also Pehm, IWRZ 2018, 264 f. 303 See B.III.3(c). 304 As was stated in the introduction, the unique systems of traffic insurance in the Nordic jurisdictions such as Sweden will not be considered in the following, despite their practical relevance in these countries and their unique underlying concepts and features. Cf Engelhard/de Bruin  


harmonised by the Motor Insurance Directive (MID).305 The direct right of action (action directe) provided by art 18 MID allows injured parties to bring their liability claims directly against the insurer of the liable party. The liability insurer thereby serves as a singular point of contact for the injured party’s claims, inter alia covering claims against the owner, keeper, and the driver of a vehicle. If the original cause of the accident can be traced back to a defect in the hard- or software of the vehicle, the liability insurer will in turn be able to seek recourse from the producer on the basis of the harmonised product liability regime.306

2. Consequences of automatisation for road traffic liability Increasing automatisation at the same time reduces the level of control of the individual user – here in particular of the driver or keeper of an autonomous vehicle – and thus narrows the scope of fault-based liability in practice. ‘Even without legal analysis it seems obvious that this shift of control will upset established modes of cost attribution through the liability system.’307 The more vehicles themselves assume driving tasks, the less will human conduct play a role in causing harm, and the more defects in the design or manufacturing of the car will be relevant for attributing losses resulting from an accident. Given the self-learning nature of autonomous systems and the high level of interaction with internal and external data during operation, defects may come into being only after the vehicle has already been brought into circulation.308 Insofar as traffic accident victims are not protected by comprehensive risk-based liability, they will therefore in the future primarily attempt to seek compensation from the manufacturer of the vehicle in the absence of further alternative defendants. Proving a product defect (which is the injured party’s task under art 4 PLD) or establishing fault of some human involved in the accident can be extraordinarily difficult, though. In particular, due to the particularities of such systems

(fn 288) 77 f. For Norway, see, eg, the Norwegian Guide to Compensation (). 305 Directive 2009/103/EC of the European Parliament and the Council of 16 September 2009 relating to insurance against civil liability in respect of the use of motor vehicles, and the enforcement of the obligation to ensure against such liability [2009] OJ L 263/11. 306 On product liability for software defects, see G Wagner, AcP 217 (2017) 707. 307 G Wagner, Robot Liability, in S Lohsse/R Schulze/D Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (2019) 27 (39). 308 See, eg, BA Koch, Product Liability 2.0 – Mere Update or New Version? in S Lohsse/R Schulze/D Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (2019) 99 (101 ff, 111 ff).  






(especially in light of their increasing autonomy, complexity, opacity, openness and limited predictability), identifying someone’s conduct not only as a cause, but also as blameworthy (eg the keeper’s maintenance of the vehicle) will invariably face the challenge of weighing the impact of subsequent internal processes of the AI system itself (eg whether and to what extent it did or at least should have reacted to the actual maintenance).309 These challenges already arise with merely partially automated vehicles that are of primary practical importance today (and likely to remain relevant for quite some time still), but are exacerbated if at least one of the colliding vehicles is fully autonomous. If algorithmic and manual driving work together, for example, then – in particular by virtue of the complexity of the former and the interaction with the latter – the true cause of a traffic accident – technical flaw or human error – will often hardly be detectable for an injured party. However, relying on one rather than the other will be an important choice for the victim when seeking compensation, as liability for fault may be unlimited, whereas strict liability for the vehicle itself may be capped, but the former may be more challenging to establish. Traffic accident victims would thus often be faced with the problem of having no clear indication upfront as to who should be the proper addressee of their claims. They may consider investigating potential flaws of the driver (who may not have resumed control of the vehicle in due time, for example) or of the vehicle’s keeper (who may have abstained from installing crucial updates or otherwise failed to properly maintain the vehicle). Instead, they may suspect a possible defect in the vehicle itself and therefore address claims against its producer.310 All these theories require insight into the actual operation of the vehicle or its inherent features that may not be easily acquired by the victim, or only with the assistance of costly experts. This is already challenging with conventional vehicles at present, but the efforts needed to identify the proper addressee of a liability claim may become prohibitively onerous, particularly the more sophisticated the technology installed in the car is, thereby leading to serious gaps in the protection of injured parties.311 Strict liability triggered merely by the operation of the vehicle will obviously significantly relieve the victim of such complications.312 The vehicle’s keeper and the car’s liability insurer will typically be easily identified, considering also the

309 See above B.III.3(b). 310 There may, of course, also be cases where both the fault of the keeper (and/or of the driver/ user) and a defect of the vehicle jointly may have caused the accident and the ensuing losses. In such cases, all these targets of compensation claims will be jointly and severally liable. 311 See already above B.III.3(b). 312 Cf supra B.II.2. See also Karner (fn 255) 121 f.  


comprehensive system of car registration in place, and even if that should fail, compensation may instead be claimed from a guarantee fund as foreseen by art 25 MID. Once the victim is indemnified by these first addressees, the latter may then pursue recourse actions against others involved, acting as more sophisticated and less vulnerable players. The peculiar problems of fault-based liability in increasingly automated motor traffic show particularly clearly in those jurisdictions which have so far abstained from introducing strict liability,313 as victims not only need to show the mere involvement of the vehicle in the accident as under the latter regimes, but must also identify some specific human conduct that was flawed and thereby triggered the accident, despite the fact that the operation of an autonomous vehicle by definition will not depend on human steering. As explained above, establishing misconduct in such cases can lead to serious evidentiary problems.314 The behavioural patterns of autonomous systems are often only predictable to a limited extent and hence more difficult to control ex ante as compared to conventional technologies. Moreover, the ability to explain the operation of such systems is often also limited from an ex post perspective. Counter-balancing this by increasing the standard of care does not work if human conduct is no longer of any relevance at all.315 In the United Kingdom, therefore, the legislator intervened by introducing an insurance-centred regime with the Automated and Electric Vehicles Act 2018.316 As already emphasised, gaps in protection do not just emerge where a risk-based liability regime for motor vehicles is entirely absent, but also where a strict (risk-based) regime is restricted in scope. This is often the case in Europe, though. In particular, liability can be limited to the protection of non-motorised actors or to the compensation of personal injuries.317 The following analysis will therefore seek to clarify the most significant differences between the traffic liability systems currently in place in the Member States and the ensuing difficulties, focussing on the example of the hypotheticals mentioned above.

313 Supra at fn 241. 314 Supra B.III.3(b). 315 Cf supra at fn 242. 316 See also Engelhard/de Bruin (fn 288) 71. Infra C.I.4. 317 Supra B.V.2(e).


3. Differences between current motor vehicle liability regimes Generally, it can be presumed that the existing risk-based liabilities for motor vehicles in the Member States will be applicable to autonomous vehicles as well. The legal systems analysed do not distinguish between conventional and self-driving cars when it comes to determining their involvement in an accident.318 However, differences between existing strict liability regimes will also impact upon the handling of accidents involving self-driving cars.

(a) Subject of liability Strict liability regimes in the Member States already differ when it comes to determining which party should be liable. Most jurisdictions opt for a person who has the power to control the vehicle and/or who profits economically from its use – a person who is not necessarily the vehicle’s owner,319 whereas others focus more narrowly on the latter.320 In Germany321 and Austria322, for example, it is the ‘keeper’ (Halter) of a vehicle who bears its risks. In France, the loi Badinter targets both the vehicle’s gardien323 as well as its driver (if not the same person). The Dutch art 185 WVW mentions the owner as the liable person first, but only if there is no keeper (houder), who in essence is defined by art 1 para 1 lit o WVW as someone who has the lasting use of the vehicle. According to art 2054 para 3 CC, Italy focuses on the owner of the motor vehicle as the liable party instead, unless there is a usufructuary, or if someone acquired the vehicle subject to a retention of title. In Poland, liability is attributed on the basis of the property law concept of ‘independent possession’.324 If, however, the independent possessor passes on the use of the vehicle to a ‘dependent possessor’ – in particular a usufructuary or lessee – the latter will be liable instead. Thus, ultimately, it is the

318 As always, there are exceptions thereto as well. The 2017 reform of the German StVG, for example, doubled the caps on liability for highly or fully automated vehicles only. 319 Cf fn 234. 320 Cf supra B.V.2(a). 321 § 7 para 1 StVG. 322 § 5 para 1 case 2 EKHG. 323 Art 2 leg cit. This is understood as the person who had the use, control and direction of the vehicle (similar to the notion of the gardien under art 1242 Code civil). There is a rebuttable presumption that the vehicle’s owner is its keeper, see J-S Borghetti, Extra-strict liability for traffic accidents in France, 53 Wake Forest L Rev 2018, 274 f. 324 Art 436 § 1 KC. An independent possessor is a person who exercises factual control over the vehicle as an owner does (art 336 1st sent KC).  


person who is able to decide on time, place, form and means of use for the vehicle who is held liable.325 Alongside the keeper, in some legal systems the driver is also held liable on a strict basis. Apart from France, as already mentioned, this is true for Greece,326 and to a certain extent also for Italy (conducente).327 Considering the highly or fully automated vehicles discussed in this case study, this raises the issue, though, of whether it is even still possible to speak of a ‘driver’ if the driving is conducted entirely automatically.328 This becomes particularly important in legal systems where only the driver is liable under a no-fault motor vehicle liability framework. Such is the case in Spain, in particular, where the vehicle’s owner is only jointly liable with the driver if the former failed to take out the required liability insurance and cannot prove that the vehicle was stolen, if the owner would be liable for the fault of the driver under art 1903 Código civil329, or if the driver was the owner’s employee or otherwise authorised to use the car and committed a crime with it (art 120 para 5 Código penal).330 Leaving aside this rather exceptional case, however, the determination of the liable person may ultimately have no particularly significant impact upon the distribution of risks at the level of the immediate victim. It must be considered that compulsory liability insurance generally must be taken out to cover all claims arising out of ‘civil liability in respect of the use of the vehicle’ (art 3 para 1 MID), whether directed against the vehicle’s keeper, its owner, or the driver. For this reason, from the injured party’s perspective, the distinction is largely irrelevant. Given the action directe, she has a single point of contact in the form of the liability insurer.

325 See K Ludwichowska, Odpowiedzialność cywilna i ubezpieczeniowa za wypadki samochodowe (2008) 115. 326 Art 4 of the Law 3950 /1911 on Liability for Automobiles (with art 5 para 2 allowing the driver to escape liability if she can prove that even with utmost care she could not have known of a causal defect in the vehicle). Under this provision, in addition to the driver, also the keeper of the car as well as its owner (if not keeper herself) are liable, with the owner’s liability capped at the value of the vehicle. 327 A Scarso/M Mattioni, Road Traffic Accidents in Italy, in E Karner (ed), Liability for Road Traffic Accidents (forthcoming) no 6 ff. 328 For France, see eg J-S Borghetti, L’accident généré par l’intelligence artificielle autonome, Le Droit Civil à l’ère numérique, Actes du colloque du Master 2 Droit privé général et du Laboratoire de droit civil (2017) accessible at . 329 This includes parents or guardians, employers, or owners of an educational facility who cannot rebut the presumption of fault in supervising the injurer. 330 Art 1 subs 3 LRCSCVM.  


(b) Risks covered Significant differences exist among European jurisdictions as regards the trigger of their respective traffic liability regime.331 Perhaps the most comprehensive risk-based liability framework for traffic accidents can be found in French law. The loi Badinter is applicable to all traffic accidents in which a terrestrial motor vehicle is (merely) involved (impliqué), with the notions of accident and involvement being interpreted very broadly by the courts and not requiring causation in particular.332 The scope of the specific statutory road traffic liability regime is also fairly broad in Germany and Austria, targeting (with almost identical wording) accidents ‘in the course of the operation of a motor vehicle’.333 The crucial question in those countries is whether the victim suffered harm because a typical risk of a motor vehicle’s operation materialised, related in particular to the speed of the relevant vehicle, its relative weight, and also to traffic circulation in general.334 Unlike in France, however, mere involvement without causation does not suffice. None of the applicable definitions of a motor vehicle in France, Germany, or Austria exclude automated vehicles.335 It should be emphasised that all three legal systems decline to differentiate between different types of accidents, and all three cover both personal injury and property damage. Their risk-based liability regime is thus applicable to both case variations above. The pedestrian of Hypothetical 1 can therefore seek compensation for both her personal injuries as well as her property loss from the keeper of the self-driving car irrespective of fault. In Hypothetical 2, both keepers can sue each other on the basis of the applicable strict liability regime in respect of their reciprocal property damage, and

331 See also infra C.II.2(b). 332 Inter alia, no physical contact with the vehicle is required, even vehicles parked in a car park are included – however, there must be some relation to the ‘travelling function’ of the vehicle, see Borghetti, Wake Forest L Rev 2018, 272 f. 333 Germany: § 7 para 1 StVG (‘bei dem Betrieb eines Kraftfahrzeugs’); Austria: § 1 EKHG (‘beim Betrieb eines Kraftfahrzeugs’). 334 See eg M Spitzer, Betrieb und Betriebsgefahr im EKHG, in S Perner et al (eds), Festschrift für Attila Fenyves (2013) 331 ff. 335 For Germany, see T Hammel, Haftung und Versicherung bei Personenkraftwagen mit Fahrassistenzsystemen (2016) 206 f; for Austria M Harnoncourt, Haftungsrechtliche Aspekte des autonomen Fahrens, (österr) ZVR 2016, 546 (548 f); for France J-S Borghetti, L’accident généré par l’intelligence artificielle autonome, Le Droit Civil à l’ère numérique, Actes du colloque du Master 2 Droit privé général et du Laboratoire de droit civil (2017) accessible at . This may also apply for Italy, although it is emphasised in the Traffic Law that motor vehicles are machines of any kind, circulating on roads driven by men, see Scarso/Mattioni (fn 327) no 86 ff.  








the passengers can pursue their personal injury claims on the same basis. There are exceptions regarding the injuries of the drivers, which will be dealt with immediately below. Unlike in France, however, it should be noted that the keeper’s strict liability is limited by caps both in Austria and in Germany (§ 12 StVG, § 15 EKHG), so that losses exceeding these limits can only be claimed under a fault theory.336 In Germany, the liability caps were significantly increased recently, but specifically (and exclusively) for accidents with autonomous motor vehicles.337 By contrast, several Member States limit their respective motor vehicle liabilities to accidents between motorised and non-motorised parties. This is true, for example, in the Netherlands, where the relevant risk-based liability only applies to accidents involving a motor vehicle driving on a public road on the one hand and a non-motorised traffic participant (such as a pedestrian or cyclist) on the other.338 Damage caused to other moving or parked vehicles, or to persons or property transported by them is explicitly excluded from the strict liability regime, as are accidents involving stray animals.339 Polish law also explicitly provides that damage arising out of a collision of motor vehicles can only be claimed on the basis of fault liability. This not only affects losses caused by the vehicles to each other, but also damage caused to persons transported gratuitously (such as friends, colleagues, guests, or hitchhikers).340 Under both Dutch and Polish law, the strict motor vehicle liability provisions therefore apply to Hypothetical 1. The keeper (in the Netherlands) or vehicle’s independent possessor (as defined by Polish law)341 is thus liable to the pedestrian irrespective of any misconduct in the operation of the car. In Hypothetical 2, by contrast, strict road traffic liability is excluded or seriously restricted. The keepers (or independent possessors) of the vehicles are protected neither in the Netherlands nor in Poland by the local risk-based liability. For these injured parties, only

336 This is also the reason why compensation for losses exceeding such limits will typically be sought on the basis of fault liability in such countries. 337 Supra fn 318. 338 M Slimmen/WH van Boom, Road Traffic Liability in the Netherlands, Conference Paper (2017, ) no 5. 339 Art 185 para 3 WVW. But see the practice of Dutch insurers to nevertheless indemnify passengers on a no-fault basis as cited by Engelhard/de Bruin (fn 288) 73 (fn 188). 340 Art 436 § 2 KC reads in translation: ‘In the case of a collision between mechanical means of transport, the said persons may only bring claims against each other for compensation of their respective damage according to general principles. Likewise such persons shall only be liable according to general principles for damage caused to those whom they carried gratuitously.’ 341 Supra at fn 324 f.


the general, fault-based liability rules remain (apart from potential claims based on product liability). At least in Poland, passengers in one car can rely on the strict liability against the independent possessor of the respective other vehicle, but not of the car they themselves rode in if travelling gratuitously. Other European legal systems provide for similar or other restrictions on strict liability for motor vehicles. Thus, for example, in Greece, the risk-based regime solely applies to damage caused outside of the vehicle (thereby excluding damage to passengers and items transported), and also harm resulting from collisions is only subject to fault liability.342 In Spain, only personal injury is recoverable.343 Therefore, the injured pedestrian of Hypothetical 1 can claim compensation for her injuries regardless of any fault on the driver’s part both in Greece and in Spain. However, the damage to the bicycle is only covered by the Greek, but not the Spanish strict liability regime. Injuries inflicted upon the drivers and passengers in Hypothetical 2 fall under the Spanish risk-based liability, but not in Greece, and any property damage must be pursued on a fault theory in both countries. Such limitations may be traced back to the underlying idea that strict liability should be limited to losses incurred by particularly vulnerable parties (such as non-motorised participants in traffic) or to the infringement of legal interests that are deemed worthy of special protection (such as the life or bodily integrity of a person).344 Reducing the scope of liability to such cases attempts to strike a compromise between improving the position of victims on the one hand and avoiding an excessive burden of liability (leading inter alia to an increase in insurance premiums) of potentially liable persons on the other. While those incurring losses that are excluded can still resort to fault liability (eg of the driver) in traditional traffic accident scenarios, this is not equally possible in cases involving self-driving cars in light of the limited role of human conduct and the ensuing evidentiary problems due to the particular characteristics of AI systems.345 Therefore, any such limitations of current strict liability regimes lead to gaps in protection and leave certain victims (only in accidents with autonomous vehicles) empty-handed (unless they have a valid claim in product liability against the manufacturer of the vehicle). This hardly seems appropriate, though, since this runs afoul of a fundamental principle underlying strict liability – those who are permitted to use parti-

342 Art 4 and 10 of Law 3950/1911 on Liability for Automobiles. 343 Art 1 subs 1 para 3 LRCSCVM. 344 Cf Koch/Koziol (fn 116) 410 (no 65). Cf also art VI-3:205 para 1 DCFR, excluding property damage to the vehicle itself as well as to freight from the proposed strict liability for motor vehicles. 345 Cf supra at B.III.3(b).


cularly dangerous objects for their own interest and benefit should also bear all consequences should such danger materialise.346

(c) Protection of the driver/user? In some legal systems, the strict liability of the keeper of a motor vehicle does not extend to its driver, which is in line with art 12 para 1 MID, requiring the mandatory liability insurance to ‘cover liability for personal injuries to all passengers, other than the driver, arising out of the use of a vehicle’ (emphasis added).347 In Austria and Germany, for example, someone injured while ‘active in the operation of the motor vehicle’ cannot claim compensation from its keeper (§ 3 no 3 EKHG; § 8 no 2 StVG).348 In Poland, by contrast, a driver who was not at the same time the possessor of the driven vehicle349 would have a strict liability claim against the latter under art 436 KC unless the accident was caused exclusively due to the driver’s own fault.350 This raises the question how this would apply to the user of a self-driving car, bearing in mind that she will not be ‘driving’ the vehicle herself anymore. If she were still to be deemed a ‘driver’ or equivalent thereto (as the primary person in charge of operating the vehicle, even though this would not involve much more than starting the engine and entering the destination), any injuries sustained in the self-driving car would not be subject to the strict liability of its keeper in Austria or Germany. Instead, she would have to pursue any claims on the basis of fault (or on a product liability theory against the manufacturer) in those jurisdictions. In Hypothetical 2 therefore, neither of the drivers could sue the keeper of the vehicle they drove themselves, but only the keeper of the respective other car on a risk-based theory. Were the driver of Hypothetical 1 also injured in that accident, she would have no strict liability claim at all. This is highly unsatisfactory in light of the difficulties in establishing fault in such scenarios.351 If she were deemed a passenger instead, she would be covered in both scenarios (and under the MID).

346 See, eg, Koch/Koziol (fn 116) 412 (no 71). 347 The keeper is protected under the MID, though, as long as she is merely a passenger or a person outside of the insured vehicle; see ECJ 30.6.2005 C-537/03, Candolin v Vahinkovakuutusosakeyhtiö Pohjola, ECLI:EU:C:2005:417 (passenger); 14.9.2017 C-503/16, Delgado Mendes v Crédito Agrícola Seguros, ECLI:EU:C:2017:681 (outside). 348 Similarly in Estonia according to § 1057 para 4 LOA. 349 Cf supra at fn 324 f. 350 Likewise eg in France, cf Borghetti, Wake Forest L Rev 2018, 275 f. 351 Cf supra at B.III.3(b).  




In the UK, this problem was specifically addressed by the Automated and Electric Vehicles Act 2018:352 the motor liability insurance of the vehicle owner must also compensate harm suffered by the driver of the vehicle involved in the accident if it is attributable to a defect in the automated vehicle.

(d) Exclusion of liability in case of a technical defect? With respect to autonomous motor vehicles specifically, particular significance attaches to how a technical failure (resulting from a hardware or software defect) plays out in the context of strict motor vehicle liability. In order to assess whether accidents involving autonomous vehicles are subject to risk-based liability, the question thus arises whether a technical failure falls under one of the relevant defences to liability – which are, again, regulated very differently across Europe. Several Member States provide for a defence in cases of force majeure. Thus, for example, in Germany, the keeper is not strictly liable if the accident was caused by höhere Gewalt.353 In case multiple vehicles caused the accident (such as in Hypothetical 2), recourse actions amongst their keepers as well as claims by the vehicle’s owner (who is not the keeper at the time) are further excluded if the accident was due to an ‘unavoidable event’ (unabwendbares Ereignis, § 17 para 3 StVG), which presupposes that both the keeper as well as the driver exercised utmost care under the circumstances.354 In Austria, though, the latter defence applies with very similar wording to all accidents falling under the EKHG, not just to collisions (and therefore also in Hypothetical 1).355 However, in both countries it is explicitly provided that there is no unavoidable event if the accident is caused by a defect inherent in the vehicle or a failure of its mechanisms. Therefore, the defence does not apply in cases where hard- or software of an autonomous vehicle are flawed or fail for other reasons (eg if a person crossing the road is not identi-

352 See infra C.I.4. 353 § 7 para 2 StVG. Similarly, eg, in Estonia: § 1057 para 3 LOA. 354 § 17 para 3 StVG defines this as an ‘event which was neither due to a defect of the vehicle nor a mechanical failure. An event is only considered unavoidable where both the keeper and the driver of the vehicle have exercised utmost care in the circumstances. …’ According to § 17 para 4 StVG, the entire section (and therefore also said exclusion) applies correspondingly in cases of a vehicle colliding with a trailer, an animal, or a train. 355 § 9 para 1 EKHG. In Austria, however, keepers are still liable in cases of an ‘extraordinary operational risk’ (außergewöhnliche Betriebsgefahr, § 9 para 2 EKHG), when the inherent risk of the motor vehicle is amplified by extraordinary circumstances (eg passengers injured in the course of emergency braking, or if the car is suddenly pulled over in order to avoid a collision). In such cases, liability can even arise despite force majeure, cf eg M Schauer, in M Schwimann/G Kodek (eds), ABGB-Praxiskommentar4 (2016) § 9 EKHG no 45.


fied by the algorithm).356 The same is true, for example, in Spain,357 the Netherlands,358 and in Poland.359 France goes even further by excluding the force majeure defence altogether (art 2 loi Badinter). Allocating the risk of vehicle defects and sudden technical failures to the keeper’s sphere is of considerable significance in cases involving autonomous vehicles as discussed here. With decreasing influence of human conduct, accidents will increasingly be attributable to technical failures or design or construction defects. It is as yet entirely unclear whether courts would determine, for example, a complete network failure, cutting off autonomous vehicles from backbone data while on the road,360 as force majeure or as an unavoidable event, bearing in mind that such vehicles should be prepared for such situations, if only by providing for safe emergency stops in such cases. Italy deviates significantly from the aforementioned positions in such cases. Although the scope of the Italian risk-based liability for motor vehicles is relatively comprehensive, cases of an unexpected technical failure (caso fortuito) like a tyre blowout are typically excluded (unless there are construction defects or flaws in the maintenance of the car).361 It is therefore also questionable how the Italian courts would assess the two case hypotheticals posed here if the accident was caused by a technical defect in the autonomous vehicle that manifests itself unexpectedly. Resorting to product liability instead will not always help, since it is up to the victim to prove that the defect was already inherent in the vehicle when it was put into circulation.362

356 If there was a defect inherent in the vehicle from the beginning, the keeper may seek recourse under the product liability regime, though. 357 An unforeseen and unavoidable event (fuerza mayor) only leads to an exclusion of this liability where the event is attributable neither to a defect in the vehicle, nor to an unforeseeable breakdown or any kind of failure of its parts or mechanism; see Pacanowska, in W Ernst (ed), The Development of Traffic Liability (2010) 183. 358 Hoge Raad 16 April 1942, NJ 1942/394; see Slimmen/van Boom (fn 338) no 34, 79. 359 See Machnikowski/Śmieja, in Olejniczak (ed), System Prawa Prywatnego. Prawo zobowiązań. Część ogólna (3rd ed 2018) 651. 360 Cf the scenario outlined by T Evas (fn 288) 25 f, and scenario 3 by Engelhard/de Bruin (fn 288) 51. 361 See Scarso/Mattioni (fn 327) no 34. 362 On liability for automated motor vehicles in Italy, see also A Albanese, La responsabilità civile per i danni da circolazione di veicoli ad elevate automazione, Europa e diritto privato 4/2019, 1011 ff. But see the Dutch ‘backup’ provision of art 6:173 BW, which foresees strict liability for defects (inter alia) of a vehicle that fall outside the scope of the PLD regime: Slimmen/van Boom (fn 338) no 40.  




(e) Relevance of contributory negligence Significant differences between European jurisdictions also exist in relation to the extent to which any misconduct or carelessness on the part of the victims themselves impacts on their compensation claim.363 In some jurisdictions, special rules favouring accident victims apply.364 Thus, for example, in Germany, only negligence of a victim older than ten years of age will be held against her, while the ordinary age limit for considering contributory negligence is seven.365 Perhaps the most generous regime for victims is provided by the loi Badinter in France with respect to personal injuries: only faute inexcusable of the victim will be considered,366 and even that only if it was the cause exclusive de l’accident (in which case the victim therefore has no claim at all). In practice, this means that contributory negligence is almost never considered but for cases of suicide (attempts) or similarly targeted conduct aiming at or at least accepting the self-infliction of harm.367 However, this applies neither with respect to property losses (art 5 leg cit) nor if the victim was the driver (art 4 leg cit), to whom hence ordinary rules of contributory negligence apply (also with regard to the latter’s bodily harm). In other legal systems, contributory conduct is sometimes only considered if it amounts to intent,368 or if it can at least be classified as gross negligence (eg in Sweden):369 sometimes no such restrictions apply, so that also merely simple negligence or even a risk within the victim’s sphere for which she would be strictly li-

363 See U Magnus/M Martín-Casals (eds), Unification of Tort Law: Contributory Negligence (2004). 364 Cf also the Dutch approach described by Slimmen/van Boom (fn 338) no 28 ff. 365 §§ 254 and 828 BGB read together. This does not apply if a child over the age of seven acted intentionally. 366 The courts interpret that as ‘la faute volontaire d’une exceptionnelle gravité exposant sans raison valable son auteur à un danger dont il aurait dû avoir conscience’ (eg Cour de cassation 28.3.2019, nos 18-14.125 and 18-15.855 : there is no ‘faute inexcusable’ if cyclists ride without lights or protective clothing on the main carriageway, rather than on a cycle path; cf ). Cf also art 29bis § 1 para 6 of the Belgian Loi du 21 novembre 1989 relative à l’assurance obligatoire de la responsabilité en matière de véhicules automoteurs, under which only ‘victimes âgées de plus de 14 ans qui ont voulu l’accident et ses conséquences’ are unable to rely on the liability provided for. 367 For victims under 16 or over 70 years of age, as well as for those suffering at least 80 % disability, art 3 al 3 of the loi Badinter expressly limits the relevance of their contributory conduct of such particularly vulnerable victims even further to merely intentional self-harm (faute intentionnelle). 368 § 1057 para 3 of the Estonian LOA. 369 § 12 Trafikskadelagen (Traffic Damage Act).  




able vis-à-vis third parties suffices to reduce or even exclude her claim under the strict liability regime (eg in Austria).370

(f) Joyriders and hacking A certain significance for autonomous vehicles could also attach to the common defences available in cases involving ‘joyriders’ and other unauthorised drivers or passengers. Many legal systems provide explicit liability exclusions or restrictions for the case that the vehicle was used without the knowledge or consent of its keeper (provided that the latter had not facilitated this by her own careless conduct).371 While the control over conventional cars could so far only be assumed by physical action on site, such a hostile takeover could, in the future, happen from a distance via hacking, without the person taking over control even being in close proximity to (let alone in) the autonomous car. If this were deemed equal to the more traditional forms of carjacking, liability of the keeper would be excluded under the aforementioned rules unless she failed to install security updates or take similar measures within her control. The range of further options an end-user or keeper might have to prevent an unauthorised takeover is presumably more limited than with conventional cars, where it may suffice not to leave doors open or car keys behind. From the perspective of the otherwise liable party, this seems reasonable. However, in order to secure the protection of injured parties, it would be crucial to ensure that they could at least re-route their claims in such cases to a compensation fund (cf art 25 MID).

4. Excursus: A specific insurance solution for autonomous motor vehicles As already noted, English law does not recognise a risk-based liability in the area of motor vehicle accidents; instead, only a fault-based regime is applied.372 Apparently aware of ensuing potential gaps in protecting victims injured by autonomous vehicles, the legislature has meanwhile introduced a specific insurance solution for such cases. The following provides a brief overview thereof. The English regulatory approach is outlined by the new Automated and Electric Vehicles Act 2018 (AEVA). The Bill for this Act was first introduced into Parlia-

370 Koziol/Apathy/Koch (fn 184) no A/2/63 ff; H Koziol, Österreichisches Haftpflichtrecht I (4th ed 2020) no C/9/51 ff. 371 Eg in Italy (art 2054 para 3 CC); Germany (§ 7 para 3 StVG); or Austria (§ 6 para 1 EKHG). 372 See above at fn 241.




ment with explicit regard for the fundamental importance of cars to the transport system over the past century and the expectation that ‘[o]ver the next decades, cars will change more than they have for lifetimes. In those changes, it is vital that we consider the scale of the opportunities that now present themselves, how those opportunities may be shaped and, indeed, how they will need to be constrained.’373 In light of these expectations, it became obvious that adjustments to the liability and insurance regime were necessary. Those alterations include the introduction of compulsory insurance protection for accidents caused by vehicles in self-driving mode which would currently not be covered: ‘Solving the question of how automated vehicles can be insured is essential if they are to become a feature on British roads.’374 According to sec 2 para 1 AEVA, the insurer is liable (sic) for damage suffered by the insured or any other person as a result of an accident caused by an insured automated vehicle driving itself375 on a road or other public place in Great Britain.376 The insurance industry thereby effectively indemnifies victims as if these fell under a strict liability regime. Furthermore, such cover cannot generally be limited or excluded by the insurer, whether under the contract of insurance or otherwise.377 The owner of the vehicle herself is only liable exceptionally as a backup (sec 2 para 2 AEVA). Compensable harm under this statutory regime includes death, personal injury, and property damage, but does not extend to damage to the automated vehicle itself, or to cargo or property in the custody or control of the insured (sec 2 para 3 AEVA). Liability therefore clearly prioritises protection against bodily harm and the protection of third parties but does not extend to the insured party’s own property or property entrusted to the insured for transportation.

373 The Minister for Transport Legislation and Maritime, Hansard, HC Deb (23.10.2017), vol 630, col 60. 374 L Butcher/T Edmonds, Automated and Electric Vehicles Act 2018: Briefing Paper (CBP 8118) (2018) 5; cf ibid 8. 375 According to sec 8 para 1 lit a AEVA, ‘a vehicle is “driving itself” if it is operating in a mode in which it is not being controlled, and does not need to be monitored, by an individual’. 376 This part of the Act extends to England and Wales and Scotland, but not to Northern Ireland. 377 Sec 2 para 6 AEVA. Sec 4 AEVA foresees limited exceptions thereto in respect of accidents which occur as a direct result of prohibited software alterations made by or with the knowledge of the insured, or the latter’s failure to install software updates that are ‘safety-critical’ as the insured should at least have known. An update is ‘safety-critical’ according to sec 4 para 6 lit b ‘if it would be unsafe to use the vehicle in question without the updates being installed’.


Liability is to be reduced under the same terms as provided by the Law Reform (Contributory Negligence) Act 1945 (sec 3 para 1 AEVA). The insurer is relieved from liability if the accident was wholly due to the negligence of the person in charge of the vehicle, ‘allowing the vehicle to begin driving itself when it was not appropriate to do so’ (sec 3 para 2 AEVA). The insurer’s liability under the AEVA is without prejudice to other liabilities, and the insurer may seek recourse on such other bases. The ultimate distribution of losses caused by accidents with self-driving cars has thus not been significantly altered (at least in theory). The mechanism of insurance is used to secure clear responsibilities as well as swift and cost-efficient payments to injured persons,378 but the ordinary mechanisms of tort law liability can then operate to redistribute the burdens.379

5. Findings of this use case An overview of existing European systems of road traffic liability highlights significant differences. Many of the Member States already now rely on some form of strict (risk-based) liability to deal with the large number of traffic accidents. Often, however, there are more or less extensive limitations and/or exceptions to this risk-based liability, which (leaving aside any product liability) continue to make it necessary to fall back on fault-based liability and its associated uncertainties as well as evidential difficulties in traffic accident cases. For the injured party, this can lead to manifest gaps in protection, particularly in future semi- or fully automated traffic.380 One thereby needs to bear in mind that increasing automatisation will lead to a sharp decline in the significance of misconduct on the part of a (motorised) participant as a possible basis for liability. All this is exacerbated by the fact that identifying a cause of the accident which leads to a person being held liable is subject to the complications already listed above.381 As has been mentioned there, proving causation may be easier for victims of a traffic accident if all potential causes are covered by a strict liability regime, since typically only their involvement in the accident will be the trigger of liability, but not the exact inter-

378 L Butcher/T Edmonds (fn 374) 10. 379 See also House of Commons, Automated and Electric Vehicles Bill (Bill 112): Explanatory Notes (2017) no 12. 380 On the significance of fault-based liability for autonomous systems – and in particular for self-driving motor vehicles – see already above B.III.3. 381 Supra B.II.


nal technical course of operation that led to the vehicle’s collision.382 If it is fault liability upon which the victim has to base her claim, however, she will have to identify some wrongdoing within the sphere of the defendant, even though an autonomous vehicle will not (at least not directly) be operated by a human (beyond starting the engine and entering the destination). Only the most important differences were summarised above. On the one hand, these concern the range of victims of traffic accidents eligible to claim compensation under such liability regimes: sometimes, risk-based liability is entirely or at least partially excluded in collision cases, protecting only or primarily non-motorised parties (eg in Greece, the Netherlands, and Poland). On the other hand, the addressees of claims also vary – in Spain, for example, it is de facto only the driver of the vehicle, in France, it is both driver and keeper, and in other jurisdictions, it is just the keeper. The availability of the force majeure defence or the relevance of contributory conduct by the victim for her claim varies as well from jurisdiction to jurisdiction, leaving potential gaps in liability in at least some of them. Even if the applicable traffic liability regime grants a claim for compensation, the range or extent of recoverable losses differs throughout Europe, and not merely the assessment of the various heads of damage that are deemed compensable.

II. Autonomous lawn mowers and harvesters

1. Autonomous lawnmowers

(a) Fault liability

Hypothetical 3. A fully autonomous lawnmower fails to identify a stone on its path, which is consequently hurled through the air and hits a passer-by. It is conceivable that this can be attributed to a failure in the lawnmower’s AI-supported controls.

Typically, no special liability regimes apply to household and work appliances, so autonomous lawnmowers will only be subject to fault-based liability in most jurisdictions. In some European countries, an increased (stricter or strict) liability may apply instead, though. Before discussing this further, fault-based liability will be considered first. Fault-based liability could apply, for example, where a user/keeper of an autonomous lawnmower fails to maintain it properly, uses it under inappropriate circumstances, or breaches a duty to monitor its operation. In this regard, de-

382 NTF Report (fn 6) 21.


pending on the jurisdiction, courts may tighten the duty of care expected from the user of the lawnmower ex post, though, as for example German case law shows, raising the bar so high that the defendant will more likely be deemed at fault.383 This is just one of the variables of fault liability mentioned above that make the outcome of such cases difficult to predict – apart from tweaking the standard of care (typically in favour of the victim),384 courts may also adjust the standard or the burden of proving fault.385 Misconduct may have to be proven by the injured party (which was at least historically the default rule), but in some jurisdictions it is the defendant instead who needs to prove that she was not to blame. Apart from such a general reversal of the burden of proving fault, there may be more selective instances where courts decide to shift that burden from the claimant to the defendant, eg in cases involving an increased dangerousness linked to the activity at stake – an approach often used in cases involving animals or constructions386 – but whether and how this will apply to the cases considered here will most likely not be answered uniformly throughout Europe. While there is at least some experience with traditional lawnmowers already, it is even more difficult to foresee how this mixed toolbox of handling the fault requirement will be used in future cases involving autonomous lawnmowers. After all, the peculiar features of such AI-based systems are even harder to predict. Just think of the chances of a conventional lawnmower hitting a stone, which may fly off in a curve and ultimately hit a car or passer-by. A conventional lawnmower is operated and therefore controlled by a human being, leaving that person at least with theoretical possibilities to assess the situation, detect that stone, and adjust her conduct accordingly if needed (as subsequently expected by the courts). Autonomous lawnmowers, on the other hand, will (at least ultimately) be marketed as being able to handle all challenges of any area on which they are supposed to operate, including the detection of obstacles such as stones, while not leaving much room for human interference – after all, they will be designed and marketed specifically to make human participation in the process unnecessary, relieving that person in particular of any duties to monitor the areas of the lawn where the mowers are in operation. This may make it more difficult for the

383 German Supreme Court (Bundesgerichtshof, BGH) 28.11.2002 III ZR 122/02 VersR 2003, 1274 (hand-held mower used in the vicinity of cars, one of them damaged by stone hurled by the mower: worker operating the mower should have searched the area for stones, kept more distance to cars, sealed off the area before mowing, etc); 4.7.2013 III ZR 250/12 NJW-RR 2013, 1490 (similar facts: in addition, the worker should have used a mobile protective wall). 384 Supra B.III.2(b). 385 Supra B.III.2(d). 386 See the text accompanying fn 112 above.


injured person to explain to the court how the defendant should have acted differently, and whom to blame for such misconduct. Since the production, keeping, and use of lawfully marketed autonomous lawnmowers will not be deemed misconduct per se, identifying any actual wrongdoing will depend on concrete knowledge of the operations of such machines, and of how the actual defendant could have influenced the turn of events in order to prevent the harm that was inflicted upon the claimant. The peculiar features of AI-driven lawnmowers will therefore make it particularly challenging for victims to succeed in proving some wrongdoing as the cause of their harm. It is hardly possible at present to predict which concrete duties European courts will impose upon operators of autonomous systems should their use lead to an accident, but it is hard to imagine that these will require human interaction with fully autonomous lawnmowers while in operation. Their keepers or operators may therefore not be liable at all in fault, unless they used it on terrain where it should not be operated (eg with a steeper slope than permissible), under inappropriate weather conditions, or otherwise violated the instructions of using the machine. The significant uncertainty in applying fault-based liability just described387 raises the concern that courts in Europe will decide our Hypothetical 3 in very different ways, or agree in denying liability altogether. Furthermore, since the ultimate user of an autonomous lawnmower may not have had many choices of alternative conduct, other possible defendants may come into play such as the keeper of the machine (who need not necessarily be the same person as its user) who inter alia has to keep it updated, or those providing firmware updates, GPS data, etc (leaving aside the manufacturer per se as outside the scope of this study). Not only will the claimant therefore have to know what exactly happened and why (which will already be a major challenge subject to the evidentiary problems mentioned earlier388), but also who would really have had the possibility to alter the course of events by actively intervening. Depending upon the applicable prescription period, it may be necessary to take the procedural (and therefore costly) risk of suing more than one party.

(b) Strict liability In view of such complexities of fault-based liability, it may therefore be significantly easier for the injured party if the user or keeper of the autonomous lawn-

387 See more generally supra B.III.3. 388 Supra B.II.


mower were subject to a risk-based liability, on which basis they can be held responsible for an accident during its operation regardless of any fault. In those European jurisdictions where risk-based liability is only introduced by singular, tailor-made statutes – as so far eg for motor vehicles, railways, aeroplanes, or nuclear installations389 – one would have to wait for the legislator to intervene, which will typically not be the case before serious incidents have already occurred. This is true, for example, in Germany, Switzerland, Portugal, or Spain. In Austria, on the other hand, there may be a slight chance that courts would draw an analogy to an existing strict liability regime (which is ruled out in the jurisdictions named previously).390 Nevertheless, we deem it unlikely that Austrian courts would take that step, considering in particular that autonomous lawnmowers (at least those for household use) do not pose a risk of harm comparable to the instances of strict liability already legislated391 and that such an analogy was already rejected for a conventional lawnmower that hurled a stone at high speed, injuring a passer-by.392 However, it ultimately remains to be seen how much weight courts would give to the particular risks of an autonomous counterpart in light of its distinct features. Those jurisdictions that have a general strict liability clause linked to dangerous objects (such as Croatia, the Czech Republic, Hungary, or Poland)393 may not necessarily qualify autonomous lawnmowers as sufficiently dangerous to meet that definition. This seems highly questionable in particular in those jurisdictions where a high degree of risk is required, such as an ‘extraordinarily dangerous operation’, an ‘extra-hazardous activity’, or the like. However, the level of dangerousness required to trigger such liability differs from jurisdiction to jurisdiction, and, moreover, depends on the value judgements of the relevant courts in light of the circumstances of the cases before them. Thus, for example, the Slovenian Supreme Court – in marked contrast to the Austrian case law just mentioned – already deemed a conventional lawnmower sufficiently dangerous as to trigger the general clause of art 150 OZ.394 In contrast, the question of whether the Italian lia-

389 Cf supra B.V.2(b). 390 Supra B.V.2(c). Similarly, also Norway leaves more discretion to judges to develop risk-based liability: B Askeland, Norway, in H Koziol (ed), Basic Questions of Tort Law from a Comparative Perspective (2015) no 2/101 ff. 391 The Austrian Supreme Court (Oberster Gerichtshof, OGH), for example, refused to apply motor vehicle liability by analogy to go-carts, denying a comparable degree of risk: OGH 30.3.2000 2 Ob 84/00t ECLI:AT:OGH0002:2000: 0020OB00084. 00T.0330.000. 392 OGH 19.12.1996 2 Ob 2416/96z ECLI:AT:OGH0002:1996:0020OB02416.96Z.1219.000. 393 Supra B.V.2(b). 394 Vrhovno sodišče republike Slovenije (VSRS, Slovenian Supreme Court) 12.1.2017, I Ips 295/2016, ECLI:SI:VSRS:2017:II.IPS. 295.2016, reported by G Dugar, Slovenia, in E Karner/  


bility for dangerous activities in art 2050 CC can be applied to a (conventional) lawnmower remains as yet unresolved.395 General strict liability regimes will typically only be applicable if they are very broadly conceived, and if they encompass almost all risks related to things. This is, in particular, true of the French liability for things (gardien liability).396 The Cour de cassation thus held the keeper of a conventional lawnmower liable under (what is now) art 1242 CC where someone injured himself whilst it was in operation.397 This would also be true in Belgium, where art 1384 CC is in principle applicable to defective household and work appliances,398 but only if the lawnmower was defective.399

(c) Summary In summary, one can conclude that harm caused by an autonomous lawnmower will mostly fall only under a fault-based liability regime, though with significant variations. This uncertainty relates, as discussed, to the prerequisites for liability, to the burden of proof, and above all also to the choice of defendant. Few jurisdictions may impose strict liability instead, as is relatively certain for France with its liability for things (gardien liability). Whether other legal systems would extend their risk-based liabilities (at least exceptionally) to autonomous lawnmowers can hardly be predicted, considering the opposing examples of Austrian and Slovenian law.

2. Autonomous combine harvesters

(a) Fault liability

Hypothetical 4. An autonomous combine harvester crosses a public street and thereby causes a road traffic accident.

BC Steininger (eds), European Tort Law 2017 (2018) 579 ff. Art 150 OZ reads: ‘The holder of a dangerous object shall be liable for damage therefrom; the person involved in the dangerous activities shall be liable for damage therefrom.’ 395 Corte di Cassazione 9.5.2019, N 12278 (ECLI:IT:CASS:2019:12278CIV): blade of a mower, riskbased liability barred for procedural reasons. See also supra at fn 118. 396 See above at fn 256. 397 Cour de cassation 19.6.2003, N 01-17575. 398 Cour de cassation de Belgique 18.12.2008 C.07.0424.F-C.07.0433.F (ice machine); 18.10.2013 C.12.0457.F (washing machine). 399 Cf supra B.V.2(b)(3) at fn 259.  


Hypothetical 5. An autonomous combine harvester leaves its designated work area and thereby causes an accident on an adjacent field.

Hypothetical 6. A child is injured while running past the blades of an autonomous combine harvester that is parked (a) on a street or (b) in a field, but is not in operation.

In all three hypotheticals, fault-based liability can be considered. However, in light of the significant differences between European jurisdictions already mentioned,400 the outcomes of these cases will likely vary from country to country.401 In this regard, reference can be made to the above analysis of cases involving autonomous lawnmowers.402

(b) Strict motor vehicle liability When it comes to strict, risk-based liability, noteworthy differences exist between autonomous lawnmowers and autonomous combine harvesters. Unlike the former, autonomous combine harvesters are more likely to fall under a risk-based liability regime in Europe already today. In this respect, a central question is whether a strict liability regime is in place for motor vehicles, whether this regime also applies to autonomous motor vehicles,403 and whether this can be extended to also include agricultural vehicles such as autonomous combine harvesters. After all, these are undoubtedly motorised vehicles. However, this is rarely the only criterion for applying strict liability.404 The strict rules governing motorised traffic are typically tailored to vehicles that are intended to be used on roads, whereas autonomous combine harvesters are machines for use in fields which are only exceptionally driven on roads, if at all (eg for travelling between the farm or garage and the field).

400 Supra B.III.2. 401 Cf for Spain, Tribunal Supremo 3.3.1998, STS 1435/1998, ECLI:ES:TS:1998:1435 (fatal accident involving harvester); for Ireland, Cosgrove v Ryan & The Electricity Supply Board [2008] IESC 2 (contributory negligence of a harvester driver running into a power cable), and for Bulgaria, Supreme Court of Cassation, no 503/21-07-2010, III CD (harvester cutting off a leg, vicarious liability), reported by C Takoff/V Tokushev, Bulgaria, in H Koziol/BC Steininger (eds), European Tort Law 2010 (2011) 66. 402 Supra C.II.1(a). 403 See supra C.I. 404 This is probably true for Croatia’s art 1068 COA. Cf M Grgić, Kroatien, in M Buse/A Staudinger (eds), Münchener Kommentar zum Straßenverkehrsrecht, vol 3: Internationales Straßenverkehrsrecht (2019) no 27.


Nevertheless, autonomous combine harvesters – like their conventional counterparts – are generally at least subject to the strict special provisions for motor vehicles when, as in Hypothetical 4, they are in fact entering a road – even if only to reach their place of operation and not for the transportation of persons or goods as such.405 However, there are still significant differences to expect: in Germany, for example, risk-based liability does not attach to vehicles with a maximum speed not exceeding 20 km/h (§ 8 para 1 StVG), which is why it will not apply to a slower harvester.406 In Austria, only vehicles faster than 10 km/h are covered by the strict liability regime in place for motor vehicles (§ 2 para 2 EKHG), which is a velocity that such machines will typically reach, though. Even if the harvester should fall under the definition of a vehicle under such statutory strict liabilities, it is still not clear whether these will apply in cases where the accident does not occur on a public road, but on agricultural land (or otherwise off-road) as in Hypothetical 5. Provisions of several European legal systems do not apply to accidents that occur on private land not intended or used for traffic. Thus, the relevant French loi Badinter speaks of an ‘accident de la circulation’ (art 1), the Italian art 2054 CC of a ‘circolazione del veicolo’, the Spanish road traffic liability statute of ‘daños causados … con motivo de la circulación’ (art 1 subs 1 para 1 LRCSCVM), and the Danish Road Traffic Act407 of a ‘road accident’ (§ 101 para 1 leg cit), whilst the Dutch Road Traffic Act requires a ‘motor vehicle that is driven on the road’ (§ 185 para 1 WVW). The Belgian insurance solution likewise requires an accident on land that is at least accessible to a certain number of people,408 and the Swedish insurance system equally requires the involvement of a ‘vehicle in traffic’.409 These conditions are not necessarily met on agri-

405 See eg in Spain explicitly art 2 no 2 lit b Real Decreto 1507/2008 de 12 de septiembre, for Austria see OGH 20.6.2002 2 Ob 142/01y ECLI:AT:OGH0002:2002:0020OB00142.01Y.0620.000 (snowcat on street open to the public), as well as the following references. 406 Cf a case involving a conventional harvester on a public road: Oberlandesgericht (OLG) Hamm 9 U 17/13, NJW-RR 2014, 281; NZV 2014, 213. 407 Act no 1320 of 28.11.2010. See further V Ulfbeck/A Bloch Ehlers, Road Traffic Accidents in Denmark, in E Karner (ed), Liability for Road Traffic Accidents (forthcoming) no 11. 408 Art 29bis read together with art 2 § 1 Loi relative à l’assurance obligatoire de la responsabilité en matière de véhicules automoteurs (‘ouverts à un certain nombre de personnes ayant le droit de les fréquenter’). See Cour de cassation 25.1.2008 C.07.0261.F (sport cars, law applied); 7.2.2011, C.10.0147.N Pas N 108 (concrete mixer in dock with limited access, law applied). 409 §§ 1, 10 Traffic Damage Act. A strict liability was accepted for a forestry vehicle that proceeded at walking speed on a forest track and thereby started a forest fire, which is to say the prerequisite of a ‘vehicle in traffic’ was found to be met, see Supreme Court (Högsta domstolen) 26.2.2019, Nytt Juridiskt Arkiv (NJA) 2019, 89, reported by S Friberg, Sweden, in E Karner/BC Steininger (eds), European Tort Law 2019 (2020) 647 (no 2 ff).  


cultural land (while operating in the field). Thus, under Italian410 and Spanish411 law, a public area foreseen for traffic is traditionally required, so that combine harvesters in Hypothetical 5 would not fall under the respective strict liability regime (in Spain, agricultural machines are even explicitly excluded by art 2 no 2 lit b of the liability insurance decree).412 But the notion of ‘traffic’ is vague, and the exclusion of agricultural areas is not a conceptual necessity, as is exemplified by cases decided on the basis of the French loi Badinter. On the one hand, courts in the past regularly applied motor vehicle liability even where a harvester or other moving utility vehicle caused harm while driving on agricultural land.413 On the other hand, the Cour de cassation implied in more recent decisions that localities which are not used for traffic at all (residential buildings, sporting grounds) do not fall under the liability regime,414 which casts some doubt on whether it would apply in Hypothetical 5. Insofar as strict motor vehicle liability also applies to accidents on agricultural land not intended for public traffic, it is generally irrelevant whether the vehicle was being used to transport people or goods, or whether it was merely performing its designated tasks. Thus, for example, unlike in Poland,415 tractors (Czech Re-

410 Corte di cassazione 20.10.2016, N 21254 ECLI:IT:CASS:2016:21254CIV (a car on a skiing slope may be in a publicly accessible area, but not in one foreseen for road traffic: ‘La circolazione presuppone … una strada o un’area – pubblica o destinata ad uso pubblico – ad essa equiparata …’). 411 Tribunal Supremo 29.6.2009, STS 4431/2009, ECLI:ES:TS:2009:4431 (tractor parked in a field falling on a person during bull festival not covered: ‘Ni el tractor estaba circulando, ni era propulsado por el motor y, además, la zona en la que se produjo el accidente no es zona de tráfico de vehículo …’); but see 17.12.2019 STS 3983/2019, ECLI:ES:TS:2019:3983 (car ignited in a private garage, follow-up to CJEU in Línea Directa, infra fn 420). 412 Real Decreto 1507/2008 de 12 de septiembre. 413 See in particular Cour de cassation 10.5.1991, N 90-11377 (accident while boarding a harvester on a field; but see also the contrary appellate court decision therein); Cour d’appel de Bordeaux 15.11.2007, N 06-02614 (leg caught in harvester on a field). 414 Cour de cassation 26.6.2003, N 00-22250 (moped parked at entrance of apartment building); Cour de cassation 4.1.2006, N 04-14841 (training accident in motor sports). On the problem of the locality, see further G Viney/P Jourdain/S Carval, Les régimes spéciaux et l’assurance de responsabilité (4th ed 2019) no 96 with further ref. 415 Strict liability for motor vehicles under art 436 KC is denied in cases of industrial machines in a use deemed to be a background function only: P Machnikowski/A Śmieja in A Olejniczak (ed), System Prawa Prywatnego. Prawo zobowiązań. Część ogólna (3rd ed 2018) no 611.


public)416 and snowcats or ride-on mowers (Germany)417 which were driven for work purposes were already subjected to the respective strict liability regimes for motor vehicles. These results are in line with the jurisprudence of the CJEU on compulsory liability insurance under the MID,418 which likewise neither requires the carriage of people or goods (as long as the vehicle is moving) nor its operation on public roads. Even the start or end of a ride is included (eg, if a passenger opens the door of an already parked vehicle and hits an adjacent parked car with it),419 as is the parking between rides.420 The Court requires a use ‘consistent with the normal function of a vehicle’, which also extends to a tractor used on private land,421 but not if it is stationary with its motor being solely used for powering a pump.422 The ‘normal function of a vehicle’ may be limited to the transportation of people and goods,423 but it could also extend to the ‘normal functions’ of any self-propelled working machine,424 including autonomous

416 Supreme Court of the Czech Republic (Nejvyšší soud České republiky) 18.2.2015, no 25 Cdo 272/2013, reported by J Hrádek, Czech Republic, in E Karner/BC Steininger (eds), European Tort Law 2015 (2016) 109 ff (tractor pulling out a harvester stuck in the field barely qualifies as a vehicle, as its use is mixed and it is slowly moving forward; as regards harvesters, see the reporter’s comment at 111). 417 OLG München 8.6.2011, 10 U 5433/08 (snowcat); OLG Hamm 3.7.2015, 11 U 169/14, NJW-RR 2015, 1370 (mowing trailer, liability denied on other grounds). 418 Supra fn 305. 419 CJEU 15.11.2018, C‑648/17, BTA Baltic Insurance Company v Baltijas Apdrošināšanas Nams, ECLI:EU:C:2018:917 (‘the act of opening the door of a vehicle amounts to use of the vehicle which is consistent with its function as a means of transport, inasmuch as, among other things, it allows persons to get in or out of the vehicle or to load and unload goods which are to be transported in the vehicle or which have been transported in it’). 420 CJEU 20.6.2019, C-100/18, Línea Directa Aseguradora, SA v Segurcaixa, Sociedad Anónima de Seguros y Reaseguros, ECLI:EU:C:2019:517 (vehicle caught fire after being parked in a garage for more than 24 hours). The court held ‘that parking a vehicle in a private garage constitutes a use of that vehicle which is consistent with its function as a means of transport. … Parking a vehicle presupposes that it remains stationary until its next trip, sometimes for a long period of time.’ (paras 43 f). 421 CJEU 4.9.2014, C-162/13, Vnuk v Triglav, ECLI:EU:C:2014:2146. 422 CJEU 28.11.2017, C-514/16, Rodrigues de Andrade v Proença Salvador, ECLI:EU:C:2017:908. 423 CJEU 20.12.2017, C-334/16, Núñez Torreiro v AIG Europe Ltd, Sucursal and Unespa, ECLI:EU: C: 2017: 1007 (off-road vehicle on military training area). 424 While the Rodrigues de Andrade ruling (fn 422) seems to exclude agricultural equipment whose primary functions are not the transportation of persons or objects, it only builds on the fact that the tractor in that case was not used for its own purposes, but merely for powering another machine. A moving harvester may still fall within the scope of the MID. As the CJEU emphasised in said decision at para 40, ‘in the case of vehicles which, like the tractor in question, are in 


harvesters, should autonomous vehicles be included in the MID’s scope at all.425 Summarising these various aspects of Hypothetical 5 (combine harvester in the field), the example of Austrian law demonstrates how complex the problem of strict motor vehicle liability in such a case can be: a ‘motor vehicle’ to which the risk-based liability of the EKHG is applicable is defined as a ‘vehicle intended for the road or used on roads’ (§ 2 para 2 EKHG, referring to § 2 para 1 no 1 Kraftfahrgesetz, KFG). An autonomous industrial vehicle is thus subject to risk-based liability, regardless of its designation for road use, as long as it is in fact in motion on the roads (cf (a)).426 However, it is also subject to motor vehicle liability whilst in use in fields (Hypothetical 5), as long as it is at least designated for road use.427 Insofar as an autonomous combine harvester is not built for roads, however, the EKHG would not apply to Hypothetical 5. An analogous application of the statute would still be conceivable in this case; however, this would require the presence of a risk comparable to the risks arising from road traffic. This has been considered to

tended, apart from their normal use as a means of transport, to be used in certain circumstances as machines for carrying out work, it is necessary to determine whether, at the time of the accident involving such a vehicle, that vehicle was being used principally as a means of transport, in which case that use can be covered by the concept of ‘use of vehicles’ …, or as a machine for carrying out work, in which case the use in question cannot be covered by that concept.’ (emphasis added). See also the proposed amendment to art 1 MID in COM(2018) 336, inserting a new no 1a that defines the ‘use of a vehicle’ as a use ‘intended normally to serve as a means of transport, that is consistent with the normal function of that vehicle, irrespective of the vehicle’s characteristics and irrespective of the terrain on which the motor vehicle is used and of whether it is stationary or in motion.’ (emphasis added). As the explanation for that reform states: ‘The European Court of Justice has clarified in its rulings that motor vehicles are intended normally to serve as means of transport, irrespective of such vehicles’ characteristics. Furthermore, the Court has clarified that the use of such vehicles covers any use of a vehicle consistent with its normal function as a means of transport, irrespective of the terrain on which the motor vehicle is used and of whether it is stationary or in motion.’ (10). 425 This is to be expected, as the wording of art 1 no 1 MID clearly applies to self-driving cars as well, defining ‘vehicle’ as ‘any motor vehicle intended for travel on land and propelled by mechanical power, but not running on rails, and any trailer, whether or not coupled’, without mentioning the necessity of a driver or other human person in control. 426 See OGH 20.6.2002 2 Ob 142/01y ECLI:AT:OGH0002:2002:0020OB00142.01Y.0620.000 (snowcat on street open to the public). 427 Koziol/Apathy/Koch (fn 184) no A/2/21 with further ref. Sometimes, even this was arguable, but rather doubtful. See OGH 20.6.1984 8 Ob 28/84 ECLI:AT:OGH0002:1984:0080OB0002 8.840.0620.000 (liability for wheel loader on factory premises); 27.5.1999 2 Ob 151/99s ECLI:AT: OGH0002:1999: 0020OB00151. 99S. 0527.000 (liability for forklift). Karner/Koch, Civil Liability for Artificial Intelligence


apply to the operation of snowcats on open ski slopes.428 By contrast, an analogous risk-based liability for a bulldozer not authorised for road use has been rejected so far,429 and it is therefore unlikely that the EKHG would be applied (at least by way of analogy) to a harvesting vehicle intended purely for use in fields. Somewhat more uniform results emerge when a possible application of the various domestic strict motor vehicle liability regimes is tested on Hypothetical 6. Regardless of whether the accident occurs (a) on the road or (b) in a field, in most jurisdictions no strict motor vehicle liability is triggered by an injury inflicted by the harvester’s blades while the vehicle is stationary. Often, strict motor vehicle liability regimes are applicable to accidents caused by stationary vehicles if there remains a – sometimes rather broadly conceived – connection to road traffic risks (eg the risk presented by blocking traffic430 and by loading and unloading,431 and even fires in the engine432). This corresponds to the current understanding of the MID as indicated above.433 By contrast, where properly parked, stationary industrial machines with their engines turned off cause harm in a way that is unconnected to their ability to move, the necessary correlation to the risks presented by road traffic is lacking, as suggested, for example, by Austrian, French, German, Czech, and Greek case law.434 For this reason, a claim based on strict motor vehicle liability would generally fail in both alternative scenarios (a) and (b) of Hypothetical 6.

428 This is explicitly left open by the OGH, which was only concerned with accidents outside skiing hours. See OGH 15.9.2004 9 ObA 49/04b ECLI:AT:OGH0002:2004:009OBA00049.04B.091 5.000; 27.1.2011 2 Ob 30/10s ECLI:AT:OGH0002:2011:0020OB00030.10S.0127.000; 19.12.2016 2 Ob 223/15f ECLI:AT:OGH0002:2016:0020OB00223.15F.1219.000. 429 OGH 24.10.1985 8 Ob 22, 23/85 ECLI:AT:OGH0002:1985:0080OB22.85.1024.000 (bulldozer on construction site). 430 Compare Corte di Cassazione, Sezione Unite 29.4.2015, N 8620, ECLI:IT:CASS:2015:8620CIV (truck crane on public street). 431 Eg OGH 28.6.2011 9 ObA 52/11d ECLI:AT:OGH0002:2011:009OBA00052.11D.0628.000 (unloading a truck); OLG Köln, 6.12.2018, 3 U 49/18 (loading a truck). 432 BGH 26.3.2019 VI ZR 236/18 ECLI:DE:BGH:2019:260319UVIZR236.18.0 (towed vehicle caught fire in garage); Tribunal Supremo 17.12.2019 STS 3983/2019, ECLI:ES:TS:2019:3983 (car ignited in a private garage, follow-up to CJEU in Línea Directa, supra fn 420); Cour de cassation 22.11.1995, N 94-10046 (car on fire). 433 Supra at fn 419 f. 434 OGH 28.5.2019 2 Ob 236/18x ECLI:AT:OGH0002:2019:0020OB00236.18X.0528.000 (tractor used for pulling wires); BGH 24.3.2015 VI ZR 265/14 NJW 2015, 1681 (harvest machine damaged by lost blade of a tedder); OLG Brandenburg 18.2.2010, 12 U 142/09 NZV 2011, 193 (harvester parked in a field catching fire); Cour de cassation 14.10.1992, N 91-11.611 (foot cut by stationary harvester; on the general inapplicability of the regime to stationary vehicles used for work); Areios Pagos 1168/3.5.2007; reported by E Dacoronia, Greece, in H Koziol/BC Steininger (eds), European Tort Law 2007 (2008) 321; and the Czech decision quoted in fn 416.  


(c) Strict liability for things As with autonomous lawnmowers discussed under 1, where strict motor vehicle liability is to be denied (as may here be the case for Hypothetical 5, and even more likely for Hypothetical 6), the application of a more general strict or no-fault liability regime remains to be considered, in particular the extensive liability for things acknowledged by the Romance legal family. Accordingly, in France the gardien liability of art 1242 (formerly 1384) Code civil has indeed already been applied to conventional combine harvesters,435 but presumably not in Hypothetical 6, since liability under art 1242 Code civil requires at least some movement (rôle actif) of the object. The Belgian liability for things under art 1384 Code civil – unlike its French counterpart with the same language – requires some defect of the object.436

(d) General risk-based liability There is a significantly higher chance here than with autonomous lawnmowers437 that general risk-based liability clauses may apply, as recognised, for example, in Croatia, the Czech Republic, Hungary, and Poland.438 This is because combine harvesters are machines with more power and a greater potential for harm – unlike with a liability for things, however, such higher risk is probably only present while these machines are in operation (Hypothetical 5). In this regard, Polish doctrine, for example, emphasises that the liability under art 435 KC of a ‘person conducting on his own account an enterprise or business set into operation by natural forces’ is mostly applicable to the use of harvesting machines.439 In Austria, too, an analogous risk-based liability would be conceivable, though it should be noted in that regard that such an analogy has already been rejected in cases involving a bulldozer and a building crane in motion.440

435 Cour de cassation 10.5.1991, N 90-11377; however, there is no liability under art 1242 if it catches fire: Cour de cassation 15.12.1976, N 75-12122. 436 It is applicable to vehicles where the special provision of the Law on mandatory motor vehicle liability insurance (fn 366) is not. See also Cour de cassation de Belgique 13.9.2012, C.10.0226.F (child releasing the brake of a car). 437 See above C.II.1(b). 438 See above B.V.2(b) with further examples. 439 Machnikowski/Śmieja (fn 415) no 606 (fn 749); E Bagińska, Poland, in H Koziol/BC Steininger (eds), European Tort Law 2008 (2009) 518 fn 28. 440 OGH 24.10.1985 8 Ob 22, 23/85, ECLI:AT:OGH0002:1985:0080OB22.85.1024.000 (bulldozer); 28.6.2016, 2 Ob 129/15g ECLI:AT:OGH0002:2016:0020OB00129.15G.0628.000 (crane).


Such general risk-based liability will only apply, though, if the peculiar risk of the harvester materialises, but not if it is turned off and merely stationary (Hypothetical 6).

3. Findings of this use case
In summary, it can be said that for accidents caused by autonomous combine harvesters while moving on public roads (Hypothetical 4), strict motor vehicle liability may apply (if available in the jurisdiction concerned at all). However, it must be recalled that these risk-based liabilities for motor vehicles are regulated very differently across the EU, with some regimes, for example, only protecting non-motorised actors, others only addressing personal injury but not property damage.441 In many European legal systems, by contrast, strict motor vehicle liability will probably not apply to accidents caused by autonomous combine harvesters while operating in a field (Hypothetical 5) or otherwise away from a public road. However, the equivocal and often controversial meaning of ‘traffic’ as a legal concept renders this variation most difficult to assess, particularly in light of current trends in motor liability insurance. Extensive general risk-based liability clauses and liability for things can potentially close these gaps in some European jurisdictions. A strict liability can hardly be considered, though, where stationary (not dangerously parked) industrial machines are involved (Hypothetical 6). Altogether, significant differences thus emerge between the presumed outcomes of such cases in the various European jurisdictions, bearing in mind that the assumptions made above could not be based on already decided cases, so how courts will handle such cases once they actually arise remains to be seen. Whether or not a strict liability regime may apply depends on the fact setting as well as on the tort law of each jurisdiction, in particular on its openness to expanding risk-based liability to novel technologies. Already in the past, as was shown, some jurisdictions were not even consistent in their case law domestically, let alone in a comparative overview. If claims have to be based on fault liability, though, victims will have a hard time identifying misconduct in accidents involving fully autonomous mowers or harvesters, and even if they do, they will still face different conditions when making their case in the various jurisdictions.

441 See supra B.V.2(e).


III. Autonomous drones
1. Preliminary remarks
While there is ample case law and (in most jurisdictions) special legislation governing liability for motor vehicles or at least foreseeing specific guidance for proper conduct in traffic, the status quo regarding drones is still quite different. While there are obviously different types of cars on the roads, the range of unmanned aircraft (UA) is even wider, from very small and light models such as those currently on the consumer market, weighing 250 g or even less, to much heavier equipment in commercial and military use with a maximum take-off mass (MTOM) of 800 kg442 and beyond.443 At least the former small varieties, due to their limited size and weight, are less likely to cause enormous harm.444 However, irrespective of their weight, the risks posed by drones in general are different from the dangers of road traffic: as they are intended to operate in mid-air and not on confined paths such as streets lined with buildings, they can more easily evade collisions in an emergency (and can do so both horizontally and vertically), whereas cars often have no alternative but to crash into a wall or some other barrier. At present, there is much less traffic at the altitudes at which UAs fly, which reduces the likelihood of confrontation with other airborne vehicles, and unlike cars on roads, they will hardly encounter any non-motorised traffic participants once they have reached a certain altitude. On the other hand, the very fact that the flight path is less restricted than in terrestrial traffic exposes bystanders to the risk of harm at locations where they may not expect it (such as in their private gardens), and they therefore take fewer precautions than they would on a public pavement, for example. In the following, the primary focus is on AI-driven UAs. Drones that are actually steered by a pilot will be mostly disregarded, as in this case there is still a hu-

442 Cf, eg, the VoloDrone () with a cargo load of up to 200 kg. 443 In the following, military drones and UAs used for other state functions with public authority will not be considered, as they are typically excluded from more general regimes, see eg art 2 para 3 lit a of Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4.7.2018 on common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency, OJ L 212, 22.8.2018, 1. 444 It is still possible, though, that even small drones cause catastrophic harm – just think of drones sucked into the jet engines of a passenger aircraft (although a single engine failure may not necessarily bring it down).


man who can influence the flight, similar to a driver in a terrestrial vehicle. For truly autonomous drones it is irrelevant whether they remain within a certain visual line of sight or not, as they will no longer be steered, but merely started after entering the destination, perhaps also with the flight path selected by the operator beforehand. Some recreational drones currently on the market, eg camera drones with functions such as follow-me mode, stand in between, as they perform certain parts of the flight independently (ie without direct steering by a pilot). However, in the latter cases, the operator or some auxiliary still has to monitor the flight and to intervene should the drone pose a risk to others.445

2. Three main damage scenarios
Apart from the wide range of drones themselves, which due to their specificities may have to be treated differently in tort law, one has to differentiate three main damage scenarios involving UAs:
– Damage to passengers, baggage or cargo transported by drones: Such cases will most likely be resolved under contractual liability. The Montreal Convention446 will apply to losses incurred during international flights (unless offered for free by a non-commercial operator),447 as will Regulation (EC)

445 This focus on AI-driven UAs does not preclude the possible necessity for a strict liability regime even though a pilot (or other person in control of the UA) may be held liable for misconduct. There may still be uncertainties of causation that may leave victims uncompensated, particularly if it remains unclear whether the cause of the accident was indeed such misconduct or instead a risk inherent in the gadget that materialised irrespective of (and despite) the pilot’s control. After all, strict liability for motor vehicles was introduced inter alia to bypass such questions in order to alleviate the challenges victims of traffic accidents may face when pursuing their claims, even though it is statistically proven that in the vast majority of car accidents human fault was the primary cause: In Germany, for example, 88.2 % of all 2019 traffic accidents with personal injuries were caused by misconduct of the driver (). 446 Convention for the Unification of Certain Rules for International Carriage by Air (1999, ). All EU Member States as well as the EU itself are parties. According to art 49 of the Convention, it provides for an exclusive liability regime, as any contractual deviations (including choices of another law) are null and void. 447 While the Montreal Convention does not define the term ‘aircraft’, it is commonly understood to include drones, particularly in light of Annex 7 to the Chicago Convention. See in particular the ICAO Legal Affairs and External Relations Bureau study on ‘Legal Issues Relating to Remotely Piloted Aircraft (Liability)’, reproduced in the Appendix to Working Paper LC/36-WP/2-4  


2027/97448 for drones operated under a Member State licence. This scenario will therefore be disregarded in the following.
– Damage on the ground: While the Rome Convention449 would apply to such cases,450 only four Member States have ratified it,451 whereas purely domestic rules apply in the remainder of the EU. In such cases, the victim is an innocent bystander, which means that the relevance of her contributory negligence in practice will be significantly lower.452 If the national rules provide for strict liability, causal uncertainty will typically not be a problem as long as liability is triggered by the operation of the drone as such (and does not require evidence of any detailed aspect thereof), at least if only one drone is involved. For those jurisdictions without strict liability for ground damage (at least in certain cases), identifying the proper addressee for compensation claims and proving causation and wrongdoing within the latter’s sphere can, however, be challenging.
– Collisions of drones themselves or with other aircraft in mid-air: While indirect (consequential) losses on the ground (eg from falling debris) may be covered by the regimes applicable to direct ground damage (scenario 2), losses inflicted upon the other aircraft, its passengers and cargo are not necessarily covered by any special regime, at least not in purely domestic scenarios.

The focus of this report is on damage to persons or property and not, for example, on invasions of privacy. One should further keep in mind that, in common law jurisdictions, the tort of aerial trespass453 may be committed by drones (if flying too

(), A-4 f (but see also ibid at A-6: ‘[G]iven current technological limitations and associated safety concerns, it is highly unlikely that passenger transport operations … will be performed using an RPAS in the foreseeable future. Thus, in the context of RPAS, the issue of liability related to passengers and their baggage has little practical relevance at the present time.’). 448 Regulation (EC) No 2027/97 of the Council of 9.10.1997 on air carrier liability in respect of the carriage of passengers and their baggage by air, OJ L 285, 17.10.1997, 1, as amended. 449 Convention on Damage Caused by Foreign Aircraft to Third Parties on the Surface, signed at Rome on 7.10.1952, . On its regime, see infra at C.III.3(a)(1). 450 See the ICAO Legal Affairs and External Relations Bureau study cited in fn 447 at A-7. 451 Belgium, Italy, Luxembourg, and Spain. 452 Though not zero, as someone who shoots a drone, which subsequently falls upon her, for example, may be injured as well. 453 Cf Section 5 of the 2019 Draft Uniform Tort Law Relating to Drones Act by the Uniform Law Commission (). Karner/Koch, Civil Liability for Artificial Intelligence


low), which triggers liability irrespective of consequential harm. This peculiarity will also be disregarded in the following.

3. Current legal landscape in Europe
(a) International and EU law
(1) Rome Convention 1952
As mentioned, four EU Member States have ratified the 1952 Rome Convention (RC),454 which provides for strict liability of the operator of an aircraft that directly causes ground damage while in flight,455 either itself or by persons or things falling out of the aircraft. However, even in those four countries, it only applies if an aircraft registered in another contracting state caused the damage (art 23 para 1 RC). Art 2 RC defines the ‘operator’ as the person making use of the aircraft at the time, unless the person granting such use retains control of the aircraft’s navigation.456 The registered owner of the aircraft is presumed to be the operator unless she proves that someone else was the operator at the time. Art 5 RC provides for a defence if the damage was ‘the direct consequence of armed conflict or civil disturbance’, but not in other cases of force majeure. Contributory conduct of the victim or of one of her auxiliaries457 reduces or excludes the liability of the operator accordingly (art 6 RC). If ground damage was caused as a consequence of a collision of aircraft, their operators are jointly and severally liable (art 7 RC), and the caps of art 11 ff RC apply per operator (art 13 para 2 RC). However, the Rome Convention does not apply to damage caused to a colliding aircraft (art 24 RC). Art 9 RC excludes any other claim for compensation against the persons liable under the Convention unless these acted intentionally, which cuts off fault-based claims that would not be subject to the limitations of the Convention. Art 11 RC limits the extent of liability according to the MTOM.458 The class applicable to drones would be art 11 para 1 lit a RC, which foresees a cap of 500,000 gold francs459 if the air

454 Supra fn 449 f. 455 An aircraft is in ‘flight’ as defined by art 1 para 2 Rome Convention from the moment it is powered for take-off until the end of its landing. 456 Irrespective of who was actually navigating the aircraft, the person granting its use is also jointly and severally liable with the operator if the latter’s right did not exceed fourteen days (art 3 Rome Convention). 457 Only if these auxiliaries acted within the scope of their authority. 458 It is defined in art 11 para 3 as ‘the maximum weight of the aircraft authorised by the certificate of airworthiness for take-off, excluding the effect of lifting gas when used’. 459 At the current gold price, this would be the equivalent of around 1.6 million EUR (with one franc being defined by art 11 para 4 as 0.0655 g of gold with a 90 % purity), so the historic reserva 




craft weighs up to 1,000 kg. These caps do not apply if the damage was caused intentionally by the operator or her auxiliaries. Claims exceeding the monetary limits are reduced proportionally if they concern either bodily harm or property damage. If both personal injuries and damage to property were sustained, half of the limit shall be used for indemnifying the claims relating to personal injury. Art 15 RC merely allows contracting states to provide for mandatory liability insurance without requiring them to do so. According to art 19 RC, claims must be brought or at least notified to the operator within six months from the date of the accident; otherwise, competing claims brought in time are treated preferentially, which means the late claimant will only be indemnified if the maximum amount was not already exhausted by the timely claims. Under art 21 RC, compensation claims are time-barred (prescribed) two years after the date of the accident.
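To put the art 11 RC cap into perspective, the following sketch converts the 500,000 gold-franc limit into euros on the basis of the parameters given in fn 459 (one franc corresponding to 0.0655 g of gold of 90 % fineness) and then applies the simplified allocation described above, under which half of the limit is reserved for personal injury claims where both types of loss were sustained. The assumed gold price per gram is purely illustrative (it fluctuates daily), and the allocation function deliberately abstracts from the finer mechanics of arts 11–14 RC; it is a sketch under stated assumptions, not a restatement of the Convention’s exact formula.

```python
# Illustrative conversion of the Rome Convention cap for the lightest MTOM
# class (art 11 para 1 lit a RC) and a simplified allocation of that cap.
# Assumption: a gold price of roughly 55 EUR per gram of fine gold.

GRAMS_PER_GOLD_FRANC = 0.0655          # art 11 para 4 RC
GOLD_FINENESS = 0.9                    # 90 % fine gold
ASSUMED_EUR_PER_GRAM_FINE_GOLD = 55.0  # illustrative spot price

def rc_cap_in_eur(gold_francs: float = 500_000) -> float:
    """Euro equivalent of the gold-franc cap at the assumed gold price."""
    return (gold_francs * GRAMS_PER_GOLD_FRANC * GOLD_FINENESS
            * ASSUMED_EUR_PER_GRAM_FINE_GOLD)

def allocate_cap(cap: float, injury_total: float, property_total: float):
    """Simplified split where aggregate claims exceed the cap: up to half of
    the cap goes to personal injury, the rest to property damage (individual
    claimants within each category would be reduced proportionally)."""
    if injury_total + property_total <= cap:
        return injury_total, property_total
    injury_paid = min(injury_total, cap / 2)
    property_paid = min(property_total, cap - injury_paid)
    return injury_paid, property_paid

cap = rc_cap_in_eur()
print(f"Cap for aircraft up to 1,000 kg: ~{cap:,.0f} EUR")  # roughly 1.6m EUR
print(allocate_cap(cap, injury_total=1_200_000, property_total=900_000))
```

On these assumptions the cap comes to roughly 1.6 million EUR, which matches the order of magnitude mentioned in fn 459.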

(2) Regulation (EC) No 785/2004 Regulation (EC) No 785/2004460 establishes ‘minimum insurance requirements for air carriers461 and aircraft operators462 in respect of passengers, baggage, cargo and third parties’ (art 1 para 1) while flying ‘within, into, out of, or over the territory of a Member State’ (art 2 para 1). However, art 2 para 2 excludes some aircraft from its scope, including ‘model aircraft with an MTOM of less than 20 kg’ (lit b), which means that drones up to that weight will not be covered. Furthermore, ‘aircraft … with a MTOM of less than 500 kg … which are used for non-com-

tions against the limits as being too low are no longer equally justified (considering that the gold price has increased by more than 45 times in absolute numbers and has been inflation-adjusted at least five times since the 1960s). The 1978 Montreal Protocol to the Rome Convention would have converted the limits from gold francs to SDR and re-categorised the weight classes, with the limit for the lowest class (now up to 2,000 kg) being 300,000 SDR (currently about 360,000 EUR, at the time around 400,000 USD, whereas the contemporary value of the Rome Convention’s gold franc limit in USD would have been around half of that amount). However, none of the European signatories ever ratified this Protocol. 460 Regulation (EC) No 785/2004 of the European Parliament and of the Council of 21.4.2004 on insurance requirements for air carriers and aircraft operators, OJ L 138, 30.4.2004, 1, as last amended by Regulation (EU) 2019/1243 of the European Parliament and of the Council of 20.6.2019 adapting a number of legal acts providing for the use of the regulatory procedure with scrutiny to Articles 290 and 291 of the Treaty on the Functioning of the European Union, OJ L 198, 25.7.2019, 241. 461 ‘Air carrier’ is defined by art 3 lit a of the Regulation as ‘an air transport undertaking with a valid operating license’. 462 Unless they are ‘air carriers’, someone ‘who has continual effective disposal of the use or operation of the aircraft’ is its ‘operator’ according to art 3 lit c of the Regulation, with a rebuttable presumption in favour of the person in whose name the aircraft is registered. Karner/Koch, Civil Liability for Artificial Intelligence


mercial purposes or … for local flight instruction’ need not be insured against the risks of war and terrorism. Therefore, at least those drones exceeding the 20 kg MTOM limit already need to be insured, inter alia against the risk of liability vis-à-vis third parties, thus including cover for surface damage.

(3) The new EU drones regime
The new EU regime governing drones – Regulations (EU) 2018/1139,463 2019/945464 and 2019/947465 – inter alia provides for a new categorisation of drones and corresponding operation obligations, but does not foresee any specific liability rules. While these regulations cover all drones irrespective of function or MTOM, Recital 26 of Regulation (EU) 2018/1139 states that drone operations ‘should be subject to rules that are proportionate to the risk of the particular operation or type of operations’.466 Implementing Regulation (EU) 2019/947 accordingly defines three main categories of UAs which are inter alia meant to correspond to the respective risks attached, with the so-called ‘open category’ presenting the lowest risk (Recital 8). UAs falling in that category have a MTOM up to 25 kg and have to be controlled by a remote pilot who keeps the UA ‘at a safe distance from people’ and below an altitude of 120 m.467
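As a purely illustrative aid, the two quantitative criteria of the open category mentioned in the text (a MTOM of up to 25 kg and operation below 120 m, with the UA kept at a safe distance from people) can be expressed as a minimal screening function. This is a sketch only: the actual classification under Regulation (EU) 2019/947 turns on further conditions (class marks, subcategories, visual-line-of-sight requirements, etc.) that are not modelled here, and the type and parameter names are invented for the example.

```python
# Minimal sketch of the 'open category' criteria named in the text only;
# the real test under Regulation (EU) 2019/947 involves further conditions.
from dataclasses import dataclass

@dataclass
class UAOperation:
    mtom_kg: float                 # maximum take-off mass of the UA
    max_altitude_m: float          # planned maximum height above ground
    safe_distance_from_people: bool

def may_qualify_as_open_category(op: UAOperation) -> bool:
    return (op.mtom_kg <= 25
            and op.max_altitude_m < 120
            and op.safe_distance_from_people)

print(may_qualify_as_open_category(UAOperation(0.9, 60, True)))    # True
print(may_qualify_as_open_category(UAOperation(30.0, 100, True)))  # False (too heavy)
```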

463 Fn 443. 464 Commission Delegated Regulation (EU) 2019/945 of 12.3.2019 on unmanned aircraft systems and on third-country operators of unmanned aircraft systems, OJ L 152, 11.6.2019, 1. 465 Commission Implementing Regulation (EU) 2019/947 of 24.5.2019 on the rules and procedures for the operation of unmanned aircraft, OJ L 152, 11.6.2019, p 45, as amended by Commission Implementing Regulation (EU) 2020/639 of 12.5.2020 and Commission Implementing Regulation (EU) 2020/746 of 4.6.2020. According to the latter, the original date of the regime’s entry into force was postponed from 1.7.2020 to 31.12.2020. 466 Cf also recital 5 of Regulation (EU) 2019/947: ‘The rules and procedures applicable to UAS operations should be proportionate to the nature and risk of the operation or activity and adapted to the operational characteristics of the unmanned aircraft concerned and the characteristics of the area of operations, such as the population density, surface characteristics, and the presence of buildings.’ 467 See art 4 of Regulation (EU) 2019/947 for more details. Typically, these UAs fall into one of the classes defined by Chapter II of Regulation (EU) 2019/945. Karner/Koch, Civil Liability for Artificial Intelligence


(b) Domestic law (1) Specific (strict) liability for drones The Portuguese Decreto-Lei no 58/2018 applies specifically (and only) to UAs,468 introducing inter alia strict liability of their operators in art 9, which is only excluded if the accident was caused by the victim’s ‘exclusive fault’ (culpa exclusiva do lesado). Art 10 leg cit requires operators of UAs with a MTOM exceeding 900 g to take out liability insurance for material harm, unless already insured under a sports policy. The minimum insurance cover shall be determined by a joint ordinance of the ministries competent for finance and civil aviation, thereby taking into account the peculiar risks associated with the respective UAs and their varying MTOM (art 10 para 3 leg cit). That minimum insurance cover in turn also serves to cap liability according to art 9 para 2 leg cit.

(2) Strict liability for aircraft also potentially applicable to some drones In most European jurisdictions that have already introduced strict liability for UAs, this was accomplished by extending an existing liability regime for aircraft, typically by broadening the definition of ‘aircraft’ to include at least some varieties of drones and other UAs. Thereby, the domestic regime applicable to harm caused by aircraft in general applies correspondingly, typically including liability insurance requirements and other provisions attached. Some of this legislation, in particular the categorisation of drones, may change in light of the pending implementation of the new EU drones regime.469 The Austrian Luftfahrtgesetz (Air Traffic Law, LFG),470 for example, currently excludes from its scope drones with a kinetic energy up to 79 joules flying at a maximum height of 30 metres471 (apart from a general requirement in § 24d LFG that its operation must not endanger persons or other objects). All other drones,

468 Liability for surface damage caused by other aircraft is governed by Decreto-Lei no 321/89 as amended. The Decreto-Lei no 58/2018 expressly states in its introduction that the definition of UA in this law shall not lead to an extension of the term ‘aircraft’ elsewhere. 469 Supra C.III.3(a)(3). Cf A Lopatka/C Schmelz, Does Austrian aviation law comply with the new EU rules for drones? ; see also the Danish draft amendments of the Air Navigation Act at . 470 Federal Gazette (Bundesgesetzblatt, BGBl) no 253/1957 as amended. On the following, see Koziol/Apathy/Koch (fn 184) no A/9/16 ff for more details. 471 This typically excludes, eg, drones with an approximate weight of up to 250 g (although the weight is not the only decisive criterion, as both altitude and speed have to be calculated in). Only some of the drones falling within class C0 of Regulation (EU) 2019/945 would therefore be ex 


however, fall within the scope of the statute’s provisions on liability and insurance (§§ 146 ff LFG), as expressly stated by § 24f para 4 and § 24g para 2 LFG.472 These provisions serve as gap fillers for cases not covered by international conventions or EU regulations. According to § 148 LFG, the keeper of the aircraft473 is strictly liable if an accident during its operation caused any harm to persons, baggage or cargo on board,474 including damage to other aircraft in case of mid-air collisions.475 If the aircraft is operated without the keeper’s consent, the latter nevertheless remains liable alongside the operator if it was the fault of the keeper or its staff that the operator could take control. If the operator was a staff member of the keeper, or if the latter permitted the use of her aircraft, the operator is rebuttably presumed to have been at fault when causing damage with the aircraft, making her liable under general fault liability alongside the keeper (§ 149 LFG). Multiple keepers or operators are jointly and severally liable (§ 150 LFG). Liability is particularly strict, as it is not excluded by an unforeseeable event or an act of God. However, contributory conduct of the victim or someone within her sphere will impact upon the keeper’s liability according to general rules of tort law (§ 161 LFG). The extent of liability per accident476 is capped according to the MTOM of the aircraft, mirroring the amounts of Regulation (EC) 785/2004, but adding another limit of 500,000 SDR477 for damage caused by aircraft weighing less than 20 kg and other airborne objects excluded by art 2 para 2 Regulation 785/2004 from its scope (§ 151 LFG). These amounts also determine the minimum mandatory insurance cover (§ 164 LFG). Two thirds of these amounts are reserved for the compensation of personal injury; property losses exceeding one third of these limits can therefore only be indemnified if the remainder is not fully spent  

cluded, whereas the remainder (class C0 flying above 30 m as well as the remaining four classes) would be covered by the current LFG and its liability regime. 472 It is disputed whether autonomous drones fall within the scope of the LFG at all, as they are not expressly mentioned, but they arguably do: I Eisenberger, Drohnen in den Life Sciences: Das Luftfahrtgesetz zwischen Gefahrenabwehr und Chancenverwirklichung, ÖZW 2016, 66 (70 f). 473 This is not necessarily, but typically the owner – key aspects identifying the keeper are control of the aircraft, financing its maintenance, and power to decide about its operations. 474 Injuries to persons entering or exiting the aircraft are also excluded. 475 While the wording of § 148 LFG does not expressly say so, the recourse provision of § 154 LFG inter alia explicitly refers to liability of keepers of aircraft vis-à-vis each other. 476 The number of victims and the number of keepers of a single aircraft therefore has no impact on these limits. If the overall loss caused by one accident exceeds these limits, individual claims are reduced proportionally. However, if more than one aircraft caused the accident, the limits apply per aircraft. 477 Special Drawing Rights (SDR) is a unit defined by the International Monetary Fund. One SDR currently corresponds to the equivalent of 1.2 EUR (so 500,000 SDR are about 600,000 EUR). Karner/Koch, Civil Liability for Artificial Intelligence
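The kinetic-energy threshold of the Austrian LFG mentioned above (79 joules, combined with the 30-metre altitude limit) depends on mass and speed, which is why fn 471 stresses that weight alone is not decisive. A back-of-the-envelope check with the standard formula E = ½·m·v² illustrates the point; the masses and speeds used below are illustrative assumptions, not figures taken from the study.

```python
# Rough check against the 79-joule kinetic-energy threshold of the Austrian
# LFG (E = 1/2 * m * v**2). Masses and speeds below are illustrative only.

def kinetic_energy_joules(mass_kg: float, speed_m_per_s: float) -> float:
    return 0.5 * mass_kg * speed_m_per_s ** 2

for mass_kg, speed in [(0.25, 20.0), (0.25, 30.0), (0.9, 15.0)]:
    e = kinetic_energy_joules(mass_kg, speed)
    verdict = "below" if e <= 79 else "above"
    print(f"{mass_kg * 1000:.0f} g at {speed:.0f} m/s -> {e:.1f} J ({verdict} 79 J)")
```

A 250 g drone at 20 m/s (72 km/h) thus stays below the threshold, while the same drone at 30 m/s, or a 900 g drone at 15 m/s, would exceed it – which is why weight alone does not determine whether the LFG regime applies.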


on compensating bodily harm (§ 150 para 3 LFG). An important provision is § 155 LFG, according to which, the victim forfeits her claim for compensation if she fails to notify the keeper of the accident within three months after acquiring knowledge of the damage and the identity of the keeper.478 She retains her claim, however, if she was prevented from notifying the keeper due to circumstances beyond her control, or if the keeper learned of the accident otherwise in the meantime. In Croatia, art 108 of the Law on Obligations and Real Property Relations in Air Transport479 foresees strict liability of the operator of an aircraft (or its lessee: art 109) for any ground damage caused by the aircraft, but limited to 100,000 SDR per person injured or killed (or – in the case of property damage – limited by the value of a new aircraft of the same type, art 114). Those limits do not apply, though, if the victim can prove that the operator acted intentionally or with gross negligence (art 115). The operator is excused to the extent the damage was caused by the victim herself, by the victim’s auxiliary, or by some third party, but also by an unforeseeable and unavoidable event outside the operator’s control (arts 110 f). In the case of a collision between two or more aircraft, all operators are liable jointly and severally for the ensuing third-party loss. § 127 of the Danish Air Navigation Act480 holds the owner of an aircraft strictly liable without caps if its use leads to personal injury or damage to property outside that aircraft, unless the victim herself caused the damage intentionally or by gross negligence. If the owner left the use of the aircraft ‘to an independent user who has assumed the full responsibility for the operation and maintenance of the aircraft’, the latter is liable instead of the former. The owner is required to take out liability insurance according to § 130 leg cit. These provisions also apply to drones, which are addressed by chapter 9a of said statute. § 126c para 1 leg cit expressly extends the insurance requirement to drones, but subject to exemptions as foreseen by the competent Ministry according to para 2.481 The French Code des transports in its arts L6131-2 ff provides for strict liability of the operator (‘l’exploitant’)482 of an aircraft (which is defined in art L6100-1 as  



478 A similar provision can be found in § 18 EKHG, which correspondingly excludes strict liability of the keeper of a motor vehicle or train. 479 Zakon o obveznim i stvarnopravnim odnosima u zračnom prometu, National Gazette (Narodne novine) nos 132/1998, 63/2008, 134/2009, 94/2013, consolidated version eg at . 480 Lov om luftfart no 1149 of 13.10.2017, as last amended by lov no 970 of 26.6.2020. 481 Small drones not exceeding 250 g are exempted from these rules (currently only if flying outside urban areas, which is about to be repealed). Liability for those remains fault-based, and they are not subject to the liability insurance requirement. 482 The notion of ‘operator’ is disputed in France, as some focus on whoever profits from the aircraft’s use, whereas others lay emphasis on the control of the aircraft and its operation. The owner  


‘tout appareil capable de s’élever ou de circuler dans les airs’, therefore including drones483) for any damage caused on the surface either by the aircraft itself or by persons or objects falling out of it. The only defence foreseen is contributory negligence of the victim (art L6131-2 para 2). There are no caps on liability. In the case of a collision between two or more aircraft, art L6131-1 points to the general rules of liability, so the regime just mentioned does not apply. However, apart from the rules on fault liability, the general responsabilité du fait des choses of art 1242 al 1 Code civil also applies in the latter cases, which foresees – again – a strict liability of whoever has the aircraft in her control at the time of the accident.484 The German Air Traffic Act (Luftverkehrsgesetz, LuftVG) holds the keeper485 of an aircraft486 strictly liable for any bodily harm or damage to property caused by an accident in the course of the operation of an aircraft (§ 33 para 1 LuftVG). Contributory negligence of the victim is relevant according to the standard rule of § 254 of the German Civil Code (BGB). Like in Austria, the operator cannot raise the defence of force majeure. §§ 35 ff LuftVG contain more specific rules on the compensability of harm, including in particular caps on liability in § 37 LuftVG: liability for aircraft below 500 kg MTOM is limited to 750,000 SDR (and double that amount for other aircraft, para 1). In addition, § 37 para 2 LuftVG provides for specific caps in cases of bodily injuries (600,000 EUR or an annuity of 36,000 EUR per year per victim,487 to be reduced accordingly if multiple victims are injured and the total amount of compensation would exceed the limits of para 1). If an accident harms both persons and property, the first two thirds of the caps of para 1 are to be spent on compensating the injured persons, and the remainder is to be divided proportionally for the remaining losses not yet covered, ie compen

is jointly and severally liable with the lessee of an aircraft unless the latter is entered in the aircraft register, in which case the owner is only liable for fault established by the victim (art L6131-4). 483 This is also made clear by art L6111-1 Code des transports, introduced by art 1 of the Loi n° 2016-1428 du 24 octobre 2016 relative au renforcement de la sécurité de l’usage des drones civils. 484 Conseil pour les drones civils, Fiche «Donneur d’Ordre/Exploitant/Télépilote» () 9. 485 Even if the aircraft is being used by some third party without the consent or knowledge of the keeper, the latter remains liable if she negligently enabled that third party to take control of the aircraft (§ 33 para 2 LuftVG). 486 In general, drones are aircraft within the meaning of § 1 para 2 LuftVG and therefore subject to the liability regime of § 33 LuftVG. However, it is not entirely clear whether the smallest and lightest types of drones similar to toys would also be subjected to strict liability: C Schäfer, Kollisionen von Drohnen mit Fahrzeugen oder Personen, Deutsches Autorecht (DAR) 2018, 67 (69); P Schimikowski/H-J Wilke, Drohnen und Privathaftpflichtversicherung, Recht und Schaden (r+s) 2019, 490 (492). 487 Oddly, these specific limits of § 37 para 2 LuftVG are expressed in EUR and not in SDR.


sation for personal injuries exceeding the two thirds as well as property damage (§ 37 para 4 LuftVG). Like in Austria, the victim has to notify the accident to the keeper within three months after acquiring knowledge of the damage and the identity of the liable person (§ 40 LuftVG). If more than one cause contributed to the damage, all those to whom such causes can be attributed are jointly and severally liable vis-à-vis the victim, but only proportionally to their respective share in causing harm internally (§ 41 LuftVG). Other bases of liability remain unaffected by the LuftVG’s regime (§ 42 LuftVG). § 43 LuftVG contains insurance requirements.488 In Italy, the Rome Convention is also made applicable to damage caused by domestic drones to third parties on the surface by art 965489 of the Codice della navigazione490 (whose art 743 extends its regime to drones, though subject to exceptions). Art 972 of the Codice della navigazione further extends the Rome Convention’s liability regime to cases of aircraft collisions with other aircraft or other accidents (‘danni da urto, spostamento d’aria o altra causa analoga’). Art 971 leg cit brings the Rome Convention’s caps in line with Regulation (EC) No 785/2004, which also sets the minimum liability insurance cover amounts for drones according to art 32 of the ENAC Remotely Piloted Aerial Vehicles Regulation.491 While surface damage in Spain is covered by the Rome Convention if the aircraft causing the harm is registered in another contracting state, the remaining instances of damage on the ground are addressed by art 119 ff of the Spanish Air Navigation Act, which are modelled on the Rome Convention, though.492 According to its art 119, the operator is strictly liable, but subject to caps relative to the aircraft’s MTOM. If it is 500 kgs or less, the limit is 220,000 SDR, with personal injury to be prioritised.493 Caps do not apply if the damage was caused by gross negligence or intent. Liability is strict and cannot be excused by force majeure (art 120).  
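The two-step allocation of § 37 para 4 LuftVG described above – two thirds of the applicable cap earmarked for personal injury first, the remainder then shared pro rata among the losses still uncompensated – can be illustrated with a short sketch. The claim figures are invented for the example, the 750,000 SDR cap is converted at the approximate rate of 1.2 EUR per SDR cited in fn 477, and the separate per-victim limits of § 37 para 2 LuftVG are deliberately left out of this simplified model.

```python
# Sketch of the cap allocation of § 37 para 4 LuftVG as described in the
# text: two thirds of the cap first go to personal-injury claims, the rest
# is shared pro rata between unpaid injury claims and property damage.
# Claim amounts are invented; SDR converted at ~1.2 EUR (cf fn 477).

CAP_SDR = 750_000        # aircraft below 500 kg MTOM, § 37 para 1 LuftVG
EUR_PER_SDR = 1.2        # approximate rate cited in the study

def allocate_luftvg(cap: float, injury: float, property_damage: float):
    if injury + property_damage <= cap:
        return injury, property_damage
    injury_first = min(injury, 2 * cap / 3)           # priority tranche
    remainder = cap - injury_first
    leftover = (injury - injury_first) + property_damage
    ratio = remainder / leftover if leftover else 0.0
    return injury_first + (injury - injury_first) * ratio, property_damage * ratio

cap_eur = CAP_SDR * EUR_PER_SDR                        # 900,000 EUR
print(allocate_luftvg(cap_eur, injury=700_000, property_damage=500_000))
# -> (650000.0, 250000.0): the 900,000 EUR cap is exhausted
```

On these assumed figures, the injured persons would receive 650,000 EUR and the property claimants 250,000 EUR in total.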

488 Again, it is disputed whether toy drones up to 250 g fall under said requirement or not: Schimikowski/Wilke (fn 486) 493. 489 ‘La responsabilità dell’esercente per i danni causati dall’aeromobile a persone ed a cose sulla superficie è regolata dalle norme internazionali in vigore nella Repubblica, che si applicano anche ai danni provocati sul territorio nazionale da aeromobili immatricolati in Italia. …’ 490 RD 30.3.1942, no 327 as amended. 491 Regolamento ‘Mezzi aerei a Pilotaggio Remoto’ (English translation at ). 492 Ley 48/1960, de 21 de julio, sobre Navegación Aérea, as amended. 493 Liability per injured person is further defined as follows: for death or total permanent disability, the victim shall receive 120,000 SDR lump sum compensation. In cases of partial disability, special limits apply (69,600 SDR for permanent and 34,800 SDR for temporary partial disability). Karner/Koch, Civil Liability for Artificial Intelligence


The Swedish Liability for Damage Caused in the Course of Aviation Act494 provides for strict liability of the aircraft owner495 for ground damage, but expressly not for the damage to other aircraft in mid-air (§ 2 para 2 leg cit). In the United Kingdom, according to Section 76 para 2 of the Civil Aviation Act 1982, the owner496 of an aircraft is strictly liable for damage it causes to persons or property on the ground, unless the negligence of the victim at least contributed to the harm.

(3) Strict liability for means of transport also potentially applicable to some drones Some jurisdictions do not have specific liability provisions for drones in particular or aircraft in general, but more broadly for means of transportation. Drones may fall within the scope of such special regimes, but it is not entirely predictable whether at all and if so, which types of drones courts would consider to be such ‘means of transportation’ (particularly when it comes to toy drones, for example). The Civil Code of the Czech Republic,497 for example, in its sections 2927-2932 foresees strict liability for means of transport, which includes drones.498 Art 2927 para 1 inter alia holds the ‘operator of a vehicle, boat or plane’ strictly liable for ‘damage caused by the specific character’ of its operation.499 Para 2 emphasises that the operator ‘cannot be exempted from the duty to compensate damage if the damage was caused by circumstances that have their origin in the operation’; in all other cases she can excuse herself by proving that she could not have prevented harm despite exercising utmost care. If someone else took over control of the means of transport without the operator’s knowledge or consent, the latter remains jointly and severally liable with the former if she negligently enabled the

494 Lag (1922:382) angående ansvarighet för skada i följd av luftfart. 495 If someone else exercises her right to use the aircraft, she is jointly and severally liable with the owner (art 4; see also art 3a for credit purchase cases). 496 According to Section 76 para 4 of the Civil Aviation Act 1982, if use and control of the aircraft was taken over by some third party outside the employ of the owner for a period exceeding fourteen days, the lessee shall be liable instead of the owner. 497 Zákon občanský zákoník, 89/2012 Sb. Translation in the following by J Hrádek, JETL 7 (2016) 308 ff. 498 The term ‘means of transport’ (dopravní prostředek) in art 2927 is more broadly interpreted and does not require actual transportation, as long as it is not driven by human power. 499 The full paragraph translates as follows: ‘A person who operates a transport business shall compensate damage caused by the specific character of that operation. The same duty also pertains to any other operator of a vehicle, boat or plane, unless the means of transport is operated by human power.’  


former to take control (art 2929). If the operator (defined by the combination of control of and benefit from the vehicle) cannot be determined, its owner will be deemed the operator (art 2930). If two or more means of transport collide, they are jointly and severally liable vis-à-vis the victim, but only according to their respective causal share vis-à-vis each other upon recourse (art 2932).
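The interplay of joint and several liability towards the victim with purely internal apportionment by causal shares (art 2932) can be made concrete with a small sketch: the victim may recover the entire loss from either operator, and whoever paid more than her internal share then takes recourse against the other. The loss amount and the share percentages are illustrative assumptions, not figures from the study.

```python
# Illustration of joint and several liability with internal recourse by
# causal share (art 2932 of the Czech Civil Code as described above).
# The loss figure and the causal shares are invented for the example.

def recourse_balances(total_loss: float, paid: dict, shares: dict) -> dict:
    """Positive balance = operator may recover that amount from the others;
    negative balance = operator owes that amount internally."""
    return {op: paid.get(op, 0.0) - total_loss * share
            for op, share in shares.items()}

# The victim recovers the full 100,000 from operator A (joint and several
# liability); internally, A is deemed to have caused 30 % and B 70 %.
print(recourse_balances(100_000,
                        paid={"A": 100_000, "B": 0.0},
                        shares={"A": 0.3, "B": 0.7}))
# -> {'A': 70000.0, 'B': -70000.0}: A can claim 70,000 from B in recourse.
```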

(4) General strict liability also potentially applicable to some drones Other jurisdictions provide for a general clause of strict liability not limited to vehicles or means of transport. Drones may fall within the scope of such general clauses. However, their relevance in practice depends upon their respective scope. Some are triggered by a ‘dangerous activity’ or only by some ‘extremely dangerous activity’, others by a ‘dangerous object’ or ‘thing’, again others by a vice of a thing within someone’s control. Typically, such general clauses are not limited by any maximum amounts of compensation, though more general reduction clauses may still serve as buffers in extreme cases.500 Whether or not drones as such or their operation fall within the definitions of such dangerous objects or activities is for the national courts to decide, which is even less predictable than in the previous case of general clauses for means of transportation. It is particularly questionable whether smaller and lighter drones would be deemed sufficiently ‘dangerous’ within the meaning of said provisions. According to § 1056 of the Estonian Law of Obligations (LOA),501 for example, if damage results from a ‘danger characteristic to a thing constituting a major source of danger or from an extremely dangerous activity’,502 the person in control of the object or whoever conducts the activity is strictly liable for any ensuing losses that are characteristic for the risks.503 Other grounds of liability are not

500 See, eg, § 140 of the Estonian LOA or art 6:109 of the Dutch BW. 501 Võlaõigusseadus, RT I 2001, 81, 487 as amended; English translation at . 502 According to § 1056 para 2, it is ‘a major source of danger if, due to its nature or to the substances or means used in connection with the thing or activity, major or frequent damage may arise therefrom even if it is handled or performed with due diligence by a specialist.’ 503 § 1056 para 1 reads: ‘If damage is caused resulting from danger characteristic to a thing constituting a major source of danger or from an extremely dangerous activity, the person who manages the source of danger shall be liable for causing of damage regardless of the person’s culpability. A person who manages a major source of danger shall be liable for causing the death of, bodily injury to or damage to the health of a victim, and for damaging a thing of the victim, unless otherwise provided by law.’ See, eg, T Liivak/J Lahe, Strict Liability for Damage Caused by SelfDriving Vehicles: The Estonian Perspective, Baltic Journal of Law & Politics 12 (2019) 2, 1 (6): ‘The


thereby excluded (para 3). It is argued that the operation of drones may fall within the scope of § 1056 subject to its requirements. In the Netherlands, art 6:173 BW holds the ‘possessor of a movable thing which is known to constitute a special danger for people or things when it does not meet the standards which may be set for such a thing’ strictly liable if this risk materialises. While drones may fall within the notion of such a dangerous thing, para 3 of art 6:173 BW inter alia expressly excludes aircraft.504 This means that all UAs other than toy drones fall outside the scope of said provision.505 In the absence of further strict liability rules (which were envisaged at the time of enactment but never actually introduced), accidents with larger drones can therefore only be addressed by the general fault liability regime of the Netherlands.506 Art 2347 part 2 of the Latvian Civillikums507 holds persons ‘whose activity is associated with increased risk for other persons’ strictly liable for harm caused by the source of increased risk unless they prove that the damage was in fact caused by force majeure.508 Also, contributory negligence of the victim may serve as a defence, but only if the latter acted with at least gross negligence or intent. Drone owners are required to take out liability insurance according to a specific regulation.509

courts have a wide margin of discretion as to what objects or activities may be considered sources of a greater danger on the basis of the provision.’ 504 Cf the broad definition of art 8:3a para 1 BW: ‘In the present Book (Book 8) “aircraft” shall mean: devices that may be kept in the atmosphere by virtue of forces that are exerted thereon by air, …’. 505 Cf AJ Mauritz, Liability of the operators and owners of aircraft for damage inflicted to persons and property on the surface (2003, ) 118. But see E Schijvenaars, Schade door drones (Master Thesis Tilburg 2019, ) 25, who is in favour of including drones into the scope of art 6:173 BW despite its clear language (with further references). 506 Mauritz (fn 505) 117 f; D Stolker/C Levine, Compensation for Damage to Third Parties on the Ground as a Result of Aviation Accidents, 2 Air & Space Law (1997) 60 (61): ‘The Netherlands now have the somewhat strange situation that someone who causes damage to an apartment window with a radio-controlled toy airplane will be held to a strict liability standard, whereas when a fully-loaded Boeing 747 crashes into an apartment building, the operator is liable only if the victim can prove negligence.’ 507 . 508 If the owner/holder/user loses possession of the source of danger without her fault, but due to the unlawful act of another, the latter shall be liable instead. 509 Art 11 of the Latvian Regulation on Procedures for Unmanned Aircraft and Other Aircraft Flights (Kārtība, kādā veicami bezpilota gaisa kuģu un tādu cita veida lidaparātu lidojumi, kuri nav kvalificējami kā gaisa kuģi, Regulations of the Cabinet of Ministers no 368 of 13.8.2019), starting at 150,000 EUR for UA with a MTOM between 250 g and 1.5 kg (in that category, however, only if high-risk flights are performed).  


In the absence of specific rules on liability for drones in Romania,510 general tort law provisions apply, which include a rule on strict liability of the keeper511 of a thing in art 1376 Codul civil. Para 2 of said provision includes a special solution for collisions of vehicles.512 Art 1380 Codul civil excludes liability if the damage was caused exclusively by the act of the victim herself or of some third party, but also in cases of unforeseen circumstances.
Slovenia’s Obligations Code (Obligacijski zakonik, OZ) also provides for strict liability in its art 149 ff, which even (rebuttably) presumes causation if damage occurs ‘in connection with a dangerous object or dangerous activities’ (art 149 OZ). In such case, the holder of the object513 or the person conducting the dangerous activity is strictly liable (art 150 OZ). Liability is excluded or reduced if the damage originated from some unforeseeable and unavoidable external cause, or if it was caused by the victim or some third party whose interference was equally unforeseeable and unavoidable (art 153 OZ). Also in Slovenia, an insurance obligation has been introduced specifically for UA systems.514

(5) No specific regime applicable at all
In Ireland515 or Malta, for example, no specific liability regime is in place, which is why damage caused by UAs is governed by the ordinary rules of tort law and therefore subject to fault-based liability. However, regulatory provisions governing the operation of aircraft may impact the assessment insofar as these may constitute statutory duties.

510 There are other provisions which specifically address drones, such as art 3 para 1 no 11 of the Aviation Code (Codul Aerian, law no 21/2020, OJ no 222 of 19.3.2020), defining UAs as ‘airplanes without a flight crew aboard which can fly controlled by a software or remotely’ and extending all rules of the Aviation Code thereto in the absence of an express exclusion. 511 The keeper (or ‘custodian’) is defined in art 1377 Codul civil as the owner or ‘the person who, by virtue of law or a contract, or merely in fact, exercises control and supervision over … object and uses it in his or her own interest’. 512 According to its (translated) wording, strict liability also applies ‘in the case of collisions between vehicles in other similar cases. However, in such cases, the custodian will be subject to the duty to repair damage only if his or her faulty act has, for the other participants, the significance of force majeure.’ 513 If someone else had taken the dangerous object from its holder without the latter’s fault, the former is liable instead (art 151 OZ). 514 Art 7 of the Slovenian Regulation on unmanned aircraft systems (Uredba o sistemih brezpilotnih zrakoplovov, Uradni list RS, št. 52/16, 18/16, ) requires liability insurance cover corresponding to the requirements for traffic liability. 515 However, art 7 of the Irish Aviation Authority Small Unmanned Aircraft (Drones) and Rockets Order 2015 (SI no 563 of 2015) contains rules of conduct which may contribute to assessing the negligence of the operator.


Also in the Netherlands, most drones heavier than toy drones will be governed by fault-based liability due to the express exclusion of aircraft from the scope of art 6:173 of the Dutch Civil Code.516

4. Findings of this use case
As the overview shows, ground damage caused by drones is often (but not in all Member States) governed by strict liability. However, the details thereof vary substantially, starting with the range of drones covered. Apart from jurisdictions without any strict liability regime, the Netherlands stand out the most in light of the peculiar situation that the operator of a toy drone would be strictly liable, but not if the drone were heavier and therefore considered an aircraft. The reverse is true for the other jurisdictions providing for strict liability, but their exclusion of smaller drones does not use identical standards. Furthermore, the availability of defences (and therefore the ‘strictness’ of the liability regime) is quite diverse: some jurisdictions allow a force majeure defence (such as Croatia, Latvia, or Slovenia), others do not (such as Austria, Germany, or France). In some countries, any contributory negligence by the victim may affect the latter’s claim (eg in Austria, Croatia, France, or Italy), in others it does so only if the victim acted with at least gross negligence (eg in Denmark). Not all jurisdictions foresee caps on liability (such caps exist, eg, in Austria, Croatia, Germany, Italy, Portugal, or Spain, but not in Denmark or France). Those that do sometimes give preference to bodily harm when applying the maximum amounts (eg Austria, Germany, or Spain), others do not. The limits themselves also vary considerably.517

5. Case hypotheticals
Hypothetical 7. An autonomous drone owned by A and used by B crashes into a building owned by X and falls onto passer-by Y, injuring the latter. The cause of the crash remains unclear, but is presumed to be a misinterpretation of geodata by the drone’s algorithm. Who is liable for these losses if it cannot be proven that A or B were at fault?

516 Supra at fn 505. 517 While many countries apply a 750,000 SDR limit, the maximum is 500,000 SDR for drones up to 20 kg in Austria. While the limits typically apply per accident (so that losses of multiple victims have to be reduced accordingly if the total exceeds the limit), Croatia only caps the amount due to each individual victim for personal injuries at 100,000 SDR.


As explained above,518 establishing misconduct can lead to serious evidentiary problems due to the particularities of AI-based systems operating with a high degree of autonomy, especially their complexity, opacity, openness and limited predictability. If proving that A or B were at fault therefore failed, jurisdictions such as Ireland or Belgium would hold neither of them liable. If it were just a small drone, whether the respective strict liability regime in other jurisdictions would kick in would depend on various factors (mostly the drone’s weight or MTOM), whereas in the Netherlands, only the keeper of a toy drone would be strictly liable, while the operators of any more substantial drone would not have to compensate X or Y. In most jurisdictions other than the Netherlands with strict liability, presumably B would be held liable rather than A, as the former would be deemed the operator of the drone, but this typically (though not universally) depends upon the degree and/or duration of control passed on to B by A.519
Hypothetical 8. Same as Hypothetical 7, but here the drone crashed because it was struck by lightning in an unforeseeable and sudden thunderstorm.

While the keeper/operator of the drone would of course not be liable in those countries where there is no strict liability at all or where it does not apply, the case shows further differences between those jurisdictions where a strict liability regime is in place: in Austria, Germany, or France, the keeper/operator would still be liable, as these jurisdictions do not recognise a defence of force majeure, whereas a Croatian, Latvian or Slovenian operator would go free.
Hypothetical 9. An autonomous drone owned and operated by A falls out of the sky, crashes into the windshield of a bus, which in turn collides with oncoming traffic. Six people are killed, twelve injured, and the bus is a total write-off. Who indemnifies these losses to what extent if it cannot be proven that someone involved was at fault?

This Hypothetical points at the different maximum amounts of liability that apply in those jurisdictions where this case would fall under a strict liability regime. While (also in those countries) a fault-based regime would not be subject to such limitations, this is of no help for our victims here since no-one can be blamed for this accident. The key question is first whether liability is unlimited, which is true, eg, in Denmark or France. The next question is how high the total of all losses combined is.

518 See above B.III.3(b). 519 Cf, eg, the Czech solution where the owner remains liable if no other person can be identified as the operator (§ 2930 of the Czech Civil Code).


If it exceeds 500,000 SDR but remains below 750,000 SDR, the victims would face a possible reduction of their claims in Austria, but not in Germany or Italy. Otherwise, it depends on how high the combined amount of compensation for bodily harm and fatalities is. If it exceeds two-thirds of the overall limit, that amount will be distributed first to those victims in Austria or Germany, and the remaining third would then be divided among those not yet compensated in full (both bodily harm and property losses). In Spain, for each person killed, liability could reach a maximum of 120,000 SDR (with other limits in place for surviving victims). The Croatian solution is entirely different, with up to 100,000 SDR foreseen for each victim injured or killed, whereas the bus owner would only receive compensation up to the value of a comparable new drone.
Hypothetical 10. Same as Hypothetical 9, but bus driver X who was killed in the accident could have avoided the accident, though failed to do so because of slight negligence. Does this affect the claims of X’s next of kin?

This variation of Hypothetical 9 addresses the differing approaches regarding the relevance of contributory negligence of the victim. While the claims of X’s surviving next of kin would be unaffected by her conduct in Denmark or Latvia (where contributory negligence is only relevant if at least gross negligence is attributable to the victim), the compensation due for her death may be reduced, eg, in Austria, Croatia, Germany, France, or Italy.
Hypothetical 11. Two drones owned and operated by A and B respectively collide in mid-air. Both drones damage each other, but cause no harm on the ground.

Not all strict liability regimes apply to losses other than those incurred by third-party victims on the ground. Collisions are excluded, eg, in France or in the UK, but also under the Rome Convention’s regime. This means that in those jurisdictions, A and B would have to sue each other on the basis of fault liability (which will typically not be helpful if the drones’ operations were controlled by AI). However, in France, A and B could also avail themselves of art 1242 para 1 Code civil, as each of them was in control of her respective drone at the time of the collision despite its autonomous flight.


D. Conclusions
As stated at the beginning, this study was not intended to present a comprehensive overview of all tort laws in all EU jurisdictions as relevant in all imaginable cases of liability for AI. Instead, we were asked to illustrate the range of approaches offered within the EU with a particular eye to the challenges posed by the peculiar features of AI systems, thereby leaving aside product liability due to the ongoing reform at EU level. In light of the scope of the assignment, we had to limit ourselves to selected key issues, using examples from sample jurisdictions of different legal families. Here are some of the main findings:520
– It is doubtful whether the liability regimes currently in place in all EU Member States provide for an adequate distribution of all such risks, and it is to be feared that at least some victims will not be indemnified at all or at least remain undercompensated if harmed by the operation of AI technology even though the principles underlying the tort law regimes in those jurisdictions would speak in favour of remedying their harm. More importantly, the outcome of such cases in the Member States will often not be the same due to peculiar features of these legal systems that may play a decisive role especially in cases involving AI.
– While claims for compensation invariably require that the victim incurred some harm, the range of compensable losses and the recognised heads of damage will not be different in AI cases than in any other tort scenario. Without going into detail in this study, however, it is important to bear in mind that there are important differences throughout Europe when it comes to recognising which damage triggers tort claims in the first place (specifically evident in the case of pure economic loss). Furthermore, jurisdictions differ with regard to which consequences of an initial harm will be indemnified at all. The range and extent of remedies available are equally divergent, in particular (but clearly not limited to) the extent of compensation for immaterial harm.
– It is equally universally acknowledged throughout Europe that a duty to compensate the loss of another requires that its cause at least possibly lay within the sphere of the addressee of the claim. However, identifying such a cause and convincing the court of its impact on the turn of events is particularly challenging if it is an AI system that is suspected to have at least contributed to damaging the victim. This is due to the very nature of AI systems and their particular features, such as complexity, opacity, limited predictability, and

520 These Key Findings are repeated in the Executive Summary above at 9.


openness. Just think of a collision of autonomous vehicles, which may have been triggered by some flaws of the hardware as in an accident involving traditional cars, but it may also have been caused instead by the internal software of either vehicle, which in turn may or may not have been altered by some over-the-air update that did not necessarily originate from the car manufacturer. The vehicle’s ‘decision’ ultimately leading to the collision may have been the result of flawed data (collected either by the vehicle itself or by some external provider) or of some errors in processing that information. Even the question of what constitutes an ‘error’ of the AI will be difficult to answer.
– The success of proving something in court depends upon the applicable standard of proof in place in the respective jurisdiction, ie the degree of conviction that the judges must have in order to be satisfied that the burden of proof has been met. There are significant differences throughout European jurisdictions with respect to this procedural threshold. Some are already satisfied with a mere preponderance of the evidence, with the result that the person charged with proving something already succeeds if it is more likely than not that her allegations are true. In other jurisdictions, the degree to which the fact finder must be persuaded is much higher, making it correspondingly much more difficult to prove something. This difference directly impacts upon the outcome of a case: if, for example, the claimant has to prove that an AI system caused her loss, and evidence only supports a 51 % likelihood that this was indeed the case, she will win the case in full (subject to the other requirements of liability) in the first group of jurisdictions and lose it completely in the remainder, collecting full compensation in the former countries and getting nothing at all in the others. In the latter jurisdictions, courts may at some stage consider lowering the threshold for victims of AI. Any such alleviations of the position of only a select group of claimants necessarily trigger the problem of equal treatment of tort law victims as a whole. This may be justified due to the particular features of AI as a potential source of harm, but whether courts throughout the EU will acknowledge that is yet unpredictable.
– Those charged with the burden of proof can sometimes benefit from certain alleviations along the way, for example if the courts are satisfied with prima facie evidence. It is hard to foresee whether and to what extent this will also be available in cases of harm caused by AI systems. At least initially, it would be difficult to apply, considering that this requires as a starting point a firmly established body of experience about a typical sequence of events which at first will be lacking for novel technologies.
– Another approach helping claimants is to shift the burden of proof to the opposing party. Apart from express legislative presumptions (which at present






seem to be lacking in AI cases so far), some courts charge the defendant rather than the claimant with proving certain facts if it is the former who may be in control of crucial evidence needed to establish (or disprove) such facts. Failure to submit such evidence may then reverse the burden of proving whatever it could have corroborated. In the AI context, this may apply, for example, to log files or the like produced or at least stored within the sphere of the defendant. To what extent this would be truly beneficial for victims of AI systems not only depends upon the existence of logging devices in the first place, but also on what kind of information is to be expected from such files that could contribute to clarifying the cause of the damage.
– How difficult it will be for those injured by an AI system to prove that it contributed to causing their harm will also depend on which tort theory they can rely on in order to pursue their claim. If the only available claim is based on fault liability, triggered by some blameworthy conduct attributable to the defendant, claimants need to show that their damage was indeed caused by such behaviour. If the applicable law provides for strict liability instead, holding someone liable for a risk within her sphere, it is not any specific flawed conduct that claimants have to show but merely that this risk attributable to the defendant materialised, without necessarily having to prove how. In the case of harm possibly caused by the operation of an AI system, at least that still needs to be established, which may not necessarily be self-evident in all cases though, particularly in light of the connectivity and openness of such technology.
– In the case of fault liability, the greatest challenge for a victim whose damage was caused involving an AI system is to identify human conduct which significantly impacted upon the course of events at all. Unlike other technology, where a person will be actively involved in operating it and whose behaviour will influence the use of the technology as such, AI is often used to replace that human factor in the operation entirely or at least to a large extent, the best example being self-driving cars, as their colloquial name already indicates. Therefore, at least in the immediate events preceding the infliction of harm, there will often be no human actor involved at all who played a decisive role, unless the user or operator failed to update or monitor the system as required (a duty the extent of which is in itself yet hard to predict). The victim will hence often have to go further back in the timeline in order to identify conduct that may have at least contributed to the causal chain of events, and then find evidence supporting the conclusion that such conduct was faulty.
– Identifying harmful conduct will be the more difficult, the more independently the AI system is designed to behave, or – figuratively speaking – the blacker the box is. After all, while human conduct is literally visible and


can be witnessed, identifying the processes within an AI system and persuading the court thereof seems much more challenging, even if the defendant’s conduct relating to the AI system can be traced back with convincing evidence.
– While fault liability is a basis for tortious liability in all European legal systems, there are important differences between these jurisdictions when it comes to the details, which will impact significantly upon the outcome of any future case involving AI systems. This includes, in particular, the question of who has to prove fault in practice. Some jurisdictions apply the default rule, requiring the claimant to prove this prerequisite of liability (though subject to exceptions – where further differences lie), others generally charge the defendant with proving that she was not at fault. These differences come in addition to the aforementioned variations regarding the standard of proof, which apply correspondingly to proving fault.
– To the extent a legislator should decide to impose concrete rules of conduct, by introducing, for example, monitoring or updating obligations, any disregard of such duties may be treated distinctly by the courts, as some jurisdictions alleviate or shift the burden of proving fault if the defendant or someone in the latter’s sphere violated such express rules of conduct designed inter alia to prevent harm. To what extent we will see such regulation in this field, and whether it will happen on a purely domestic or also on the EU level, remains to be seen.
– Apart from differences regarding the burden of proving fault, jurisdictions in the EU also differ with respect to the applicable standard of care and the way it is determined. This is decisive inasmuch as it defines the threshold to determine whether causal conduct is sufficiently blameworthy to justify holding the defendant liable. Often, a comparison is drawn to a ‘reasonable’ or similarly ideal person, but in the absence of experience in the case of novel case scenarios, it is for the courts to define what should and could have been done under the circumstances.
– The reduced factual relevance of human conduct to be expected in future tort cases involving AI systems, at least in the immediate environment of their operations, will also impact upon the significance of vicarious liability, ie the attribution of someone else’s conduct to the liable person. This variant of liability is still being discussed in the context of liability for AI as a potential basis for analogy, based on the (convincing) argument that if someone is to be held liable for the conduct of another to whom the former had outsourced tasks of her own, the same should apply correspondingly if that auxiliary is not a human, but an AI system instead. However, this should not distract us from the fact that vicarious liability is a rather diverse concept in a European comparison.


Some jurisdictions are very restrictive in tort law and only in rather exceptional cases attribute the conduct of an auxiliary to her principal, whereas other countries are much more generous in that respect. Further differences are evident with respect to the expected relationship between the auxiliary and the principal (such as employment), or the actual context in which harm was caused by the former.
– The aforementioned problems with respect to liability triggered by human conduct do not arise if the reason for holding someone liable is not some flawed behaviour, but rather a risk within the defendant’s sphere whose source is in her control and from which she benefits. While all European jurisdictions have at least some instances of strict liability in addition to and alongside fault liability, there are tremendous differences both with respect to the range of risks covered by such no-fault regimes as well as with respect to the details of how such liabilities are designed. This is best illustrated with the example of traffic accidents: most, but not all EU jurisdictions provide for strict liability for the motor vehicles involved. If they do, their liability regimes differ substantially, however, for example, with respect to which victims are covered by it, and which can only resort to traditional fault liability instead. Some countries put a cap on such liability, others do not. Some allow alternative paths towards compensation at the victim’s choosing, others do not. Some allow a wider range of defences, others are extremely restrictive. Yet others, as mentioned, have no strict liability for motor vehicles at all and apply only fault liability instead.
– Also when it comes to drones, the current legal landscape is quite diverse: while many jurisdictions have at least some strict liability regime in place for ground damage, for example, scope and details thereof vary considerably throughout Europe. Such differences are evident already with regard to what kind of drones are subject to such a special liability regime, but also with respect to the range of defences available to the defendant, the degree to which contributory conduct by the claimant is considered, and whether and to what extent the amount of possible compensation is capped. In those jurisdictions where all or at least some drones are not covered by strict liability, fault liability applies instead, subject to the general variations already mentioned above.
– The catalogue of risks subjected to strict liability in any given jurisdiction in general differs from country to country. All address some more or less specific risks (eg for motor vehicles of a certain kind or for means of transport more generally), which may also apply if such sources of risk include AI technology, such as self-driving cars, to the extent they match the definitions foreseen in such legislation.


– Some jurisdictions have introduced broader instances of strict liability, either instead of or in addition to strict liabilities limited to specific risk sources. Broader strict liability regimes may apply, for example, to dangerous substances, things, or activities without further specifications. While those broader variations may also extend to novel technologies not specifically addressed in other tort legislation, whether or not courts will be prepared to do so in the case of an AI system is yet unclear, as they would first have to qualify it as ‘dangerous’ within the meaning of such general clauses. After all, due to the absence of experience with such systems at least at first, and in light of AI systems being promoted as safer than traditional technology, it is difficult to foresee whether and where courts will be prepared to classify them as required by such general clauses. One must also bear in mind, though, that not all AI systems will be equally likely to cause frequent and/or severe harm, so the justification for subjecting them to strict liability will already for that reason differ depending upon the technology.
– Some jurisdictions hold persons liable for ‘things’ in their control. Whether or not this requires such ‘things’ to be defective, or whether this liability is designed as a true strict liability or just as a regime of – rebuttably or irrebuttably – presumed fault differs from country to country. This will also impact upon the likelihood that victims of an AI system in such countries will successfully pursue claims for compensation on that basis.
Summarising the current situation in the EU, one can observe that, while there are at least some strict liabilities in place in all European jurisdictions, it is clear that at present many AI systems do not fall under such risk-based regimes, leaving victims with the sole option of pursuing their claims for compensation via fault liability. However, the latter is triggered by human (mis)conduct and thereby not only requires victims to identify such behaviour as the cause of their harm, but also to convince the court that it was blameworthy. Due to the nature of AI systems, at least their immediate operation and therefore the activities directly preceding the damaging event will typically not be marked by human control. Connecting the last input of someone within the defendant’s sphere with the ultimate damage will therefore be more challenging than in traditional cases of fault liability. The procedural and substantive hurdles along the way of proving causation, coupled with the difficulties of identifying the proper yardstick to assess the human conduct complained of as faulty, may make it very hard for victims of an AI system to obtain compensation in tort law as it stands.


Mark A Geistfeld

Regulation of Artificial Intelligence in General and Autonomous Vehicles in Particular in the US*

A. Status of Artificial Intelligence Regulation in the United States
In the U.S., a patchwork of federal and state laws expressly addresses various issues involving artificial intelligence (AI). With the notable exception of medical devices (discussed in section 5) and autonomous vehicles (discussed in part B), this developing body of law has only limited relevance for the liability and insurance questions that will arise when AI causes bodily injury or property damage. No federal statute of this type expressly governs AI. Nevertheless, some existing statutory schemes can be applied to the safety performance of AI. For example, the U.S. Food and Drug Administration has already used its existing statutory authority to regulate the use of AI for some types of medical-support software and medical devices.



1. Scope of this Report
In addition to bodily injury or property damage, AI can cause other types of harms governed by existing laws. In some cases, for example, plaintiffs have alleged that the defendant trained the AI with datasets that violate state privacy or unfair competition laws.1 Other cases involve algorithms that allegedly violate anti-discrimination laws.2

* Sheila Lubetsky Birnbaum Professor of Civil Litigation, New York University School of Law. I thank Jack Barnett and Mitchell Pallaki for valuable research assistance. 1 For some recent examples, see Janecyk v. International Business Machines, Case No. 1:20-cv-00783 (N.D. Ill.) (filed January 22, 2020) (class-action suit alleging that defendant’s use of publicly available images to create a dataset for training machine-learning models violates the Illinois Biometric Information Privacy Act); Burke v. Clearview AI, Inc., Case No.: 3:20-cv-00370-BAS-MSB (S.D. Cal.) (filed February 27, 2020) (class-action suit alleging that defendant created a facial recognition database by ‘scraping’ more than 3 billion photos from online social media and other sources in violation of California’s Unfair Competition Law).


Yet others involve allegations that AI inaccurately characterized citizens as filing fraudulent claims for benefits.3 Cases like these further illustrate the extent to which existing laws can be applied to a variety of harms caused by AI. A comprehensive analysis of these forms of liability is outside the scope of this report, which instead focuses on the safety performance of AI and the associated liability and insurance issues. But to adequately depict the overall regulatory environment for AI, the report also describes myriad contexts in which governments at both the federal and state levels have started to address different types of regulatory issues posed by this emergent technology.

2. Statutes as a Source of Liability
A statute (or safety regulation promulgated thereunder) can either expressly or by implication create liabilities by giving individuals the right to recover compensatory damages from someone whose violation of the statute caused the claimant to suffer injury. Even if a statute does not give the plaintiff such an express or implied right of action, courts will award compensatory damages to a tort plaintiff for the defendant’s violation of a safety statute under certain conditions. Different doctrines define the relations between statutory law and the common law of torts, but all of them are based on the underlying principle that courts will defer to any legislative policy decision that is relevant to the resolution of a tort claim.4 The relevance of a federal or state regulation of AI for liability purposes, therefore, depends on whether it addresses some component of the safety problem required for resolution of a tort claim. Although these possibilities cannot be comprehensively analysed for the statutes and regulations described below, their relevance for liability purposes can be identified by asking whether they would help to specify the types of safety decisions tort law requires.

2 See Andrew Burt, Artificial Intelligence Liabilities are Increasing, LegalTech News, June 26, 2020, https://www.law.com/legaltechnews/ 3 Id. 4 See generally Mark A. Geistfeld, Tort Law in the Age of Statutes, 99 IOWA L. REV. 957 (2014) (relying on a range of tort doctrines to establish this principle of common-law deference).


3. Express Federal Regulation of Artificial Intelligence
Federal law can preempt (or wholly displace) state law based on the Supremacy Clause of the U.S. Constitution.5 The preemptive effect of a federal statute extends to regulations promulgated by federal agencies pursuant to such statutory authority. There are no federal statutes that expressly address AI or the liabilities and insurance issues that will arise when AI causes bodily injury or property damage. Instead, some federal statutes and regulations indirectly reference the use of AI by appropriating money for research or otherwise expressing regulatory interest in formulating more specific AI regulations, outlining the administration’s goals for expanding research in the area, or providing guidance to countries that are party to trade agreements.6 These measures do not implicate safety concerns and are not relevant for liability or insurance purposes.

4. Express State Regulation of Artificial Intelligence
In the absence of federal law that resolves an issue of liability or insurance, state law will govern these issues. As of June 2020, only seven states have adopted statutes that expressly address the use of AI in a manner that has potential liability implications.7
In early 2020, Illinois created the Artificial Intelligence Video Interview Act,8 which is directed at the use of AI during the employment process. The statute sets out specific guidelines that an employer must comply with if there is no human review of videos that applicants submit as part of the application process.9 Among other requirements, the employer must notify the applicant that AI is being used,10 explain what the AI does in its evaluation of submitted videos,11 and

5 U.S. CONST. art. VI, cl. 2 (commanding that the laws of the United States ‘shall be the supreme Law of the Land; … any Thing in the Constitution or Laws of any state to the Contrary notwithstanding’). 6 See Annex III. 7 For state regulation that indirectly addresses the use of AI, see Annex II. 8 2019 Ill. Legis. Serv. 101-260 (West) (codified at 820 ILL. COMP. STAT. 42/1 (2020)). 9 820 ILL. COMP. STAT. 42/5. 10 Id. at 42/5(1). 11 Id. at 42/5(2).


obtain the applicant’s consent.12 In addition, employers must limit which parties view these videos,13 and must destroy them if the applicant requests.14
In 2019, Alabama also expressly considered the effects of AI when it enacted specific requirements for employers seeking to relocate call centers.15 The rules, however, only exclude companies that utilize AI for this business reason from the reporting requirements that would otherwise apply.16
In a similarly tangential fashion, Oklahoma – apparently following a nearly identical statute previously enacted by Kentucky17 – adopted legislation addressing the use of AI in the professional-licensing context, through The Practice of Optometry Act.18 It defined an assessment mechanism as an optical tool, ‘includ[ing] artificial intelligence devices … that is used to perform an eye assessment.’19 The statute requires the AI to have specific capabilities, such as enabling ‘synchronous or asynchronous interaction between the patient and … optometrist,’20 ‘collect[ion of] the patient’s medical history,’21 compliance with the Americans with Disabilities Act22 and Health Insurance Portability and Accountability Act of 1996 (HIPAA),23 and the ability to ‘perform a procedure with a recognized Current Procedural Terminology code maintained by the American Medical Association, if applicable.’24 Most importantly, the statute requires that such assessment mechanisms, including those using AI, ‘maintain liability insurance, through its owner or lessee, in an amount adequate to cover claims made by individuals examined, diagnosed, or treated based on information and data, including any photographs, and scans, and other digital data generated by the assessment mechanism.’25 The statute requires optometrists to continue to perform their duties

12 Id. at 42/5(3). The same section explicitly prohibits ‘use [of] artificial intelligence to evaluate applicants who have not consented to the use of artificial intelligence analysis.’ Id. at 42/5. 13 Id. at 42/10. 14 Id. at 42/15 (requiring destruction ‘within 30 days after receipt of the request’). 15 ALA. CODE § 41-23-230 (2019). 16 Id. § 41-23-230(1) (‘The term[, Call Center,] does not include locations … at which similar calls are resolved in whole or in part by … artificial intelligence.’). 17 See Consumer Protection in Eye Care Act, 2018 KY. REV. STAT. & R. SERV. 44 (West) (codified at KY. REV. STAT. ANN. § 367.680 (West 2018)). Notes 55-68, infra, will indicate the relevant portions of the Kentucky Code which contains the similar if not identical language. 18 2019 Okla. Sess. Law Serv. 427 (West) (codified at OKLA. STAT. tit. 59, § 646.1 (2019)). 19 OKLA. STAT. tit. 59, § 646.1(1); cf. KY. REV. STAT. ANN. § 367.680(1)(b). 20 OKLA. STAT. tit. 59, § 646.2(A)(1); cf. KY. REV. STAT. ANN. § 367.6802(1)(a). 21 OKLA. STAT. tit. 59, § 646.2(A)(2); cf. KY. REV. STAT. ANN. § 367.6802(1)(b). 22 OKLA. STAT. tit. 59, § 646.2(A)(3); cf. KY. REV. STAT. ANN. § 367.6802(1)(c). 23 OKLA. STAT. tit. 59, § 646.2(A)(4); cf. KY. REV. STAT. ANN. § 367.6802(1)(d). 24 OKLA. STAT. tit. 59, § 646.2(A)(5); cf. KY. REV. STAT. ANN. § 367.6802(1)(e). 25 OKLA. STAT. tit. 59, § 646.2(A)(6); cf. KY. REV. STAT. ANN. § 367.6802(1)(f).


as if they had performed the assessment in person.26 The evaluation is permitted only if the patient signs a disclosure agreement that, among other things, warns the patient that the ‘assessment is not a replacement for an in-person’ exam,27 that it will not ‘generate an initial prescription,’28 and that the federal recommendation is to visit ‘an eye doctor one time a year or more.’29 Moreover, patients cannot use the assessment unless they ‘had an in-person comprehensive eye health examination within the previous’ two years.30 Beyond the administrative requirements applicable to the use of this new technology, the physician is ‘held to the same … standard of care as those in traditional in-person clinical settings.’31 Outside of the clinical setting and provision of care, the individual tasked with dispensing ‘bears the full responsibility for the accura[cy]’ in filling the prescription.32
These statutes illustrate some of the uncertainties that arise from applying the traditional tort standard of care – the reasonable professional standard, in this instance – to AI. Suppose the AI system provides statistically more accurate diagnoses than optometrists. What if the statistically more accurate diagnosis is wrong in a particular case that would have been diagnosed correctly by an optometrist? Would the optometrist be liable for having relied on the AI system that generally improves upon his or her own decision-making? And if an optometrist did not rely on such a system and missed a diagnosis that would have been made correctly by the AI, would the optometrist be liable? The point for present purposes is not that existing liability rules are incapable of resolving these issues, but

26 The doctor must evaluate the data collected by the machine, OKLA. STAT. tit. 59, § 646.2(B)(1); cf. KY. REV. STAT. ANN. § 367.6802(2)(a), confirm the patient’s identity, OKLA. STAT. tit. 59, § 646.2(B)(2); cf. KY. REV. STAT. ANN. § 367.6802(2)(b), keep the patient’s medical records for HIPAA, OKLA. STAT. tit. 59, § 646.2(B)(3); cf. KY. REV. STAT. ANN. § 367.6802(2)(c), and must sign off on all prescriptions or diagnoses, OKLA. STAT. tit. 59, § 646.2(B)(4); cf. KY. REV. STAT. ANN. § 367.6802(2)(d). The Kentucky statute also reiterates the necessary conditions that must be met before AI machinery is used for an eye exam. KY. REV. STAT. ANN. § 367.6802(2)(e)-(f) (allowing use of the ‘assessment mechanism’ for glasses and contacts prescriptions only if the patient is eighteen and has had a comprehensive exam in the last two years with the added requirement that contact lens evaluations must occur after the initial prescription and first follow-up or renewal). 27 OKLA. STAT. tit. 59, § 646.2(C)(1); cf. KY. REV. STAT. ANN. § 367.6802(2)(a). 28 OKLA. STAT. tit. 59, § 646.2(C)(2); cf. KY. REV. STAT. ANN. § 367.6802(2)(b). 29 OKLA. STAT. tit. 59, § 646.2(C)(4); cf. KY. REV. STAT. ANN. § 367.6802(2)(d). 30 OKLA. STAT. tit. 59, § 646.2(C)(3); cf. KY. REV. STAT. ANN. § 367.6802(2)(c). 31 OKLA. STAT. tit. 59, § 646.2(D); cf. KY. REV. STAT. ANN. § 367.6802(4). 32 OKLA. STAT. tit. 59, § 646.6(A); cf. KY. REV. STAT. ANN. § 367.684(1).


rather that these statutes do not obviously clarify matters and reduce uncertainty about potential liabilities.33
In 2018, Ohio enacted a statute addressing the use of AI in real estate and other appraisal companies.34 The statute defines an ‘[a]utomated valuation model’ as a ‘computer software program that analyzes data using an automated process, such as regression, adaptive estimation, neural network, expert reasoning, or artificial intelligence programs, that produces an output that may become a basis for appraisal or appraisal review.’35 The only individuals allowed to perform appraisals are those ‘licensed or certified under this chapter to do the appraisal.’36 However, this prohibition is limited in scope as it ‘does not apply to … an automated valuation model or report based on an automated valuation model … [created] to validate or support the value conclusion provided by the person licensed or certified … to do the appraisal.’37 Companies that utilize appraisal mechanisms of this type are subject to specific record-keeping requirements, including ‘the name of th[e automated] system’ used to generate the appraisal.38 Moreover, all parties working for the appraisal company are barred from ‘ordering an automated valuation model in connection with a financing transaction unless … there is a reasonable basis to believe that the initial appraisal was flawed,’39 the AI was

33 To be sure, the liability rule must at least implicitly incorporate human decision-making into the inquiry in order to attribute legal responsibility to a defendant. The AI, however, is the product of human decision-making, and so a liability rule that focuses on the performance of the AI in question is still sufficiently linked to human decision-making. Cf. Meg Leta Jones, The Ironies of Automation Law: Tying Policy Knots with Fair Automation Practices Principles, 18 VAND. J. ENT. & TECH. L. 77, 92-96 (2015) (explaining that historic legislation and legal precedent handling railroad safety as well as warrantless searches was dependent upon the presence of human action in the causal chain in order to attach liability); Kenneth Anderson & Matthew C. Waxman, Law and Ethics for Autonomous Weapon Systems: Why a Ban Won’t Work and How the Laws of War Can 15-17 (Am. U. Wash. C.L. Research Paper No. 2013-11, 2013), (explaining the hesitation of using automated weapons systems given the major criticism that humans need to be a part of the decision chain ethically as well as in order to hold parties accountable if something goes wrong). 34 Appraisal and Appraisers – Change in Law – Regulation, 2018 Ohio Laws 67 (amending OHIO REV. CODE ANN. §§ 4763.01, 4768.01 among other provisions). 35 OHIO REV. CODE ANN. § 4763.01(U); cf. id. § 4768.01(I); Nebraska Real Property Appraiser Board, 298 NEB. ADMIN. CODE § 1-001.02 (2019) (‘Automated Valuation Model means any computer software program that analyzes data using an automated process. The program may use regression, adaptive estimation, neural networking, expert reasoning, and/or artificial intelligence.’). 36 OHIO REV. CODE ANN. § 4763.19(A). 37 Id. § 4763.19(B). 38 Id. § 4768.10(B)(1). 39 Id. § 4768.11(A)(9)(a).


utilized ‘pursuant to a bona fide pre- or post-funding appraisal review or quality control process,’40 or ‘[a] second appraisal is required under state or federal law.’41 This scheme implies that just as with the optometry statutes,42 Ohio’s legislation conceptualizes the use of AI as a secondary check and does not explicitly allow the regulated entities to exclusively rely on it.43 For this reason, the legislation raises the same types of liability questions discussed above in relation to the standard of care governing optometrists who use AI.
Finally, Mississippi recently adopted a statute that incorporates AI into its regulations governing the medical field.44 The ‘utilization of Artificial Intelligence’ for medical treatment is considered a ‘complementary, alternative, or regenerative medicine/therapy’ for statutory purposes, because it is not customary medical practice.45 The statute, like others previously described in this section, holds licensed medical practitioners to the same standards of care as would otherwise be in place if they had not used an innovative medical treatment.46 This provision means that the provider must satisfy a rigorous informed consent requirement,47 as well as preserve documentation of the evaluation48 and course of

40 Id. § 4768.11(A)(9)(b). 41 Id. § 4768.11(A)(9)(c). 42 See supra notes 17-31 and accompanying text. 43 Cf. 298 NEB. ADMIN. CODE § 1-001.02A (‘An automated valuation model is a tool that delivers an estimation or calculation, and is not in itself an appraisal under NEB. REV. STAT. § 76-2204, or by itself a report under NEB. REV. STAT. § 76-2216.02. If the output from an automated valuation model is communicated as an analysis, conclusion, or opinion of value concerning identified real estate or identified real property that implies the exercise of judgment to the client, intended user or the public by any person not exempt under NEB. REV. STAT. § 76-2221, the analysis, conclusion, or opinion of value is an appraisal under NEB. REV. STAT. § 76-2204 and communication of the analysis, conclusion, or opinion of value is a report under NEB. REV. STAT. § 76-2216.02.’). Thus, the Nebraska code explicitly requires the appraisal value to be treated merely as support, and if any individual were to treat it as conclusory then the licensed party would be liable as if she had created the appraisal without the help of a computer. 44 Practice of Medicine: Complementary and Alternative Therapies, 30-2635 MISS. CODE R. § 13.2 (LexisNexis 2020). 45 Id. § 13.2(B). 46 Id. § 13.5 (‘Parity of evaluation standards should be established for patients, whether the licensee is using conventional medical practices or alternative therapy.’). 47 Id. § 13.4 (requiring informed consent that details the parties involved, the manner in which information was discussed between the doctor and patient, ‘overt agreement from the patient with the licensee’s determination about … [the] appropriate[ness]’ of the treatment, the ‘benefits and risks’ of standard as opposed to alternative treatment, and the ‘right to withdraw … without denial of standard of care’). 48 Id. § 13.5(1)-(4).


treatment the patient received.49 The doctor must also be properly educated in the alternative treatment,50 and seek to provide such treatments only if the benefits outweigh the risks in light of the patient’s specific needs.51 The legislation also mandates that any advertising of alternative treatments must not be misleading and must be backed by research from ‘peer-reviewed publications.’52 To enforce this strict regulation of the use of alternative treatments, including when AI assists medical diagnoses, the statute states that provision of such treatment ‘outside of the … regulations stated herein constitutes unprofessional conduct, dishonorable or unethical conduct likely to deceive, defraud or harm the public’53 and carries a penalty, such as having one’s medical license denied, suspended, or revoked depending upon the severity of the transgression.54
The Mississippi statute further illustrates some of the legal uncertainties that will arise when AI causes bodily injury. For liability purposes, physicians are held to the standard of care defined by customary practices for the procedure in question. Adherence to a customary practice does not necessarily guarantee positive health outcomes. Suppose that instead of following such a customary procedure, a physician used AI and did not obtain a positive health outcome. The adverse health outcome or injury would not conclusively establish malpractice liability – the same outcome could have occurred if the physician had followed customary procedures. In cases like this, what does it mean to hold physicians to the same standard of care as if they had not used AI?

5. Application of Existing Statutes to Artificial Intelligence
Regardless of whether a newly enacted statute or regulation expressly addresses AI, existing statutory and regulatory schemes can still apply to these technologies. The most prominent example involves medical support software and devices. Based on its existing regulatory authority, the U.S. Food and Drug Administration (FDA) ‘has already cleared or approved around 40 AI-based medical devices.’55 The earliest approval was for ‘Sedasys,’ which enabled sedation based

49 Id. § 13.7. 50 Id. § 13.8. 51 Id. § 13.6. 52 Id. § 13.9. 53 Id. § 13.10. 54 MISS. CODE ANN. § 73-25-29 (West 2019). 55 Sara Gerke et al., Ethical and Legal Challenges of Artificial Intelligence-Driven Health Care, in ARTIFICIAL INTELLIGENCE IN HEALTHCARE (Adam Bohr & Kaveh Memarzadeh eds., forthcoming 2020)


on computer technology. Approval was only granted ‘after Johnson & Johnson agreed to only use the system for simple procedures,’ and as long as the staff ‘anesthesiologist [was] … on-call to handle any emergencies.’56 Since then, the FDA’s approval process has permitted widespread use of the very first entirely autonomous diagnostic system, called IDx-DR.57 After a doctor uploads the patient’s retinal scan, ‘the IDx-DR software then provides the physician with the recommendation either to rescreen in 12 months or to refer the patient to an eye specialist when more than mild diabetic retinopathy is detected.’58 Other examples include machine-learning applications that assist with cardiac magnetic resonance59 or highlight suspect areas that might indicate a wrist fracture.60
The increased number of requests to the FDA seeking approval for such products has led the agency to update its guidelines and regulations. It provided new draft guidelines soliciting comments on clinical-decision support software,61 which would apply directly to AI because the FDA has encouraged companies to use the lessons from past technology to serve as guidance for creating new approval applications. The FDA also issued a discussion paper specifically addressing the regulatory structure that it might implement to govern medical products using AI and machine-learning software.62 In January 2021, the agency announced a five-part Action Plan moving towards implementation of this regulatory framework for treating AI/machine-learning software as a ‘medical device’ that the agency can regulate under its existing statutory authority.63
The relevance of other statutory and regulatory schemes for AI depends on how one defines the technology. There appears to be a generational gap in how AI has been defined over time. The term ‘artificial intelligence’ was first used in the






Department of Defense Authorization Act, 1984,64 which appropriated money for the ‘fifth-generation artificial intelligence computers.’65 Although the Act does not define ‘artificial intelligence,’ the context of its use as an adjective suggests that the term was believed at the time to apply to any nonhuman manner of computation. This interpretation is substantiated by the use of the term in an Illinois health statute passed in 1987, which requires that the government protect the ‘security of health data including … [data] processed by a computer or other type of artificial intelligence.’66 Similarly, the Washington legislature in 1994 defined ‘virtual reality’ as ‘any computer or electronic artificial-intelligence-based technology that creates an enhanced simulation,’ although this portion of the statute was vetoed by the governor and never enacted.67 In contrast, the term ‘artificial intelligence’ was not included in the High-Performance Computing Act of 1991, which focused on the need for advances in computer science and technology, indicating that there was no common understanding of the term.68
As this prior usage indicates, if the definition of AI is broadened, the universe of laws and regulations expands to the limit in which the concept covers ‘automation’ generally. As one paper describes it, the concept of automation ‘refers to: (1) the mechanization and integration of the sensing of environmental variables through artificial sensors, (2) data processing and decision making by computers, and (3) mechanical action by devices that apply forces on the environment or information action through communication to people of information processed.’69 In the realm of automation more generally, the federal and state governments have adopted rules with respect to phone calls,70 contracts,71 and the use of such

64 Pub. L. No. 98-94, 97 Stat. 614 (1983).
65 Pub. L. No. 98-94, Title II, § 205(b), 97 Stat. 624.
66 410 Ill. Comp. Stat. Ann. 520/6(e) (West 1987) (emphasis added).
67 1994 Wash. Legis. Serv. 7 § 802 (West) (section vetoed) (emphasis added).
68 Pub. L. No. 102-194, 105 Stat. 1594 (1991). It is worth noting that it is possible the term was understood by some, as the National Defense Authorization Act for Fiscal Year 1991, passed a couple of months prior, included grants to schools that were ‘national recognized center[s] conducting artificial intelligence research and education in the areas of natural language and speech processing and task oriented computer animation.’ Pub. L. No. 101-510, Title II, part E, § 243(B)(1), 104 Stat. 1485 (1990). However, given the close proximity of the two acts, the fact that ‘artificial intelligence’ did not appear in the advanced technology law still strongly militates in favor of the view that AI had not taken on the specialized technical definition commonly attached to it today.
69 Jones, supra note 42, at 84.
70 Id. at 97-98.
71 The Uniform Electronic Transactions Act, which has been adopted by some states, defines an ‘electronic agent’ as ‘a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part without review or action by an individual.’ E.g., Okla. Stat. Ann. tit. 12a, § 15-101 (West 2019);


For example, the Coast Guard has adopted regulations requiring that any vessel guided by autopilot must still remain under the direct supervision of the head officer or captain of the ship, who has the ability ‘to immediately establish manual control.’72 The Federal Aviation Administration (FAA) has also instituted regulations addressing autopilots in airplanes, which are more detailed than those of the Coast Guard, but still require human action: ‘[t]he pilot in command of an aircraft is directly responsible for, and is the final authority as to, the operation of that aircraft.’73 This includes a responsibility not to ‘operate an aircraft in a careless or reckless manner so as to endanger the life or property of another.’74 Although air carrier pilots can ‘operate an aircraft without a second in command, if it is equipped with an operative approved autopilot system,’75 such autopilots cannot be operated at low altitudes76 and must be airworthy.77 To satisfy these requirements, a pilot must have the ability to use ‘quick disengagement controls’ that are ‘readily accessible to each pilot while operating the control wheel (or equivalent),’ and when utilized under normal conditions do not cause more than a minor ‘transient response [to] … the airplane’s flight path.’78 In light of the growing reliance on automated flight controls,79 a recent report by the Inspector General attempted to evaluate the effectiveness of pilot training, and what hazards might be posed by automated technology.80

N.J. Stat. Ann. § 12A:12-1 (West 2013); 73 Pa. Stat. and Cons. Stat. Ann. § 2260.101 (West 2007); Colo. Rev. Stat. Ann. § 24-71.3-101 (West 2002); Ala. Code § 8-1A-1 (2001); Ark. Code Ann. § 25-32-101 (West 2001); La. Stat. Ann. § 9:2601 (2001); N.C. Gen. Stat. Ann. § 66-311 (West 2001); Tenn. Code Ann. § 47-10-101 (West 2001).
72 46 C.F.R. § 28.875(c) (2020) (commercial fishing industry vessels); id. § 35.20-45 (tank vessels); id. § 78.19-1 (passenger vessels); id. § 97.16-1 (cargo and other miscellaneous vessels); id. § 109.585 (mobile offshore drilling units); id. § 122.360 (small passenger vessels); id. § 131.960 (offshore supply vessels); id. § 140.670 (towing vessels); id. § 167.65-35 (public nautical school ships); id. § 185.380 (small passenger vessels under 100 gross tons).
73 14 C.F.R. § 91.3 (2020).
74 Id. § 91.12(a)-(b).
75 Id. § 135.105(a).
76 Id. § 125.329.
77 Id. § 25.1329.
78 Id. § 25.1329(a), (d).
79 See, e.g., Oriana Pawlyk, The Air Force is Now Accepting Bids to Build R2D2-Like ‘Skyborg’ Copilots, Military.com (May 22, 2020), (Air Force flight operations).


While noting that the ‘industry and flying public have benefited from increased amounts of highly reliable automation,’81 the report ultimately concluded that pilots need to be able to maintain strong manual flying skills.82 Thus, within the flight context, there is still a strong belief that human presence is a necessary and essential component of the industry moving forward.83
Although drone technology has not reached complete automation, drones or unmanned aircraft systems (UASs) represent an initial foray by the FAA into regulating automation and the separation between the technology and the humans who utilize it. The extent of federal regulations on the topic is found in part 107 of the Code of Federal Regulations.84 As a result of the FAA Modernization and Reform Act of 2012,85 FAA regulations are responsible for ‘the registration, airman certification, and operations’ of drones in the U. S.86

80 Office of the Inspector Gen., Dep’t of Transp., Enhanced FAA Oversight Could Reduce Hazards Associated with Increased Use of Flight Deck Automation (2016), .
81 Id. at 3.
82 Id. at 14 (‘These improvements,’ which include increased training and inspection guidance, ‘can help ensure that air carriers create and maintain a culture that emphasizes pilots’ authority and manual flying skills.’).
83 Ryan Calo, Robots in American Law 20-22 (Univ. of Wash. Sch. of Law Research Paper No. 2016-04, 2016) [hereinafter Calo, Robots], (describing Brose v. United States, 83 F. Supp. 373 (N.D. Ohio 1949), which rejected the government’s view that it could be absolved from liability when the autopilot was engaged, because it was ultimately the pilot’s responsibility to remain vigilant at all times to avoid accidents).
84 14 C.F.R. § 107.
85 Pub. L. No. 112-95, Title III, Subtitle B, §§ 332-33, 126 Stat. 11, 73-76 (2012) (mandating that the Department of Transportation and FAA create rules to incorporate UASs into the National Airspace System).
86 14 C.F.R. § 107.1(a). Given that the National Airspace is federally regulated, and state-level efforts to regulate it would be preempted, see generally 49 U.S.C. § 41713, the focus of this section is on federal regulation. However, it is worth noting that some states have also attempted to restrict drone use. See, e.g., 2013 Idaho Sess. Laws 328 (West) (codified at Idaho Code Ann. § 21-213 (West 2013)) (restricting drone use for surveillance to those properly issued by a warrant, in emergencies, or if an individual retains an easement over a property, and granting individuals subject to violations of these laws a civil cause of action); Freedom from Drone Surveillance Act, 2013 Ill. Legis. Serv. 98-569 (West) (codified at 725 Ill. Comp. Stat. Ann. 167/1 (West 2014)) (limiting use of drones by law enforcement to emergency scenarios and when a warrant is issued, and outlining specific responsibilities for storing and deleting information collected by such use).


These systems are defined as ‘aircraft operated without the possibility of direct human intervention from within or on the aircraft’ and so are still dependent upon a human operator.87 Similar to the regulations of autopilot use,88 the regulations mandate that there be a ‘remote pilot in command [who] is directly responsible for and is the final authority as to the operation’ of the drone.89 However, civilian operators are rather limited in the scope of their operations: the ‘entire flight’ must be operated within the line of sight of either the ‘remote pilot in command, the visual observer … or the person manipulating the flight control,’90 no person is able to operate ‘more than one unmanned aircraft at the same time,’91 and these drones ‘may [not] operate … over a human being’ who is not part of the operation or is not adequately covered.92 The regulations enable individuals to seek a waiver of these requirements,93 but only ‘if the Administrator finds that a proposed … operation can safely be conducted.’94 Most recently, the FAA published a notice of its intent to require drones to have ‘remote identification’ capabilities in order to further integrate their piloting into the existing framework for managing the airspace.95

6. Application of State Tort Law to Artificial Intelligence

In the absence of either federal or state statutes or regulations that dictate otherwise, state tort law will govern cases in which AI technologies cause injury.96 Indeed, both the Louisiana and Tennessee statutes described above expressly contemplate this outcome. In general, someone who has been physically harmed by AI will have four potential tort claims:
(1) If the operator negligently deployed the AI, then the victim can recover based on proof of fault, causation, and damages.

87 14 C.F.R. § 107.3.
88 See supra notes 71-72 and accompanying text.
89 14 C.F.R. § 107.19(a)-(b).
90 Id. § 107.31(a).
91 Id. § 107.35.
92 Id. § 107.39.
93 Id. § 107.205(c), (e), (f).
94 Id. § 107.200(a).
95 Remote Identification of Unmanned Aircraft Systems, 84 Fed. Reg. 72,438 (proposed Dec. 31, 2019) (to be codified at 14 C.F.R. pts. 1, 47, 48, 89, 91, and 107).
96 For a general overview of the laws discussed below, see Mark A. Geistfeld, Tort Law: The Essentials (2008); Mark A. Geistfeld, Principles of Products Liability (3d ed. 2020).


(2) If the AI determines the physical performance of a product, the victim can recover from the manufacturer under strict products liability by proving that the product contained a defect that caused the injury. Alternatively, if the AI only provides a service that was otherwise reasonably used by the operator, then the victim must prove that the AI performed in an unreasonably dangerous manner because of negligence by the manufacturer or other suppliers of the AI.
(3) The owner can be subject to negligence liability for negligent entrustment of the AI to the party who negligently caused the victim’s injury, or under limited conditions, the owner can be vicariously liable for the user’s negligence.
(4) Finally, the victim can recover from the operator and potentially the owner under the highly restrictive rule of strict liability that would apply only if the AI technology is abnormally dangerous despite the exercise of reasonable care.

A plaintiff can pursue any or all of these claims when available. If there are multiple tortfeasors, the liabilities will be apportioned among them based on the rules of comparative responsibility that virtually all states have adopted. Apportionment might be quite difficult when the components of an AI system come from various suppliers and it is unclear which component caused the AI to perform in the manner that caused injury.

Plaintiffs in all states bear the burden of proving negligence, which will be hard to do in many cases of AI-caused injury. For example, negligent deployment could be at issue when injured patients claim that medical professionals committed malpractice by using AI. Medical malpractice in the U. S. involves departure from customary procedures, so there might be uncertainty about how physicians should reasonably use AI technologies before customary practices have been established.

These problems of proof might seem to be ameliorated in the realm of defective products governed by strict liability, but plaintiffs in almost all states must prove that the product is defective. Proof of defect is often functionally equivalent to proof of negligence and could be problematic in the AI context. For example, the ‘black box’ nature of machine-learning algorithms can make it hard to determine whether the AI is defective in some respect or otherwise deployed in an unreasonably dangerous manner. These problems have generated a lively debate among U. S. legal scholars about how existing liability rules might apply to AI-caused injuries, and whether fundamental reforms are required to adequately resolve these matters.97
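The apportionment arithmetic itself is simple; the difficulty lies in establishing the shares. As a purely hypothetical illustration (the figures below are invented for exposition and are not drawn from any statute or case discussed in this study): if damages are assessed at $D = \$100{,}000$ and the factfinder assigns comparative responsibility shares of $s_1 = 0.6$ to the AI supplier and $s_2 = 0.4$ to the operator, then each defendant’s share of the liability is $s_i \times D$, that is, \$60,000 and \$40,000 respectively, subject to state-specific rules on whether a defendant can be made to pay more than its own share. The practical problem identified above is determining the shares $s_i$ in the first place when it is unclear which component of the AI system caused the injurious performance.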



97 For early examples, see Matthew U. Scherer, Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies, 29 Harv. J.L. & Tech. 353 (2016); W. Nicholson Price II, Black-Box Medicine, 28 Harv. J.L. & Tech. 419 (2015).


7. Conclusion

In addition to the subjects surveyed in this section, AI technologies have factored into regulations involving other issues like facial recognition and privacy. But as illustrated by the statutes and regulations described in this section, both the federal and state governments so far have only regulated AI in a piecemeal fashion instead of adopting comprehensive measures governing the safe performance of AI technologies, the associated liability rules, and mandated insurance requirements. Until such statutes or regulations are enacted, state tort law will determine liabilities involving bodily injury and property damage caused by AI technologies. Indeed, the regulatory efforts so far often retain responsibility for human actors, who will then be subject to the tort duty to exercise reasonable care while using AI. Although the tort rules are well established, it is an open question whether they will clearly resolve the host of issues posed by this emerging technology.

B. Status of Autonomous Vehicle Regulation in the United States

Aside from the federal regulation of medical devices that incorporate AI, the safety performance of AI in motor vehicles has garnered the most attention at both the federal and state levels.

1. Current Regulatory Framework Governing Liability and Insurance Issues for Motor Vehicle Crashes

The traditional method of compensating the victims of a motor vehicle crash in the U. S. involves the use of negligence liability through the tort system. Most crashes are caused by driver negligence and routinely result in tort litigation. Consequently, virtually every state requires, under its financial responsibility laws, that the owner of a motor vehicle have a minimum level of insurance protection to pay for liabilities owed to third parties due to negligent operation of the vehicle. The required policy limits are modest (such as $25,000 per person for bodily injuries and $10,000 for property damage). Injuries suffered by those in the vehicle, and damage to the vehicle itself, are also covered by the (first-party) insurance purchased by the owner. Non-economic losses (pain and suffering) are not covered by these policies. In most states, the owner voluntarily purchases this insurance and can select higher policy limits in exchange for the payment of higher premiums.


In states with no-fault insurance, the owner of a motor vehicle is obligated to purchase such coverage in a specified amount to cover economic loss (typically medical expenses and lost wages). The required policy limits are modest ($50,000 per person, with many states adopting even lower limits). For the injuries covered by mandatory insurance, the owner’s legal responsibility produces an economic result that is indistinguishable from a tort rule that makes the owner strictly liable for those harms – the owner in both instances pays for the injuries in question.
The regulatory requirements for insuring commercial trucks and vehicles differ from those governing automobiles. For example, federal law requires commercial trucks engaged in interstate commerce to have liability insurance for at least $5 million to cover the transport of hazardous materials other than oil.98 Taxicabs, by contrast, are regulated at the local (county or city) level, with insurance requirements varying across the country.99 These entities do not face liability rules different from those applicable to automobiles, although the commercial conveyance of passengers can subject the operator to a more demanding standard of care in some states, including California.
Federal law also governs the safety performance of motor vehicles. Pursuant to its statutory authority, the National Highway Traffic Safety Administration (NHTSA) can specify minimum performance standards for motor vehicles.100 As required by statute, these standards ‘shall be practicable, meet the need for motor vehicle safety, and be stated in objective terms.’101 To help ensure that these standards do not impede innovation, NHTSA cannot prescribe how manufacturers must meet the performance requirements. Manufacturers instead certify that their products satisfy the mandated performance standards.102

98 49 C.F.R. § 387.9.
99 In New York City, for example, the owners of licensed taxicabs must have third-party liability insurance of at least $100,000 per person and $300,000 per accident, as compared to the minimal insurance requirements of $25,000/$50,000 for ordinary motor vehicles.
100 NHTSA’s legislative purpose is to ‘reduce traffic accidents and deaths and injuries resulting from traffic accidents.’ 49 U.S.C. § 30101 (2012). To do so, NHTSA is authorized to ‘prescribe motor vehicle safety standards for motor vehicles and motor vehicle equipment in interstate commerce.’ Id. § 30101(1). ‘Motor vehicle safety’ for this purpose is defined as the ‘performance of a motor vehicle or motor vehicle equipment in a way that protects the public against unreasonable risk of accidents occurring because of the design, construction, or performance of a motor vehicle, and against unreasonable risk of death or injury in an accident and includes nonoperational safety of a motor vehicle.’ Id. § 30102(a)(8). A ‘motor vehicle safety standard’ is ‘a minimum standard for motor vehicle or motor vehicle equipment performance.’ Id. § 30102(a)(9). The regulations that NHTSA adopts are incorporated into 49 C.F.R. § 501 (2016).
101 49 U.S.C. § 30111(a) (2012).


Due to the supremacy of federal law over state law, these safety standards preempt (wholly displace) any state tort claims that would frustrate the regulatory objectives.103 Within this statutory scheme, ‘NHTSA’s authority is broad enough to address a wide variety of issues affecting the safety of vehicles equipped with [automated driving] technologies and systems.’104

2. Federal Regulation of Autonomous Vehicles

In Congress, there has been considerable bipartisan support for federal legislation that would create a comprehensive framework for regulating autonomous vehicles. Meanwhile, the federal regulators at NHTSA have only provided soft guidance to the industry – recommendations for best practices involving the development of this emergent technology.
In 2017, the U. S. House of Representatives passed a bill – without opposition – that would establish a regulatory framework for accelerating the development and safe use of automated driving technologies.105 A Senate committee then unanimously recommended passage of a similar bill, which ultimately died on the Senate floor.106 Committees from both the House and Senate have since been working together, and have issued at least thirteen sections of a newly proposed bill that so far largely tracks the provisions of the previously proposed legislation. This legislation, however, has stalled because of the pandemic and the 2020 presidential election.
Although federal legislation governing autonomous vehicles has yet to be enacted, the existing bills provide strong indications of the final statutory framework.107 Consistent with prior legislative practices dating back to the 1990s, the statute will probably require NHTSA to promulgate safety standards regarding the performance of autonomous vehicles.

102 Id. § 30115.
103 Geier v. American Honda Motor Co., 529 U. S. 861, 868–75 (2000).
104 Stephen P. Wood et al., The Potential Regulatory Challenges of Increasingly Autonomous Motor Vehicles, 52 Santa Clara L. Rev. 1423, 1501 (2012).
105 The bill was first unanimously passed (54-0) by the House Energy and Commerce Committee, and the ‘next day, it passed the whole House on a voice vote, meaning no records of individual votes were cast as there was no significant opposition.’ Summary of H.R. 3388: SELF DRIVE Act, GovTrack (last updated Oct. 18, 2017).
106 S. Rep. No. 115-187, at 1 (2017).
107 For more extensive discussion, see Mark A. Geistfeld, The Regulatory Sweet Spot for Autonomous Vehicles, 53 Wake Forest Law Review 337 (2018).


Recognizing that the rule-making process takes some time to initiate, the proposed bills all require NHTSA to begin that process within a prescribed time period (three years being the most common requirement). Once NHTSA has initiated the process, it will probably take at least another year before it adopts such standards.
The standards that NHTSA might adopt cannot yet be predicted. For present purposes, however, the issue of greatest relevance involves the impact that these standards will have on the state regulation of autonomous vehicles. In a recent report, NHTSA made the following policy statement:
Conflicting State and local laws and regulations surrounding automated vehicles create confusion, introduce barriers, and present compliance challenges. U. S. DOT will promote regulatory consistency so that automated vehicles can operate seamlessly across the Nation. The Department will build consensus among State and local transportation agencies and industry stakeholders on technical standards and advance policies to support the integration of automated vehicles throughout the transportation system.108

As this policy statement indicates, federal regulations will ultimately determine many of the safety performance standards required of autonomous vehicles. If an autonomous vehicle does not comply with an applicable federal safety standard, it will be defective in this respect and subject the manufacturer to strict tort liability under state law for physical harms caused by the defect. In the absence of federal regulation, state law will resolve the liability issues. Requirements for insurance and related matters fall outside the jurisdiction of NHTSA and will remain within the province of state law.

3. State Legislation Expressly Addressing Autonomous Vehicles

As of June 1, 2020, thirty-five states and the District of Columbia have enacted statutes expressly addressing autonomous vehicles. Fourteen of the enacted statutes have provisions pertaining to the operation of autonomous vehicles on public roads. The statutes largely focus on more basic issues related to the introduction and deployment of autonomous vehicles on public roads. While most statutes do not address liability issues, several address the related issues of duty and insurance.

108 U. S. Dept. of Transp., Automated Vehicles 3.0: Preparing for the Future of Transportation (Oct. 2018), .  


State legislation also establishes safe-operation requirements aimed at preventing accidents in the first place. In at least eleven states, governors have issued executive orders related to autonomous vehicles, which typically promote the testing and potential deployment of the technology.
State legislation that expressly addresses autonomous vehicles can be categorized in terms of those provisions that address (1) operating autonomous vehicles without a human operator; (2) operating autonomous vehicles only with a human operator; (3) operating autonomous vehicles on public roads; (4) determining liability in the event of a crash; (5) regulating duty in the event of a crash; (6) requiring insurance; (7) requiring minimal risk conditions; (8) granting special privileges; and (9) requiring research. The statutory texts of the provisions referred to are cited in Annex I.

(a) Operating Autonomous Vehicles Without a Human Operator

Ten states have enacted legislation that allows autonomous vehicles to operate without a human operator in the vehicle. In some states, in order to operate an autonomous vehicle without a human operator in the vehicle, there must be a remote human operator capable of taking control of the car in case of emergency. In other states, autonomous vehicles can operate without the involvement of any human operators.

(b) Operating Autonomous Vehicles Only with a Human Operator

Some states allow fully automated driving only if there is a human operator in the driver’s seat of the vehicle, who could take control of the vehicle in case of an emergency.

(c) Operating Autonomous Vehicles on Public Roads

While some statutes only impliedly allow autonomous vehicles on public roads, others explicitly address the subject. Of those that expressly allow autonomous vehicles on public roads, some states have additional requirements for operating autonomous vehicles on public roads (such as requiring a human operator in the front seat), while others allow operation on public roads without any further mandates.


(d) Determining Liability in the Event of a Crash

Few statutes expressly address the determination of liability in the event of a crash. In fact, only Tennessee specifies a unique liability framework for autonomous vehicle crashes:
‘(a) Liability for accidents involving an ADS-operated vehicle shall be determined in accordance with product liability law, common law, or other applicable federal or state law. Nothing in this chapter shall be construed to affect, alter, or amend any right, obligation, or liability under applicable product liability law, common law, federal law, or state law. (b) When the ADS is fully engaged, operated reasonably and in compliance with manufacturer instructions and warnings, the ADS shall be considered the driver or operator of the motor vehicle for purposes of determining (1) Liability of the vehicle owner or lessee for alleged personal injury, death, or property damage in an incident involving the ADS-operated vehicle; and (2) Liability for non-conformance to applicable traffic or motor vehicle laws.’ Tenn. Code Ann. § 55-30-106.

Additionally, Utah notes that an automated driving system is ‘responsible’ for compliant operation, while indirectly suggesting that it would be responsible for a crash caused by noncompliant operation except under limited conditions: ‘If a vehicle with an engaged level three ADS issues a request to intervene, the ADS is responsible for the compliant operation of the vehicle until disengagement of the ADS. (b) If a vehicle with an engaged level four or five ADS issues a request to intervene, the ADS is responsible for the compliant operation of the vehicle until or unless a human user begins to operate the vehicle. (3) The ADS is responsible for compliant operation of an ADS-dedicated vehicle.’ Utah Code Ann. § 11 41-26-104.

Other statutes either do not address liability issues, or state that traditional rules of liability govern autonomous vehicle crashes – the same outcome that occurs in states without any legislation whatsoever.


(e) Regulating Duty in the Event of a Crash

Several state statutes include provisions that require automated driving systems and human operators or passengers to take certain actions in the event of a crash. Generally, these provisions establish and clarify that, in the event of a crash, there is a duty to contact law enforcement and to stay on the scene of a crash until law enforcement arrives. While this does not directly relate to liability, these provisions could be relevant for determining damages and establishing independent tort (or even criminal) causes of action when an automated driving system or human operator violates the duty to remain at the scene of an accident.

(f) Requiring Insurance

Several state statutes require autonomous vehicle testing entities to procure insurance for their vehicles. These provisions differ in both the types (e.g. liability, indemnity, etc.) and the amount (ranging from $1.5 million to $5 million) of insurance that they require.

4. Application of State Tort Law to Autonomous Vehicles

In the event that an autonomous vehicle crashes, the owner’s or operator’s responsibilities for the injury costs would be governed by the insurance requirements discussed in subsection 1 above. Owners and operators could also be subject to negligence liability for improperly deploying fully autonomous vehicles. As explained above, plaintiffs bear the burden of proving the owner’s/operator’s negligence in such cases, which will be hard to do in many cases of AI-caused injury (notably due to the ‘black box’ nature of machine-learning algorithms). However, assuming that the vehicles are programmed so that they only operate within their intended operating domain, cases relying on the owner’s or operator’s negligence liability are likely to be rare. Given the current state of vehicle automation, liability will therefore most likely be focused on the manufacturer and other commercial suppliers of the vehicle and its component parts.109
In many cases, the liabilities not relying on negligence-based inquiries are easily resolved. For example, if the tort plaintiff can establish that a coding error caused the operating system to freeze or crash, in turn causing the autonomous vehicle to crash, the performance would be a ‘malfunction’ that subjects the manufacturer to strict tort liability in the vast majority of states.

109 For comprehensive discussion of how state tort law is likely to apply to the crash of an autonomous vehicle, see Mark A. Geistfeld, A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and Federal Safety Regulation, 105 California Law Review 1611 (2017).


Outside of easy cases of malfunction, however, it will often be difficult to determine whether the vehicle contains a defect that subjects the manufacturer to strict liability. The most complex liability issue involves cases in which the fully functioning operating system causes an autonomous vehicle to crash. The manufacturer would incur liability if the crash were caused by a defect attributable to the design of the operating system that causes the vehicle to perform in an unreasonably dangerous manner.
Each vehicle with the same operating system executes the dynamic driving task in the same manner. Consequently, the entire fleet is effectively guided by a single driver – the operating system that determines how this class of autonomous vehicles executes the dynamic driving task. Premarket testing supplies the driving experience that machine learning uses to improve the safety performance of an autonomous vehicle before it is commercially distributed in the market. The operating system would be defectively designed if it were not adequately tested and unable to make the necessary safety improvements via machine learning. In the vast majority of states, this question is resolved by the risk-utility test or its substantive equivalent, the modified consumer expectations test.110 Under this test, a design is defective if there is a reasonable alternative design with a disutility (or increased cost) that is less than the associated reduction of risk (or safety benefit of the design modification). Though not labelled as such, the test is effectively a negligence-based inquiry.
Application of this test raises a number of difficult issues. To prove that the design of a product is defective, the tort plaintiff must identify a reasonable alternative design that is within the ‘state of the art’ – a requirement that could permit the plaintiff to compare the autonomous vehicle’s safety performance to other (reasonably designed) autonomous vehicles that have already been commercially distributed by other manufacturers. This baseline is problematic because it would provide a considerable competitive advantage for the ‘first movers’ in the commercial distribution of autonomous vehicles that have already benefitted from extensive real-world driving experience. Instead, the appropriate baseline more likely involves a comparison of the autonomous vehicle’s safety performance with conventional motor vehicles. Such a benchmark depends on underlying criteria for evaluating the relative safety performance of an autonomous vehicle. What are the necessary road conditions? How many miles should be driven on freeways and in urban conditions?

110 See id. at 1635-47.


How many total miles must be logged by an operating system to generate sufficiently reliable crash data? What other metrics are required for adequately measuring safe performance? These issues are not clearly resolved by widely adopted tort rules and could easily invite protracted and costly litigation. Unless these issues are conclusively resolved by federal regulations, they will be governed by tort law in a manner that is likely to subject manufacturers and other suppliers to considerable legal uncertainty. If the tort plaintiff fails to prove a defect, compensation of damage caused by autonomous vehicles hinges on whether the damage would be covered by the owner’s or operator’s insurance (discussed in section 1 above).
As is true for AI in general, these problems have generated a lively debate among U. S. legal scholars about how existing liability rules might apply when autonomous vehicles crash, and whether fundamental reforms are required to adequately resolve these matters. To date, scholars have reached ‘the shared conclusion’ that elimination of a human driver will shift responsibility onto manufacturers as a matter of products liability law, with most tort litigation involving claims for design or warning defects. Beyond these general conclusions, ‘existing predictions part ways.’111
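The risk-utility comparison described above can be restated in stylized form; the notation below is offered purely as an illustrative formalization and is not notation used by the courts or elsewhere in this study:

\[ \text{design defect} \iff \Delta B < \Delta R , \]

where $\Delta B$ denotes the incremental cost (disutility) of a reasonable alternative design and $\Delta R$ denotes the associated reduction in expected accident costs (the safety benefit of the design modification). On this stylized account, the unresolved benchmarking questions listed above go to how $\Delta R$ could reliably be measured for an autonomous vehicle's operating system.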

5. Conclusion

Although the majority of states have enacted legislation directly regulating autonomous vehicles, very few have addressed the conditions for determining liability in the event of a crash. These statutes instead largely address the preconditions for operating autonomous vehicles on public roads, including rules that govern operators and technical features of the automated driving systems.
At this point, it is still possible to forecast how autonomous vehicles will affect issues of liability and insurance at a very general level. The transition to autonomous vehicles will reduce the incidence of crashes caused by driver error, shifting the emphasis from third-party liability insurance procured by the owner to the first-party insurance (also procured by the owner) covering the vehicle and its occupants. A greater proportion of injury costs will probably be shifted to manufacturers, because the autonomous vehicle executes the dynamic driving task instead of a human driver. Under existing laws, manufacturers will be liable for crashes caused by defects in the operating system, in addition to those caused by defects in the hardware of the vehicle – the only source of liability they now face for conventional motor vehicle crashes.

111 Id. at 1619 (documenting the range of scholarly perspectives about liability and the need for reform).


Although manufacturers will bear a greater proportion of injury costs, whether their total amount of liability will increase relative to current levels depends on the extent to which autonomous vehicles reduce the total number of crashes.

C. Annexes

Annex I: State Regulations of Autonomous Vehicles

1. State Legislation Allowing Autonomous Vehicles to Operate without a Human Operator in the Vehicle

Alabama ‘Notwithstanding any other provision of law, an automated commercial motor vehicle may operate in this state without a conventional driver physically present in the vehicle …’ Ala. Act No. 2019-496, § 3.



Florida ‘[A]n autonomous vehicle or a fully autonomous vehicle equipped with a teleoperation system may operate without a human operator physically present in the vehicle when the teleoperation system is engaged.’ § 316.1975, Fla. Stat.



Georgia ‘A person may operate a fully autonomous vehicle with the automated driving system engaged without a human driver being present in the vehicle, provided that such vehicle…’ GA Code Ann. § 40-8-11.


Iowa ‘A driverless-capable vehicle may operate on the public highways of this state without a conventional human driver physically present in the vehicle …’ Iowa Code § 321.515.



Louisiana ‘[A]n autonomous commercial motor vehicle may operate in this state without a conventional driver physically present in the vehicle …’ LSA-R.S. § 400.8.



Nevada ‘A fully autonomous vehicle may be tested or operated on a highway within this State with the automated driving system engaged and without a human operator being present within the fully autonomous vehicle …’ NRS § 482A.070.



New Hampshire ‘A testing entity may test ADS-equipped vehicles on public roadways of this state only if the testing entity has been approved for testing by the department after submitting the information required pursuant to this section.’ NH RSA § 242:1.



North Dakota ‘An autonomous vehicle with automated driving systems engaged does not require a human driver to operate on the public highway …’ N.D.C.C. § 39-01-01.2.



Tennessee ‘An ADS-operated vehicle may drive or operate on streets and highways in this state with the ADS engaged without a human driver physically present in the vehicle if the vehicle meets the following conditions …’ Tenn. Code Ann. § 55-30-103.


Texas ‘A licensed human operator is not required to operate a motor vehicle if an automated driving system installed on the vehicle is engaged.’ Texas Transportation Code § 545.453.



Utah ‘A motor vehicle equipped with a level four or level five ADS may operate in driverless operation on a highway in this state.’ Utah Code Ann. § 41-26-103.

2. State Legislation Allowing Automated Driving only with a Human Operator

California ‘The driver shall be seated in the driver’s seat, monitoring the safe operation of the autonomous vehicle, and capable of taking over immediate manual control of the autonomous vehicle in the event of an autonomous technology failure or other emergency…[e]xisting law authorizes the operation of an autonomous vehicle on public roads for testing purposes by a driver who possesses the proper class of license for the type of vehicle operated if specified requirements are satisfied.’ Cal. Veh. Code § 38750.



Vermont ‘An automated vehicle tester shall not test an automated vehicle on a public highway unless (1) The operator is (A) seated in the driver’s seat of the automated vehicle …’ 23 V.S.A. § 4203.


3. State Legislation Allowing Autonomous Vehicles on Public Roads

California ‘Existing law authorizes the operation of an autonomous vehicle on public roads for testing purposes by a driver who possesses the proper class of license for the type of vehicle operated if specified requirements are satisfied.’ Cal. Veh. Code § 38750.



Iowa ‘A driverless-capable vehicle may operate on the public highways of this state without a conventional human driver physically present in the vehicle …’ Iowa Code § 321.515.



New Hampshire ‘A testing entity may test ADS-equipped vehicles on public roadways of this state only if the testing entity has been approved for testing by the department after submitting the information required pursuant to this section.’ NH RSA § 242:1.



New York ‘[T]he New York state commissioner of motor vehicles may approve demonstrations and tests consisting of the operation of a motor vehicle equipped with autonomous vehicle technology while such motor vehicle is engaged in the use of such technology on public highways within this state for the purposes of demonstrating and assessing the current development of autonomous vehicle technology …’ N.Y. U.C.C. Law § 55.



North Dakota ‘An autonomous vehicle… may operate on the public highways of this state in full compliance with all vehicle registration, title, insurance, and all other applicable requirements under this title.’ N.D.C.C. 39-01-01.2.


Vermont ‘An automated vehicle shall not be operated on public highways for testing until the Traffic Committee as defined in 19 V.S.A. § 1(24) approves a permit application for automated vehicle testers that defines the scope and operational design domain for the test and demonstrates the ability of the automated vehicle tester to comply with the requirements of this section.’ 19 V.S.A. § 4203.



Washington ‘In order to test an autonomous motor vehicle on any public roadway under the department’s autonomous vehicle self-certification testing pilot program, the following information must be provided by the self-certifying entity testing the autonomous motor vehicle …’ R.C.W. 46.30.

4. State Legislation Addressing Liability in the Event of a Crash

Tennessee ‘(a) Liability for accidents involving an ADS-operated vehicle shall be determined in accordance with product liability law, common law, or other applicable federal or state law. Nothing in this chapter shall be construed to affect, alter, or amend any right, obligation, or liability under applicable product liability law, common law, federal law, or state law. (b) When the ADS is fully engaged, operated reasonably and in compliance with manufacturer instructions and warnings, the ADS shall be considered the driver or operator of the motor vehicle for purposes of determining (1) Liability of the vehicle owner or lessee for alleged personal injury, death, or property damage in an incident involving the ADS-operated vehicle; and (2) Liability for non-conformance to applicable traffic or motor vehicle laws.’ Tenn. Code Ann. § 55-30-106.


Utah ‘If a vehicle with an engaged level three ADS issues a request to intervene, the ADS is responsible for the compliant operation of the vehicle until disengagement of the ADS. (b) If a vehicle with an engaged level four or five ADS issues a request to intervene, the ADS is responsible for the compliant operation of the vehicle until or unless a human user begins to operate the vehicle. (3) The ADS is responsible for compliant operation of an ADS-dedicated vehicle.’ Utah Code Ann. § 11 41-26-104.



Louisiana ‘The provisions of this Part shall not be construed to repeal, modify, or preempt any liability that may be incurred pursuant to existing law applicable to a vehicle owner, operator, manufacturer, component part supplier, or retailer including any law that may apply to jurisdiction for any bodily injury or property damage claims arising out of this Part. All choice of law conflicts with respect to bodily injury or property damage claims shall be resolved in accordance with Louisiana law.’ LSA-R.S. § 400.8.

5. State Legislation Regulating Duty in the Event of a Crash

Alabama ‘When an accident occurs involving an automated commercial motor vehicle, the requirements of Chapter 10, Title 32, Code of Alabama 1975, shall be deemed satisfied if the vehicle remains on the scene of the accident and the vehicle, owner, a person on behalf of the owner, or operator promptly contacts appropriate law enforcement entities and communicates the information required by Chapter 10, Title 32, Code of Alabama 1975.’ Ala. Act No. 2019-496, § 5.



Georgia ‘Notwithstanding the provisions of this chapter to the contrary, when an accident involves a fully autonomous vehicle with the automated driving system engaged, the requirements of subsection (a) of Code Sections 40-6-270, 40-6-271, 40-6-272, 40-6-273, and 40-6-273.1 shall be deemed satisfied if such fully autonomous vehicle remains on the scene of such accident as required by law and such fully autonomous vehicle or operator promptly contacts a local law enforcement agency and communicates the information required by this chapter.’ GA Code Ann. § 40-6-279.



Iowa ‘In the event of an accident in which a system-equipped vehicle is involved, the vehicle shall remain at the scene of the accident and the operation of the vehicle shall otherwise comply with sections 321.261 through 321.273 where applicable and to the extent possible, and the vehicle’s owner or a person on behalf of the vehicle’s owner shall promptly report the accident to law enforcement authorities. If a system-equipped vehicle fails to remain at the scene of an accident or the operation of the vehicle fails to otherwise comply with sections 321.261 through 321.273 where applicable and to the extent possible as required by this section, the vehicle’s failure shall be imputed to the vehicle’s owner, and the vehicle’s owner may be charged and convicted of a violation of sections 321.261 through 321.273, as applicable.’ Iowa Code § 321.517.



Louisiana ‘If an accident occurs involving an autonomous commercial motor vehicle while the automated driving system is engaged, the autonomous commercial motor vehicle shall remain at the scene of the accident and the operator or any person on behalf of the operator of the autonomous commercial motor vehicle shall comply with the provisions of R.S. 32:398 relative to contacting the appropriate law enforcement agency and furnishing all relevant information.’ LSA-R.S. § 400.5.



Nevada ‘Any person responsible for the testing of an autonomous vehicle shall report to the Department, within 10 business days after a motor vehicle crash, any motor vehicle crash involving the testing of the autonomous vehicle which results in personal injury or property damage estimated to exceed $750. The Department shall prescribe by regulation the information which must be included in such a report.’ NRS § 482A.5.8.



North Carolina ‘Duty to Stop in the Event of a Crash. – If all of the following conditions are met when a fully autonomous vehicle is involved in a crash, then the provisions of subsections (a) through (c2) and subsection (e) of G.S. 20‑166 and subsections (a) and (c) of G.S. 20‑166.1 shall be considered satisfied, and no violation of those provisions shall be charged…’ N.C. Gen. Stat. § 20-401.



Tennessee ‘(2) With respect to an ADS-operated vehicle, as defined by § 55-30-102, the requirements of subsection (a) are satisfied if the motor vehicle’s owner, or a person on behalf of the motor vehicle’s owner, promptly contacts a law enforcement officer or agency to report the accident and the ADS-operated vehicle remains on the scene of the accident as otherwise required by law.’ Tenn. Code Ann. § 55-10-105.



Texas ‘In the event of an accident involving an automated motor vehicle, the automated motor vehicle or any human operator of the automated motor vehicle shall comply with Chapter 550.’ Texas Transportation Code § 545.455.



Utah ‘In the event of a crash involving a vehicle with the ADS engaged (a) the ADS-equipped vehicle shall remain on the scene of the crash when required to do so under Section 41-6a-401, consistent with the vehicle’s ability to achieve a minimal risk condition as described in Section 41-26-103; and (b) the owner of the ADS-equipped vehicle, or a person on behalf of the vehicle owner, shall report any crashes or collisions consistent with Chapter 6a, Part 4, Accident Responsibilities. (2) If the owner or person on behalf of the owner is not on board the vehicle at the time of the crash, the owner shall ensure that the [required] information is immediately communicated or made available to the persons involved or to a peace officer…’ Utah Code Ann. § 12 41-26-105.

6. State Legislation Requiring Insurance

Alabama ‘The automated commercial vehicle is covered by motor vehicle liability coverage in an amount not less than two million dollars ($2,000,000).’ Ala. Act No. 2019-496, § 3.



Georgia ‘A person may operate a fully autonomous vehicle with the automated driving system engaged without a human driver being present in the vehicle, provided that such vehicle… is covered by motor vehicle liability coverage equivalent to 250 percent of that which is required under (i) Indemnity and liability insurance equivalent to the limits specified in Code Section 40-1-166; or (ii) Self-insurance pursuant to Code Section 33-34-5.1 equivalent to, at a minimum, the limits specified in Code Section 40-1-166…’ GA Code Ann. § 40-8-11.



Iowa ‘Before a system-equipped vehicle is allowed to operate on the public highways of this state, the owner shall obtain financial liability coverage for the vehicle. A system-equipped vehicle shall not operate on the highways of this state unless financial liability coverage is in effect for the vehicle and unless proof of financial liability coverage is carried in the vehicle pursuant to section 321.20B.’ Iowa Code § 325.516.



Louisiana ‘[A]n autonomous commercial motor vehicle may operate in this state without a conventional driver physically present in the vehicle if the autonomous commercial motor vehicle meets all of the following criteria… [i]s covered by motor vehicle liability coverage in an amount not less than two million dollars.’ LSA-R.S. § 400.3.



Nevada ‘Before a person begins testing an autonomous vehicle on a highway within this State, the person must 1. Submit to the Department proof of insurance or self-insurance acceptable to the Department in the amount of $5,000,000; or 2. Make a cash deposit or post and maintain a surety bond or other acceptable form of security with the Department in the amount of $5,000,000.’ NRS § 482A.060.



New Hampshire ‘Before an ADS-equipped vehicle may operate on public roads in this state, an owner of such a vehicle shall submit proof of financial responsibility satisfactory to the department that the ADS-equipped vehicle is covered by insurance or proof of self-insurance that satisfies the requirements of RSA 264.’ NH RSA § 242:1.



North Dakota ‘An autonomous vehicle must be capable of operating in compliance with… insurance, and all other applicable requirements under this title.’ N.D.C.C. § 39-01-01.2.



Tennessee ‘An ADS-operated vehicle may drive or operate on streets and highways in this state with the ADS engaged without a human driver physically present in the vehicle if the vehicle meets the following conditions … (4) (A)(i) The vehicle is covered by primary automobile liability insurance in at least five million dollars ($5,000,000) per incident for death, bodily injury, and property damage …’ Tenn. Code Ann. § 55-30-103.


Texas ‘An automated motor vehicle may not operate on a highway in this state with the automated driving system engaged unless the vehicle is… covered by motor vehicle liability coverage or self-insurance in an amount equal to the amount of coverage that is required under the laws of this state.’ Texas Transportation Code § 545.452.



Vermont ‘An automated vehicle tester shall not test an automated vehicle on a public highway unless… the automated vehicle tester… submits to the Commissioner, in a manner and form directed by the Commissioner, proof of liability insurance, self-insurance, or a surety bond of at least five million dollars for damages by reason of bodily injury, death, or property damage caused by an automated vehicle while engaged in automated vehicle testing.’ 23 V.S.A. § 4203.



Washington ‘No entity may test an autonomous motor vehicle on any public roadway under the department’s autonomous vehicle self-certification testing pilot program unless (a) The entity holds an umbrella liability insurance policy that covers the entity in an amount not less than five million dollars per occurrence for damages by reason of bodily injury or death or property damage, caused by the operation of an autonomous motor vehicle for which information is provided under the autonomous vehicle self-certification testing pilot program…’ R.C.W. 46.30.

7. State Legislation Requiring Minimal Risk Condition

Alabama ‘The automated commercial vehicle can achieve a minimal risk condition if a failure occurs rendering the vehicle unable to perform the dynamic driving task relevant to its intended operational design domain or if the vehicle exits its operational design domain.’ Ala. Act No. 2019-496, § 2.



Iowa ‘The vehicle is capable of achieving a minimal risk condition if a malfunction of the automated driving system occurs that renders the system unable to perform the entire dynamic driving task within the system’s intended operational design domain, if any.’ Iowa Code § 325.515.



North Dakota ‘The autonomous vehicle is capable of achieving a minimal risk condition if a system failure occurs that renders the automated driving system unable to perform the entire dynamic driving task relevant to the vehicle’s intended operational design domain.’ N.D.C.C. § 39-01-01.2.



Nevada ‘If the autonomous vehicle is a fully autonomous vehicle, the fully autonomous vehicle is capable of achieving a minimal risk condition if a failure of the automated driving system occurs which renders the automated driving system unable to perform the dynamic driving task relevant to its intended operational design domain.’ NRS § 482A.080.



Utah ‘[W]hen operated by an ADS, if a system failure occurs that renders the ADS unable to perform the entire dynamic driving task relevant to the intended operational design domain of the ADS, the ADS will achieve a minimal risk condition or make a request to intervene…’ Utah Code Ann. § 10 41-26-103.


8. State Legislation Granting Special Privileges

Arkansas ‘Vehicles equipped with driver-assistive truck platooning systems, may follow other vehicles closer than allowed under subsection (a) of this section and subdivision (b)(1) of this section.’ Ark. Code Ann. § 27-51-305.



Florida ‘This section does not prohibit the use of an electronic display used in conjunction with a vehicle navigation system; an electronic display used by an operator of an autonomous vehicle as defined in s. 316.003(3); or an electronic display used by an operator of a vehicle equipped and operating with driver-assistive truck platooning technology, as defined in s. 316.003.’ § 316.1975, Fla. Stat.



Georgia ‘A fully autonomous vehicle with the automated driving system engaged or the operator of a fully autonomous vehicle with the automated driving system engaged [are exempted from the driver’s license requirement].’ GA Code Ann. § 40-5-21.



Nevada ‘The original manufacturer or developer of an automated driving system that has been modified by an unauthorized third party is not liable for damages to any person injured due to a defect caused by the modification of the automated driving system by the third party unless the defect that caused the injury was present in the automated driving system as originally manufactured or developed.’ NRS § 482A.090.



North Carolina ‘It is unlawful for any parent or legal guardian of a person less than 12 years of age to knowingly permit that person to occupy a fully autonomous vehicle in

Geistfeld, Regulation of Artificial Intelligence in the US

Annex I: State Regulations of Autonomous Vehicles

169

motion or which has the engine running unless the person is under the supervision of a person 18 years of age or older.’ N.C. Gen. Stat. § 20-401.



North Dakota ‘An on-demand autonomous vehicle network may connect passengers to autonomous vehicles without human drivers in compliance with subdivision a of subsection 3 of section 39-01-01.2 exclusively, or subdivision b of subsection 3 of section 39-01-01.2 as part of a digital network that also connects passengers to human drivers who provide transportation services, consistent with applicable law.’ N.D.C.C. § 8-12-02.



Pennsylvania ‘Exception – Nonlead vehicles in a platoon [group of close-following autonomous vehicles]] shall not be subject to section 3310 (relating to following too closely).’ Pa. C.S. § 3317.



Tennessee ‘(g) (5)(B) If the vehicle is operated by an ADS and: (a) If no parent or legal guardian is present at the time of the violation, the human person accompanying the child is solely responsible for compliance with this subsection (g); (b) If no parent or guardian is present at the time of the violation and more than one (1) human person accompanies the child, each person is jointly responsible for compliance with this subsection (g); or (c) If no human person accompanies the child, the parent or legal guardian of the child is responsible for compliance with this subsection (g) …’ Tenn. Code Ann. § 55-9-602.

'(1) Except as otherwise provided in subdivision (2), the operator of a passenger motor vehicle under this part shall not be fined for the failure of any passenger over sixteen (16) years of age to wear a safety belt; and (2) For purposes of an ADS-operated vehicle and when the ADS is engaged, neither the operator nor the owner shall be fined for the failure of any passenger, regardless of age, to wear a safety belt.' Tenn. Code Ann. § 55-9-606.



Texas ‘A political subdivision of this state or a state agency may not impose a franchise or other regulation related to the operation of an automated motor vehicle or automated driving system.’ Texas Transportation Code § 545.451.



Utah:m ‘No local agency, political subdivision, or other entity may prohibit the operation of a vehicle equipped with a driving automation system, an ADS, or an on-demand autonomous vehicle network, or otherwise enact or keep in force a rule or ordinance that would impose a tax, fee, performance standard, or other requirement specific to the operation of a vehicle equipped with a driving automation system, an ADS, or an on-demand autonomous vehicle network in addition to the requirements of this title.’ Utah Code Ann. § 11 41-26-108.

9. State Legislation Requiring Further Research

Connecticut: ‘There is established a task force to study fully autonomous vehicles. Such study shall include, but need not be limited to, (1) an evaluation of the standards established by the National Highway Traffic Safety Administration regarding state responsibilities for regulating fully autonomous vehicles, (2) an evaluation of laws, legislation and regulations proposed or enacted by other states to regulate fully autonomous vehicles, (3) recommendations on how the state should regulate fully autonomous vehicles through legislation and regulation, and (4) an evaluation of the pilot program …’ Conn. Gen. Stat. 17-69 § 18. Geistfeld, Regulation of Artificial Intelligence in the US


Maine: ‘That the Commission on Autonomous Vehicles, referred to in this resolve as ‘the commission,’ is established to coordinate efforts among state agencies and knowledgeable stakeholders to inform the development of a process to allow an autonomous vehicle tester to demonstrate and deploy for testing purposes an automated driving system on a public way.’ Maine L.D. 1724 § 1 (128th Leg. 2018).



New Hampshire: ‘The department of safety, division of motor vehicles, shall establish a pilot program to test automated vehicle technologies on public roads within the state. The pilot program shall commence 90 days following the effective date of this section.’ NH RSA § 242:1.



New York: ‘The commissioner of motor vehicles shall, in consultation with the superintendent of state police, submit a report to the governor, the temporary president of the senate, the speaker of the assembly, and the chairs of the senate and assembly transportation committees on the demonstrations and tests authorized by section one of this act. Such report shall include, but not be limited to, a description of the parameters and purpose of such demonstrations and tests, the location or locations where demonstrations and tests were conducted, the demonstrations’ and tests’ impacts on safety, traffic control, traffic enforcement, emergency services, and such other areas as may be identified by such commissioner.’ N.Y. U.C.C. Law § 55.



North Carolina: ‘There is hereby created a Fully Autonomous Vehicle Committee within the Department of Transportation.’ N.C. Gen. Stat. § 20-401.



North Dakota: ‘The department of transportation, in collaboration and consultation with the autonomous vehicle technology industry, shall study the use of vehicles equipped with automated driving systems on the highways in this state and

Geistfeld, Regulation of Artificial Intelligence in the US

172

C. Annexes

the data or information stored or gathered by the use of those vehicles. The study must include a review of current laws dealing with licensing, registration, insurance, data ownership and use, and inspection and how they should apply to vehicles equipped with automated driving systems. The department of transportation shall report its findings and recommendations, together with any legislation required to implement the recommendations, to the sixty-sixth legislative assembly.’ N.D. House Bill 1202 (2017).



Oregon: ‘The Task Force on Autonomous Vehicles is established… [t]he task force shall develop recommendations for legislation to be introduced during the next odd-numbered year regular session of the Legislative Assembly regarding the deployment of autonomous vehicles on highways.’ Or. House Bill 4063 (2018).



Pennsylvania: ‘The Highly Automated Vehicle Advisory Committee is established within the department… and may undertake any of the following… [e]ngaging in continued research and evaluation of connected and automated systems technology necessary to ensure safe testing, deployment and continued innovation in this Commonwealth.’ Pa. C.S. § 8503.

Annex II: State Regulation that Indirectly References AI

1. Appropriating Money/Expanding Research

Beyond the specific legislation tailored to the use of AI, many states have consistently appropriated money or used tax incentives to increase the research and innovation surrounding AI use within the state. Most recently, the Utah legislature enacted the Emerging Technology Talent Initiative,1 which focuses on research at universities within Utah in 'Deep technology,' including 'artificial intelligence.'2 The legislation establishes a procedure for how school programs can apply for the funding,3 and appropriates $5 million for the next fiscal year for that purpose.4 Similar educational programs have been initiated in other jurisdictions across the country.5 In addition to appropriating money for research within educational institutions, a significant number of states have also sought to make education in AI part of the core standards state and local public schools must meet.6

In addition, state legislation has tied funding considerations to proposals involving AI both in its effects on the labor pool7 as well as its ability to improve medical testing.8 Local governments have also used tax incentives and development zones to help spur the growth of AI technology companies in their respective jurisdictions. For example, in order to attract companies to operate within its jurisdiction, the District of Columbia in 2016 created a property tax rebate for technology companies that purchased or leased nonresidential buildings within the District to use as the business' office.9 This tax incentive builds on the District of Columbia's previous tax credit system that enabled 'Qualified High Technology Companies' such as those that utilize AI10 to decrease their tax burden based on the cost of relocating the office11 and the number of employees hired.12 A similar tax credit was instituted in New York for the purposes of encouraging technology companies to locate within the jurisdiction.13 Other tax incentives include giving tax credits to film industries that utilize AI in filming and producing14 as well as exempting sales tax on products purchased for AI research.15

The final example of state efforts to expand innovation in the field of AI is found in the attempts of state governments to expand their own internal use of AI. For example, New York permits agencies to enter into contracts that will enable them to gain access to the research the United States Air Force has done in AI.16 Massachusetts appropriated money to utilize AI in its Department of Revenue.17 Finally, Texas has codified its intention that '[e]ach state agency and local government shall, in the administration of the agency or local government, consider using next generation technologies, including cryptocurrency, blockchain technology, and artificial intelligence.'18

1 2020 Utah Legis. Serv. 361 (West) (codified at Utah Code Ann. § 53B-26-301 (West 2020)).
2 Utah Code Ann. § 53B-26-301(2)(b)(ii), (3).
3 Id. § 53B-26-302.
4 2020 Utah Legis. Serv. 361.
5 E.g., Cal. Educ. Code § 75008(c) (West 2018) ('The Research and Development Unit shall focus on using technology, data science, behavioral science, machine learning, and artificial intelligence to build out student supports.'); id. §§ 92985.5-86 (creating the California Brain Research through Advancing Innovative Neurotechnologies Initiative through the University of California, because, among other things, it 'has the potential to be a major driver of new industries and jobs in … artificial intelligence'); Florida Technology Development Act, 2002 Fla. Sess. Law Serv. 2002-265 § 1 (West) (creating the technology centers at universities to expand research in 'advanced, and innovative technologies' in light of the legislature's view that 'the development of high-technology industries … including artificial intelligence/human-centered computing … is critical to the long-term economic vitality' of the state); 1999 La. Sess. Law Serv. 535 (West) (appropriating $86,000 for the 'Artificial Intelligence Project' at the Louisiana State Law Institute, which is part of Louisiana State University); 1996 Ohio Legis. Serv. Ann. 179 § 92-.10 (West) (appropriating $448,187 to support the 'Centers for Artificial Intelligence' at 'Wright-Patterson Air Force Base, … Wright State University, … Sinclair Community College, … [and] Miami Valley Research Institute'); 1989 Tex. Sess. Law Serv. 414 § 1 (West) (codified at Tex. Gov't Code Ann. § 483.002 (West 1989)) (creating the Texas Manufacturing Institute which is responsible for 'encourag[ing] the development … [in] the areas of … artificial intelligence applications').
6 E.g., 19 Tex. Admin. Code § 126.52(c)(10)(A) (2019) (creating a cybersecurity capstone course for high school students that expects students to 'describe the integration of artificial intelligence and machine learning in cybersecurity'); Ala. Admin. Code r. 290-3-3.09(2)(a)(1)(ii), (b)(4)(xxiv) (2019) (requiring computer science teachers to demonstrate knowledge on the effects of artificial intelligence on society as well as demonstrate a capability to teach students to 'implement an artificial intelligence algorithm'); W. Va. Code R. § 126-44N-5 (2019) (outlining the curriculum of a proposed high school course called 'Computer Science in the Modern World' that requires understanding the 'major applications of artificial intelligence'); id. § 126-44M-5 (outlining a curriculum for professionals in information technology which includes a specialized course on 'Artificial Intelligence'); Ariz. Admin. Code § R7-2-615(T)(2)(c)(ii) (2019) (requiring schools to have 'six semester hours in computer science electives which may include … artificial intelligence' in order to be granted a computer science endorsement from the state); Okla. Admin. Code § 210:15-3-209(b)(4)(A)(i)-(ii) (2019) (setting computer science standards to be completed before the end of high school to include knowledge of 'artificial intelligence algorithms' and the ability to 'develop [such] an … algorithm'); Nev. Admin. Code § 389.505(10)(d) (2018) (requiring students be competent by the end of high school to understand the 'potential impacts of artificial intelligence on society'); 2018 Mich. Legis. Serv. 227 § 297a, h (West) (appropriating $36.485 million to 'creat[e] competencies and earn[] credentials in high-demand fields' which are defined as including 'machine learning and artificial intelligence'); 19 Tex. Admin. Code § 130.409(b)(3) (2017) (creating a 'Robotics II' course which includes exploration of 'artificial intelligence and programming in the robotic and automation industry'); Mont. Admin. R. 10.58.528(1)(c)(viii) (2014) (requiring that students in computer science 'demonstrate[e] knowledge of and the ability to construct artificial intelligence and robotic applications'); 19 Tex. Admin. Code §§ 126.38(c)(6)(Q), .40(c)(2)(g), .48(c)(4)(J) (2011) (creating high school courses in 'Game Programming and Design,' 'Robotics Programming and Design,' and 'Web Game Development' all of which require students to develop skills in 'artificial intelligence' and include its use in the games or robots created during the course).
7 Cal. Gov't Code § 53083.1(d)(6) (West 2020) (requiring reporting of 'net job loss or replacement due to the use of automation, artificial intelligence, or other technologies' after a 'warehouse distribution center' has been given 'an economic development subsidy').
8 Md. Code Ann., Health–Gen. § 13-4004(c)(3)(iii) (West 2019) (requiring applicants to the 'Professional and Volunteer Firefighter Innovative Cancer Screening Technologies Program' to disclose whether the proposed testing scheme 'employs innovative or novel technologies, such as … artificial intelligence').
9 Creative and Open Space Modernization Amendment Act of 2016, 2016 D.C. Sess. L. Serv. 21-160 (West) (codified at D.C. Code Ann. § 47-4665 (West 2016)).
10 New E-Conomy Transformation Act of 2000, 2000 D.C. Sess. L. Serv. 13-256 (West) (codified at D.C. Code Ann. § 47-1817.01 (West 2001)).
11 D.C. Code Ann. § 47-1817.02.
12 Id. § 47-1817.03; see also id. § 47-1817.04-.05 (tax credits for disadvantaged employees).
13 New York State Emerging Industry Jobs Act, 1998 N.Y. Sess. Laws 56 § 31-32 (McKinney) (codified at N.Y. Pub. Auth. Law § 3102-e (McKinney 1998)) (granting 'emerging technology companies,' such as those that utilize AI, eligibility for a tax credit based on the number of employees hired).
14 Ga. Comp. R. & Regs. § 159-1-1.02 (2013) (defining eligible companies as those that utilize 'Game Engine[s]' which is defined as a 'software system … used to create … an interactive game … include[ing] … artificial intelligence').
15 23 Va. Admin. Code § 10-210-765(D) (2005) (granting sales tax exemptions for 'research and development into advanced automation (including artificial intelligence and computer vision)').
16 1996 N.Y. Sess. Laws 474 § 188 (McKinney) ('Notwithstanding any inconsistent provision of law, any state agency, department, board, bureau, division, commission, committee, office, or other entity of the state is authorized to enter individually or collectively into agreements or contracts with the New York state science and technology foundation or the New York state technology enterprise corporation in order to access the resources of the United States' air force's Rome laboratory in the areas, including but not limited to, telecommunications, communications networks, information processing and data bases, software, artificial intelligence, electromagnetics, signal processing, photonics, and electronic reliability.').
17 1996 Mass. Legis. Serv. 294 § 2 (West) (appropriating $220,000 'for the application of artificial intelligence technology of the department of revenue').
18 Tex. Gov't Code Ann. § 2054.601 (West 2019).

2. Investigative Commissions

In addition to encouraging research and appropriating money for specific projects, a handful of state legislatures have initiated investigative committees to better understand the effects AI will have on the future of their state's economy. The first state to do so was Vermont, which created an Artificial Intelligence Task Force in 2018.19 The final report of the committee was completed in January of 2020, which outlined several recommendations in its findings.20 The report noted the potential for economic growth and new opportunities for innovation,21 but also the potential for job disruptions and invasions of civil liberties.22 Among the six recommendations articulated,23 the most salient was the task force's first opinion that the state should not 'promulgat[e] … new, specific State regulations of A.I. at this time.'24 Their conclusion was based on the view that a 'permanent commission' with a 'Code of Ethics' would be a better approach to crafting regulations that fit the needs of the community.25 The commission, however, did conclude that once those two recommendations were fully implemented, future legislative action would be necessary since innovation is bound to create 'applications of A.I. that will require regulation.'26

A similar task force was created by the City of New York in 2018 to analyze the use of 'automated decision systems' by city agencies.27 The task force released its report in November of 2019, and also outlined its goals for setting up a regulatory structure to handle the unique challenges of AI within the city.28 The report recommended three major goals for the city in moving forward with AI, which included creating a centralized regulatory structure to identify, prioritize, and encourage AI use,29 expanding public education on the use of AI,30 and implementing an internal framework to encourage disclosure about AI use and be responsive to any citizen concerns.31 The report also included a lengthy discussion about the appropriate definition of an Automated Decision System, noting that the law's definition could include 'computerized tools … simply because they are computerized and guide decision-making.'32 Because the members 'did not reach a consensus on another definition,' the report sought instead to outline 'characteristics that make a system or tool qualify' as well as how they should be 'prioritiz[ed].'33 The task force also left open the door for further investigation in other areas that were not covered by the law's initial mandate.34

Two other states have set up committees that are currently engaging in discussions about artificial intelligence. The Alabama legislature created its commission in 2019.35 The commission is mandated to 'advise the Governor and the Legislature on all aspects of the growth of artificial intelligence' in specific sectors and in particular 'their effect on society and the quality of life.'36 It also tasked the members with 'consider[ing] whether the Legislature should establish a permanent commission on artificial intelligence.'37 Although the final report was to be made 'no later than May 1, 2020,'38 no such report has been made to date, likely as a result of the COVID-19 pandemic. The State of New York also created a commission on 'artificial intelligence, robotics and automation' in 2019.39 This commission was tasked with evaluating, among other things, the 'current law,'40 other 'state policies [and] … regulatory structure[s],'41 the rules of 'criminal and civil liability,'42 and the impact on labor,43 privacy,44 and national security.45 The initial act required a report by the end of this year,46 but the state legislature renewed the commission for another year.47

Washington created a Future of Work Task Force, which was primarily tasked with analyzing how the labor pool is likely to change in the future, but included considerations of the effects of AI on jobs.48 The task force released its report in December of 2019.49 The report outlined the concern that AI will create disruption,50 but also noted that the increased use in technology can 'free[] employees to focus on other tasks requiring more creative and critical thinking skills.'51 The task force also noted that this development opens the door for increased education and training programs that can 'creat[e] new, higher-paying jobs' rather than displace them.52 As a result of its research, the task force recommended a 'worker-impact audit' within the Washington State government so that policy makers could serve as a microcosm for measuring the effects AI and other technology will have across the economy.53

This work-force focused commission has been modelled by both New Jersey,54 and California.55 New Jersey's task force is responsible for 'determin[ing] which technologies will impact work,'56 'explor[ing] the ways … [such] technologies might improve workplace conditions, create different jobs,'57 and 'produce … [a] policy roadmap … to prepare for the future of work.'58 The executive order creating the task force did not have an expiration date, and the group has yet to release any reports,59 although they have commissioned specific research for review.60 California's commission has a very similar focus to that of Washington and New Jersey.61 The governor of California required that the commission report its progress by May 1, 2020,62 but their efforts have been put on hold in light of the COVID-19 pandemic.63

In 2018, the Governor of Michigan made alterations to the Governor's Talent Investment Board by renaming it the Michigan Future Talent Council, and providing new responsibilities for its members.64 One of these tasks included '[m]onitor[ing] labor market changes … with attention paid to emerging fields such as cybersecurity, artificial intelligence and machine learning, automation, and mobility.'65 However, before releasing any new information in the interim,66 the Governor, as a result of changes in administration, passed a new executive order again renaming the committee, and removing any reference to AI or automation from its responsibilities.67

California's Little Hoover Commission, an independent state oversight committee, engaged in a similar investigation into the use of AI and issued a report in 2018 that mirrored the final disposition of the other investigative committees.68 The committee noted the potential power of AI to shape the future of privacy,69 the workforce,70 and in particular the type of governance that the state of California could provide.71 After noting that there has been no concerted effort in California to investigate the potential risks and benefits of AI, the committee recommended that the state create a specific cabinet level position or state agency to help the governor and state legislature formulate effective regulations,72 encourage the use of AI in governance to help make public service more responsive to public needs,73 and engage with the public in future AI efforts.74

19 2018 Vt. Legis. Serv. 137 (West).
20 VT Artificial Intelligence Task Force, Final Report (2020).
21 Id. at 9-11.
22 Id. at 12-15.
23 The other recommendations included creating a permanent commission, id. at 16, creating a code of ethics, id. at 17, establishing monetary incentives for innovation, id. at 19-20, providing education programs, id. at 21, and ensuring training was available to displaced workers, id. at 22.
24 Id. at 16. The task force also acknowledged that 'applications of A.I. are currently being regulated.' Id. (referencing current regulations in the realm of autonomous vehicles and facial recognition software); see infra Part V.
25 VT Artificial Intelligence Task Force, supra note 110, at 16.
26 Id.
27 New York City Automated Decision Task Force, N.Y.C. Local L. 49 (2018).
28 N.Y.C. Automated Decision Sys. Task Force, Nov. 2019 Report (2019).
29 Id. at 18-21.
30 Id. at 22-23.
31 Id. at 23-25.
32 Id. at 26.
33 Id.; see also infra Part II.
34 Automated Decision Sys. Task Force, supra note 118, at 30 (expressing a desire for future research on storing requirements, defining which systems are most relevant, and engaging with other stakeholders in the private sector and at the state and federal levels).
35 Establishing the Alabama Commission on Artificial Intelligence and Associated Technologies, 2019 Ala. Legis. Serv. 269 (West).
36 Id. § (a).
37 Id.
38 Id. § (g).
39 New York State Artificial Intelligence, Robotics, and Automation Commission, 2019 N.Y. Sess. Laws 110 § 1 (McKinney).
40 Id. § 1(a).
41 Id. § 1(b).
42 Id. § 1(c).
43 Id. § 1(d).
44 Id. § 1(e).
45 Id. § 1(f).
46 Id. §§ 5, 6.
47 2020 N.Y. Sess. Laws 58 Subpart B, Item III § 1.
48 2018 Wash. Legis. Serv. 294 (West) (codified at Wash. Rev. Code Ann. § 28C.25.010 (West 2018)).
49 Future of Work Task Force, 2019 Policy Report: Exploring and Developing Policies for Shared Prosperity Among Washington's Businesses, Workers, and Communities (2019), <https://www.wtb.wa.gov/wp-content/uploads/2019/12/Future-of-Work-2019-Final-Report.pdf>.
50 Id. at 24-25.
51 Id. at 42.
52 Id. at 46.
53 Id. at 49-50.
54 Establishment of the Future of Work Task Force, 50 N.J. Reg. 2187(b) (Nov. 5, 2018).
55 Establishment of a Future of Work Commission, Cal. Exec. Order N-17-19 (2019). There also appears to be a Future of Work task force instituted in Indiana as well, but the webpages dedicated to the task force on the Indiana state government's website are no longer available. Little Hoover Comm'n, Artificial Intelligence: A Roadmap for California 11 (2018).
56 50 N.J. Reg. 2187(b) § 5(a).
57 50 N.J. Reg. 2187(b) § 5(c).
58 50 N.J. Reg. 2187(b) § 5(e).
59 State of N.J., (last visited June 17, 2020).
60 Task Force Resources, State of N.J., (last visited June 17, 2020).
61 Cal. Exec. Order N-17-19 § 3(a)-(j) (mandating the commission, inter alia, evaluate 'emerging technologies,' the 'costs and benefits on workers, employers, the public, and the state,' as well as 'strategies for engaging employers in the creation of good, high-wage jobs of the future').

62 Id. § 4.
63 Future of Work Commission Meetings, State of Cal., (last visited June 17, 2020).
64 2018 Mich. Legis. Serv. Exec. Ord. 2018-13 (West).
65 Id. § IV(A)(6).
66 Michigan Future Talent Council (MFTC), State of Mich., (last visited June 17, 2020).
67 2020 Mich. Legis. Serv. Exec. Ord. 2020-107.
68 Little Hoover Comm'n, supra note 146.
69 Id. at 29-31.
70 Id. at 32-35.
71 Id. at 26-27.
72 Id. at 15-16.
73 Id. at 16.
74 Id. at 17-19.


3. Outlining Administrative Goals

Beyond the creation of specific committees that are focused on particular aspects of how AI is likely to impact states, one state legislature has expressly recognized the potential of AI to create externalities in other areas of governance. In light of the concerns about the impact of inequality, the state of Washington created the Office of Equity in order to 'promot[e] access to equitable opportunities and resources that reduce disparities.'75 Part of the concerns the legislature was attempting to address included the 'social ramifications that emerging technology, such as artificial intelligence and facial recognition technology, may have on historically and currently marginalized communities.'76 Similar to other government efforts, mainly observed at the federal level,77 this particular state effort is concerned about the use of AI by the government.

75 2020 Wash. Legis. Serv. 332 § 3(1) (West).
76 Id. § 1.
77 Supra Part I.2.b.

4. Other State Efforts

In line with the goals of increasing the use of AI technology in both the economy and governance of states, Michigan is utilizing AI and machine learning to help curb the opioid epidemic.78 In addition, cities such as Pittsburgh, Pennsylvania, through the 'P4 Pittsburgh' project, are attempting to include AI technology to help spur urban growth, better design the city to respond to emerging needs, and understand how society will adjust to technological change and innovation.79

78 Little Hoover Comm'n, supra note 146, at 11.
79 Id. at 12.

Annex III: Federal Regulation that Indirectly References AI

1. Appropriating Money/Expanding Research

The earliest example of the federal government's explicit recognition of AI was through the Omnibus Consolidated and Emergency Supplemental Appropriations Act, 1999. This statute gives the Chief Scientist within the Counterdrug Technology Assessment Center, which is part of the Office of National Drug Control Policy, the power to evaluate the 'technological needs of Federal, State, local, and tribal drug supply reduction agencies, including … data fusion, advanced computer systems, and artificial intelligence.' The federal government reiterated its interest in AI and its effect on technology in the 21st Century Nanotechnology Research and Development Act. This legislation created the National Nanotechnology Program and requires, in relevant part, the Program to consider 'ethical, legal, environmental, and other appropriate societal concerns, including the potential use of nanotechnology … in developing artificial intelligence which exceeds human capacity.' This Act appropriated money for the creation of the Program, while also laying the groundwork for further investment in the area.

In light of executive pronouncements about the importance of AI for U. S. domestic and foreign policy, Congress allowed members of the military to engage in 'arrangements to facilitate expedited access to university technical expertise … in support of Department of Defense missions … [specified] in subsection (e) [among them being artificial intelligence].' The following year, Congress created the Joint Artificial Intelligence Center (JAIC) to expand the research efforts in AI by mandating the Secretary of Defense to 'designate a senior official of the Department with principal responsibility for the coordination of activities relating to … artificial intelligence and machine learning.' JAIC was tasked with the responsibility of creating a 'strategic plan' geared towards development and research in AI. Most notably, Congress sought to have the Secretary 'delineate a definition of the term 'artificial intelligence' for use within the Department.' The law also created a 'National Security Commission on Artificial Intelligence' drawn from appointments in the Department of Defense, Department of Commerce, as well as Congress. There were other passing mentions in the law of the Department of Defense's need to broaden the use of AI in its decision-making processes. This law has led to an outgrowth of multiple Department efforts to research AI and increase the transparency over its use and inner-workings.

Congress has further addressed AI in the recently passed National Defense Authorization Act for Fiscal Year 2020. It expanded the length of time of the Joint Artificial Intelligence Center, as well as the National Security Commission on Artificial Intelligence. The other additions in the AI field were related to expanded research as well as investment in the use of AI in Department of Defense policy. As a result of these laws, AI has become an essential pillar of military efforts.


2. Outlining Administrative Goals

Although the main AI policy goals as articulated by the Obama administration did not come until late into his second term, there were statements inserted into the National Defense Authorization Act for Fiscal Year 2013 that reflected Congress' initial goals of outlining principles to guide future legislation and rulemaking in the AI realm. Congress articulated its views that 'technological advances in … artificial intelligence could reduce the number of personnel required to support certain training exercises' and that the 'Secretary of Defense should … increase the use of … artificial intelligence for [these] training exercises' to help cut costs of Department operations. A similar policy statement was inserted into the FAA Reauthorization Act of 2018. Congress specifically called on the president to 'periodically review the use or proposed use of artificial intelligence technologies within the aviation system' and evaluate the need to 'plan' for its introduction into the economy. This review should include an 'assess[ment of] the impact of automation, digitalization, and artificial intelligence on the FAA workforce.'

Beyond specifically legislated goals, the past two presidential administrations have opined on their stated goals for the use of AI in the U. S. In 2016, President Obama first published the National Artificial Intelligence Research and Development Strategic Plan, which outlined its specific goals for encouraging research into AI, and a desire to protect the rights of individuals affected by its use. The Trump Administration continued on this path with the Executive Order on Maintaining American Leadership in Artificial Intelligence, issued in February of 2019. This order reiterates the goals outlined by the Obama administration, but also aims to improve the business acumen and ability for the U. S. to remain competitive against peer adversaries, both economically and militarily. As a result of the White House having outlined its overarching goals for the use of AI, a significant number of agencies have begun to issue their own guidelines to assist future decision-making and research over AI, particularly to help expand its use in many different industries. This focus on expanding the use of AI is evidenced by the proposals made by the Trump Administration to expand grant programs through the National Science Foundation and to implement this technology in all governmental agencies. There are recent examples of federal departments starting to implement AI technology into their daily operations, in line with these overarching goals.




3. Regulatory Interest

Not only has the administration outlined specific goals for the rest of the government to follow when crafting regulations, specific agencies have also started to adopt regulations that anticipate the effects of AI. On the heels of President Trump's Executive Order, the National Institute of Standards and Technology, under the Department of Commerce, created a report outlining guidance for how regulatory agencies create standards for the research, use, and implementation of AI technology in the government and in the U. S. economy. For example, the U. S. Patent Office requested public comments on the potential patentability of AI systems. The Commerce Department similarly has solicited comments regarding 'export controls related to emerging technologies such as AI' over the national security concerns that many have raised about the increased use of AI. This solicitation reflects similar concerns raised in a recent request by the Department of State for comments on the controls necessary for trading computer systems that evaluate and store information. Finally, the Department of Health and Human Services has requested input from the public on how it can utilize AI to improve the mechanisms that match patient data with medical care providers.

Other regulatory efforts do not directly regulate AI, but instead refer to ways in which this emerging technology might be relevant moving forward. The United States Copyright Office, while fulfilling its statutory responsibility of modernizing the music licensing process, noted that the non-profit entity authorized to implement the new licensing scheme will 'explore developments in algorithms, machine learning, and artificial intelligence' so that such innovations could be leveraged in the future. The Department of Treasury, in explaining its definition of 'sensitive personal data' used by the Committee on Foreign Investment in the United States when it evaluates foreign investment, acknowledged the industry concern that excessive regulation might overburden technology such as the investment in AI. The Department of Health and Human Services has indicated its ongoing interest in implementing AI to help effectively provide Medicare health care coverage to the public. In addition, the Department of Homeland Security specifically acknowledged that the lack of a specialized field of work in an individual's home country, including in AI, may be a reason to find that a non-immigrant worker and their family would suffer substantial harm if they were forced to return to their home country. Finally, the Consumer Financial Protection Bureau indicated that the decision to include procedures for businesses to request modification to an existing No-Action Letter was partially motivated by the growth of financial products that depend on machine learning and AI.

There are also several proposed rules in the queue that specifically contemplate the use of AI in a handful of realms. The Department of Health and Human Services is contemplating whether an AI program called ContaCT is a sufficiently new technology to classify for an 'add-on payment' under Medicare. The Department of Education is currently proposing updates to its regulations that would provide specific funding to online institutions that utilize AI to actively engage with students academically, rather than passive platforms where students simply log-in. The Patent and Trademark Office has also contemplated improving its processing of patent information by utilizing AI or machine learning.

4. U. S. Trade Agreements  

The final method by which the U. S. government has addressed AI is through its engagement with other foreign countries. In both the United States-Mexico-Canada Agreement (USMCA) and the 2019 United States-Japan Agreement, the U. S. included language that was aimed at encouraging the sharing of data between countries, particularly in a machine-readable format. This falls in line with the goals outlined by the Trump Administration promoting the growth of information and technology innovation in AI, because the technology requires access to large amounts of machine-readable data. In addition, the U. S. has reiterated its goals of protecting the freedom of expression, by including language similar to Section 230 of the Communications Decency Act, which absolves technology companies from liability for the content created by third parties on their platforms, in the USMCA. Although this does not directly implicate AI, it enables the use of the technology to create bots that operate on such platforms.  






Christiane Wendehorst and Yannic Duller

Safety- and Liability-Related Aspects of Software

https://doi.org/10.1515/9783110775402-002

Table of Contents

Guiding Principles for safety and liability with regard to software
Chapter I: The Role of Software in Safety and Liability Law
Principle 1: 'Software' within the meaning of these Principles
Principle 2: Commitment to both safety and liberty in software development
Principle 3: Products with software elements
Principle 4: Add-on software altering the features of other products
Principle 5: Software and hardware as components of digital ecosystems
Principle 6: Standalone software as a product in its own right and equivalence of hardware and software
Chapter II: A Dynamic Concept of Software Safety
Principle 7: Software safety for all target groups
Principle 8: Software safety throughout lifecycle
Principle 9: Software updates
Principle 10: No abuse of safety considerations
Chapter III: Types of Risks and Responses
Principle 11: Safety and liability with regard to individual 'physical risks'
Principle 12: Safety and liability with regard to other than individual physical risks
Principle 13: Intermediated risks
Principle 14: Connectivity
Principle 15: Autonomy, opacity and the role of 'Artificial Intelligence'
Principle 16: Distributed ledger technologies
Principle 17: Relationship between safety and liability
Chapter IV: General Provisions
Principle 18: Producers and their representatives
Principle 19: Other responsible economic operators
Principle 20: Structure of software-relevant regulation
1. Introduction
2. Software and software-enabled products and services
2.1 Definition of 'software'
2.1.1 Relationship between 'software' and 'computer program'
2.1.2 Towards a technologically-neutral definition for the safety and liability context
2.1.2.1 Conceptual emancipation from the IP context
2.1.2.2 Towards a more technologically neutral concept of software
2.2 The lifecycle of software
2.2.1 Linear development lifecycle models
2.2.2 Iterative lifecycle models and open software development
2.2.3 The changing role of software maintenance
2.2.4 A dynamic notion of safety and risk
2.3 Summary of findings
3. Risks associated with software
3.1 Classification of risks
3.1.1 Safety risks and functionality risks (the latter not included in the Study)
3.1.2 Physical, pure economic and social risks
3.1.3 Collective or systemic safety risks
3.1.4 Direct and intermediated safety risks
3.1.5 Characteristic, otherwise typical and atypical safety risks
3.2 Safety risk matrix
3.3 Summary of findings
4. Responses
4.1 Classification of responses
4.1.1 Ex ante responses and ex post responses
4.1.2 Old and New Approach to safety legislation
4.1.3 Positive ('PRRP') vs negative ('blacklisting') approaches
4.1.4 Horizontal and sectoral responses
4.1.5 More or less risk-based (targeted) approaches
4.1.5.1 General meaning of 'risk-based'
4.1.5.2 Risk-based safety: reducing the 'risk of hitting a case with no risk'
4.1.5.3 Risk-based liability
4.1.6 Technologically neutral and technology-specific responses
4.1.6.1 General significance
4.1.6.2 Physical risks
4.1.6.3 Other than physical risks
4.2 Risk-response matrix
4.3 Summary of findings
5. Analysis of the status quo – is the acquis fit for software?
5.1 Product safety
5.1.1 General Product Safety Directive
5.1.1.1 Notion of 'product' and the role of software
5.1.1.2 Safety requirements
5.1.1.3 Post-market surveillance and other duties
5.1.1.4 Addressees of safety-related obligations
5.1.1.5 Overall assessment
5.1.2 Medical Devices Regulation and In-vitro Diagnostics Regulation
5.1.2.1 Notion of 'medical device' and the role of software
5.1.2.2 Safety requirements
5.1.2.3 Post-market surveillance and other duties
5.1.2.4 Addressees of safety-related obligations
5.1.2.5 Overall assessment with regard to software
5.1.3 Radio Equipment Directive
5.1.3.1 Notion of 'radio equipment' and the role of software
5.1.3.2 Essential requirements and delegated Commission acts
5.1.3.3 Post-market surveillance and other duties
5.1.3.4 Addressees of safety-related obligations
5.1.4 Machinery Directive
5.1.4.1 Notion of 'machinery' and scope of application with regard to software
5.1.4.2 Safety requirements
5.1.5 Other safety-relevant legislation
5.2 Market surveillance
5.2.1 Purpose, scope and content
5.2.2 Overall assessment with regard to software
5.3 Product liability
5.3.1 Notion of 'product' and the role of software
5.3.2 Addressees of product liability
5.3.3 Defectiveness at the time of market entry
5.3.4 Burden of proof with regard to damage, defect and causation
5.4 Other relevant frameworks and initiatives on new technologies
5.4.1 New Regulatory Framework for AI
5.4.2 Cybersecurity
5.4.3 Data protection law
5.4.4 Digital Services Act
5.4.5 Developments with regard to cloud computing
5.5 Summary of findings and need for action
5.5.1 Best practices in the acquis and need for adaptation
5.5.1.1 A technologically neutral notion of software
5.5.1.2 Accessory software elements
5.5.1.3 Standalone software as a product
5.5.1.4 Add-on software
5.5.1.5 Risks and types of harm covered
5.5.1.6 Post-market surveillance and product monitoring obligations
5.5.1.7 Safety and security updates
5.5.1.8 Economic operator within the Union and platform operators
5.5.2 The special case of AI
5.5.2.1 Specific risks posed by AI
5.5.2.2 AI liability
5.5.3 The special case of distributed ledger technologies (DLT)
6. Key options for action at EU level
6.1 Options for action with regard to software in general
6.2 Options specifically with regard to trustworthy AI
6.3 Options specifically with regard to AI liability (mainly for physical risks)
7. Key recommendations for action at EU level
Key recommendation I: Introduce a new semi-horizontal and risk-based regime on software safety (accompanied by further steps to modernise the safety-related acquis)
Key recommendation II: Revise the Product Liability Directive (PLD)
Key recommendation III: Introduce new regulatory framework for AI
Key recommendation IV: Introduce new instrument on AI liability
Key recommendation V: Continue digital fitness check of the whole acquis


Guiding Principles for safety and liability with regard to software

The following Guiding Principles summarise the main findings of Sections 1 to 5 of this Study that form the basis of the options for action at EU level discussed in Section 6 and the key recommendations in Section 7. They may serve as recommendations for legislators at EU as well as at national level and, where possible under the framework of the applicable legal regime, as guidance for courts.

Chapter I: The Role of Software in Safety and Liability Law

Principle 1: 'Software' within the meaning of these Principles

In a safety and liability context, 'software' should be understood as comprising digital content and digital services as defined by Directive (EU) 2019/770. This is to reflect the increasing commutability of software in the more traditional sense of a set of instructions and of 'mere data' (e.g. an electronic map). This is also to reflect the functional equivalence of software stored on the user's device and access to digital infrastructures beyond the immediate physical control of the user (e.g. Software-as-a-Service, SaaS).

Principle 2: Commitment to both safety and liberty in software development

(1) Also with regard to software, Europe should maintain its policy that products placed on the market need to be safe, and that safety precautions may be justified even where their overall cost exceeds the monetary dimension of harm prevented by them and any hypothetical compensation. Whether an ALARP or an AFAP approach to risk control is to be preferred depends on the type and gravity of the relevant risks.

(2) Any law on safety and liability with regard to software must take into account the characteristics of software development and distribution, such as that software is developed in a dynamic way, often through the collaboration of many different parties from across the globe, that these parties may be players of very different size and motivation, and that distribution can occur worldwide without significant cost or logistical barriers.

(3) Nothing in these Principles must be construed as suggesting measures that would mean a disproportionate obstacle for software development and distribution, including open software development and access to software supplied from outside the Union. All measures suggested must be read as having appropriate exceptions, e.g., for open and distributed software development, or as being restricted to appropriate cases, e.g. to where the software exceeds a particular level of risk or where the producer exceeds a particular size.

Principle 3: Products with software elements

(1) Where the law currently provides for safety and liability rules for tangible items, those rules need to be adapted so as to fully take into account the characteristics of tangible items 'with software elements'. In doing so, they should seek to achieve, as far as appropriate, consistency with the treatment of 'digital elements' within the meaning of Directive (EU) 2019/771.

(2) Tangible items 'with software elements' include tangible items that incorporate, or are inter-connected with, software in such a way that the absence of that software would prevent the items from performing the functions ascribed to them by their producer or a person within the producer's sphere, or which the user could reasonably expect, taking into account the nature of the items and the description given to them by the producer or a person within the producer's sphere.

(3) The producer that has to ensure safety and may become liable for tangible items needs to ensure safety and assume liability for such digital elements within the meaning of paragraphs (1) and (2) as are described as being suitable for the tangible items by the producer or a person within the producer's sphere, irrespective of whether such software elements are supplied by the producer or by a third party. In the event of doubt, the software elements shall be presumed to be described as being suitable.

Principle 4: Add-on software altering the features of other products

Where software is intended to be loaded upon hardware in cases other than those covered by Principle 3, or combined with other software, in a way altering the functionality of that hardware or other software, the producer of the first software becomes responsible for the safety of the combined product insofar as the relevant functionality is concerned. Where conformity assessment procedures are foreseen for the hardware or other software, that producer must conduct these procedures as if that producer had placed the combined product on the market. This is without prejudice to the responsibility of the producer of the hardware or other software under Principle 5.

Principle 5: Software and hardware as components of digital ecosystems

(1) Software components must be safe in all digital environments they will foreseeably be run in. Producers of hardware and software must, therefore, be subject to more far-reaching obligations under Principles 3 and 4, ensure safety-preserving compatibility with other software and hardware or, where the combination of components would pose a safety risk, provide for appropriate warnings and other appropriate action including, where necessary in the light of the gravity of the risk and the relevant target group, for technical features that prevent the components from being run together.

(2) Where the producers of different components of digital ecosystems market their products as being recommended to run together, and where it is clear that one component must have caused harm in a way triggering liability, while it is unduly difficult for the victim to prove which of the components ultimately caused the harm, the producers of all components that might have caused the harm should be jointly and severally liable to the victim.

Principle 6: Standalone software as a product in its own right and equivalence of hardware and software

(1) Where the law currently provides for safety and liability rules for products in general, those rules should be extended to, or be interpreted as covering, standalone software within the broad meaning of Principle 1 (i.e. including, for instance, SaaS). This is without prejudice to creating new rules specifically for software, which may be the preferable option.

(2) Where the law currently provides for sectoral safety and liability rules for particular items, those rules should be extended to, or be interpreted as covering, standalone software if the latter can be functionally equivalent to the items primarily covered.

(3) Paragraphs (1) and (2) should apply to off-the-shelf and customised software alike to the same extent as the law currently applies to both off-the-shelf and customised tangible items.


Chapter II: A Dynamic Concept of Software Safety

Principle 7: Software safety for all target groups

(1) Software must be safe for the different degrees of diligence and digital literacy and skills that can be expected from the group that will foreseeably be using the software (whether or not intended for them), including in particular, where applicable, vulnerable user groups. This includes the way safety updates are offered to or imposed upon users and the extent to which hardware or software allows different software to be run in conjunction with it.

(2) Where it is impossible to design software in a way that it is safe for all groups, reasonable measures must be taken to ensure the software only reaches user groups for which it is safe.

Principle 8: Software safety throughout the lifecycle

(1) Software must be and remain safe throughout its whole lifecycle, from its first being put into circulation (even if called a 'beta version' or similar) to regular updates to its going out of service.

(2) Product monitoring with regard to software throughout its lifecycle is essential and must comprise monitoring of the whole 'digital landscape' including the emergence of new malware and new hardware and software that is likely to be used together with or interact with the software and might create safety risks.

(3) The duration of the software lifespan must be communicated by the producer in a transparent manner and should not be lower than what a user can reasonably expect, taking into account the type of software and all other circumstances. The lifespan of software elements within the meaning of Principle 3 must be adjusted to the tangible item's lifespan, and early obsolescence of the software element is to be avoided for reasons of sustainable economy and consumer protection.

(4) When the lifespan of the software has come to an end and the software is no longer safe, the producer must take appropriate action to prevent further use of the software.




Principle 9: Software updates
(1) Any software that comes with connectivity, or is designed to run on connected hardware, should, where appropriate in the light of safety considerations, be updated over-the-air in a manner that ensures safety throughout the product’s lifecycle.
(2) Where a software update changes the original performance, the functioning or the intended use of the software, safety assessment procedures that had to be conducted for the original software may have to be repeated for the updated software. The extent to which assessment should be repeated depends on the significance of the change. No repeated assessment is required for minor software revisions, such as bug fixes, security patches, or minor usability or efficiency enhancements. Where software updates are supplied on a regular or continuous basis, assessment of the update generation and provision system may be more appropriate.

Principle 10: No abuse of safety considerations
(1) Safety measures, such as updates, must not be abused as a pretext for pursuing the producer’s or third parties’ own commercial interests, such as by unnecessarily adding data collection functions for commercial purposes or consuming storage space and/or requiring extra computing power to accelerate obsolescence of hardware.
(2) Safety concerns must not be abused as a pretext for creating unnecessary lock-in for users of hardware or other software.

Chapter III: Types of Risks and Responses

Principle 11: Safety and liability with regard to individual ‘physical risks’
(1) As regards individual ‘physical risks’ associated with software (such as personal injury caused by an autonomous vehicle or medical robot), these should generally be subjected to traditional types of safety and liability regimes, which must be adapted to the specificities of software. This does not preclude the creation of new regimes of a traditional type (e.g. new safety standards and strict liability regimes for AI).
(2) Where the law currently provides for safety and liability rules addressing risks of
(i) personal injuries, those rules should be extended to, or be interpreted as covering, risks of causing psychological harm that amounts to a recognised state of illness (e.g. depression);
(ii) damage to property, those rules should be extended to, or be interpreted as covering, risks of causing harm to the user’s digital environment, including any data controlled by the user.
(3) Where, under traditional types of safety and liability regimes, responsibilities lie on the operator of a product, the law should take due account of the fact that a party that continuously provides software updates and possibly further software elements may exercise a degree of control over the product that justifies considering that party to be the relevant operator (backend operator) for safety and liability purposes (instead of the person deploying the product, i.e. the frontend operator). Where there exists a registration system for safety and liability purposes (as is often the case, e.g., for motor vehicles), the producer may nominate a party as backend operator, provided this does not jeopardise a victim’s chances of liquidating any damages that may be due; unless this is the case, the producer shall be deemed to be the backend operator.

Principle 12: Safety and liability with regard to other than individual physical risks
(1) Legal regimes concerned with safety and liability should likewise address, in an appropriate manner, risks other than those covered by Principle 11, in particular safety risks such as with regard to cybersecurity, integrity of networks, data protection or fraud prevention, ‘pure economic risks’ (such as economically harmful recommendations given to consumers), and ‘social risks’ (often also called ‘fundamental rights risks’, i.e. risks of harm of a primarily non-material nature caused to individuals, groups or the public at large, such as discrimination, exploitation, manipulation, humiliation or oppression).
(2) Risks mentioned in paragraph (1) may also be addressed by legal frameworks not primarily associated with ‘safety’ or ‘liability’ (such as anti-discrimination law or the law of unfair commercial practices).

Principle 13: Intermediated risks
(1) Safety and liability regimes should cover risks that are not direct but intermediated (i.e. caused by the free decision of the victim or a third party, but instigated by the software). The way another person foreseeably reacts to the software is part of the operation of that software and must be considered for the relevant concepts of safety and liability. This includes recommender systems and any foreseeable automation bias.
(2) By way of exception from paragraph (1), intermediated risks that are not created by the functionality of the software, but merely by content displayed (such as by recommendations given in a video or e-book), should not be covered by safety and liability regimes.
(3) Strict liability must duly take into account the fact that the risk that materialised was an intermediated risk, such as by way of appropriate defences.

Principle 14: Connectivity
Where software involves network connectivity or is designed to run on connected hardware, this adds significantly both to the risks of the product, such as cybersecurity and privacy risks, and to the possibilities the software producer has to monitor the software and manage any safety risks associated with it. The additional risk and additional degree of control justify stricter measures at all levels, i.e. concerning ex ante safety assessment and safety measures, post-market surveillance and appropriate action, as well as liability where harm occurs.

Principle 15: Autonomy, opacity and the role of ‘Artificial Intelligence’
(1) So-called ‘Artificial Intelligence’ (AI) entails a gradual aggravation of particular problems associated with safety and liability, which is due to the opacity of code (‘black-box effect’) and the reduced predictability of operations (‘autonomy’). It is of vital importance to legislate essential requirements and develop specific harmonised standards for the safety of AI.
(2) Harm caused by AI often cannot be traced back to human fault, and it is much more difficult to establish defects because it may be difficult to delineate between the harm caused by a defective AI and harm that was unavoidable when using a certain AI system. Strict liability for AI is therefore an appropriate response in cases of high-risk applications. The definition of what counts as ‘high-risk’ should not only focus on physical risks but also on pure economic and social risks.
(3) Irrespective of whether or not an application counts as a ‘high-risk’ application, a person deploying AI should be liable for malfunctions of the AI to the same extent as that person would be liable for the acts or omissions of a human auxiliary.
(4) Should particular AI legislation be enacted, the definition of AI chosen to describe the scope should be as technology-neutral as possible, focusing on the sophistication of tasks to be performed rather than on any technical detail (such as machine learning).

Principle 16: Distributed ledger technologies
(1) Software running on distributed ledgers is subject to at least the same rules on safety and liability as other software (e.g. the same rules as would apply to multiplayer gaming software on a central server).
(2) For software running on distributed ledgers, new and specific safety requirements concerning governance structures are required, which would allow for effective control of individual and collective safety risks such as fraud, money laundering and the financing of criminal acts, as well as for easy identification of a responsible party and for measures ensuring the enforceability of legal rights and obligations.
(3) Where it is impossible or unreasonably difficult to identify the producer of software running on a distributed ledger, any economic operator that exercises a sufficient degree of control over the network, such as by holding a master key, may be held responsible in the same way as a producer. Single nodes with no particular control rights are only responsible for their own activities.

Principle 17: Relationship between safety and liability
(1) The person that is responsible for product safety should normally also be the person that becomes or may become liable where a safety risk materialises and causes harm. This also applies to strict liability where there is reason to assume that the bulk of risks that ultimately cause harm are product safety risks.
(2) Where the person responsible for product safety cannot demonstrate that it observed all applicable harmonised safety standards, it has to bear the burden of proof with regard to the non-existence of a defect or of fault, where relevant. This is without prejudice to more far-reaching alleviations of the burden of proof for the benefit of the victim.




Chapter IV: General Provisions

Principle 18: Producers and their representatives
(1) Without prejudice to the responsibility of other parties for their own activities within the supply chain, obligations with regard to safety and liability are to be fulfilled by the producer of the product with software elements within the meaning of Principle 3, the producer of add-on software within the meaning of Principle 4, or the producer of the standalone software within the meaning of Principles 5 and 6.
(2) A party that has not developed or reconditioned and released the software or other product but that presents itself as the producer by affixing its name, trade mark or other distinctive mark is to be considered the producer.
(3) Producers of software within the meaning of Principles 4 to 6 not established in the Union should have to nominate an authorised representative established within the Union, subject to Principle 2. This authorised representative should be jointly and severally liable with the producer for all obligations with regard to safety and liability and must be equipped with the necessary mandates and financial means or insurance.

Principle 19: Other responsible economic operators
Distributors, fulfilment service providers, and information society service providers must support efforts to ensure safety and to enforce liability claims. Failure to do so may lead to liability of the relevant economic operator itself. This is without prejudice to Principle 16(3).

Principle 20: Structure of software-relevant regulation
(1) Whether software-related legislation should have a broad scope, covering all or a very broad range of products or activities (often called ‘horizontal’ legislation), or a narrow scope, only covering particular types of products and activities (often called ‘sectoral’ legislation), is determined by a number of factors including
(i) whether the safety requirements or liability imposed are suitable, necessary and proportionate for a broad range or only a narrow range of products or activities, taking into account that any requirements or liability are likely to enhance overall safety and foster user trust, but also to create additional costs and hinder SMEs from entering the market; and
(ii) considerations of clarity, consistency and practicality of the law.
(2) Regulation on safety and liability with regard to software should be risk-based, i.e. avoid imposing measures for cases where these measures are not justified by the gravity of the risk to be mitigated or eliminated.
(3) Safety requirements for software should make use of a combination of legislation, which sets out the main decisions, of delegated acts, and of harmonised standards to be prepared with industry participation.

1. Introduction

Software lies at the very heart of modern products and services that form part of wider digital ecosystems, and debates about the safety- and liability-related aspects of software date from the second half of the 20th century. With the new ‘digital revolution’ we have witnessed over the past decade, which has significantly affected almost every aspect of how we live, work and innovate, the debate has taken on a new dynamic. At the same time, the term ‘software’ has lost much of the focal role it still played in the 20th century, and the debate about software has been overshadowed by emerging related debates, such as those on: Artificial Intelligence (AI) and robotics1, the Internet of Things (IoT)2, the data economy and data strategy3, digital content and services4, and the platform economy5. In each of these contexts, ‘safety’ and ‘liability’ have been mentioned in one way or another.

1 Commission, ‘White Paper on Artificial Intelligence – A European approach to excellence and trust’ COM(2020) 65 final.
2 Commission Staff Working Document, ‘Advancing the Internet of Things in Europe’ SWD(2016) 110 final.
3 Commission, ‘Building A European Data Economy’ COM(2017) 9 final; Commission, ‘A European strategy for data’ COM/2020/66 final.
4 See Directive (EU) 2019/770 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the supply of digital content and digital services OJ L 2019/136, 1 and Directive (EU) 2019/771 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the sale of goods, amending Regulation (EU) 2017/2394 and Directive 2009/22/EC, and repealing Directive 1999/44/EC, OJ L 2019/136, 28.
5 Commission, ‘Online Platforms and the Digital Single Market Opportunities and Challenges for Europe’ COM/2016/0288 final.



However, the boundaries between the debates are becoming more and more blurred, and the conceptual value of terms such as AI is increasingly being called into question. Without a doubt, AI qualifies as software, but not all software qualifies as AI. We may therefore be well advised to look at safety and liability in a somewhat more holistic way and from the perspective of ‘software’.

2. Software and software-enabled products and services

2.1 Definition of ‘software’

2.1.1 Relationship between ‘software’ and ‘computer program’

As yet, there is no generally recognised definition of what amounts to ‘software’ even though the term is widely used, including in EU legislation.6 The habit in common parlance to use the term ‘software’ interchangeably with ‘computer program’, which is also reflected in policy communications7 and legal literature,8 adds to the existing lack of clarity. Legislators have refrained from defining software mainly because a detailed and technical definition runs the risk of being too exclusive and not flexible enough for future technological developments, while a very general definition would not provide much benefit.
The interchangeable use of the terms ‘software’ and ‘computer program’ is also reflected in legislative texts. Directive 2009/24/EC (Computer Programs Directive, CPD)9 for example, which – confusingly enough – is also commonly referred to as the ‘Software Directive’, uses both terms in its recitals10, without defining either of the two.

6 E.g. Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (recast), OJ L 2006/157, 24; Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, OJ L 2017/117, 1; Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and repealing Council Directive 96/98/EC, OJ L 2014/257, 146.
7 E.g. Commission, ‘Proposal for a Council Directive on the legal protection of computer programs’ COM(88) 816 final, OJ C 91/1989, 4.
8 E.g. Peter Chrocziel in Martinek/Semler/Flohr (eds), Handbuch des Vertriebsrechts (4th edn, Beck 2016); Axel Von dem Busche and Tobias Schlelinski in Leupold/Glossner (eds), Münchener Anwaltshandbuch IT-Recht (2nd edn, Beck 2011) Teil 1 para 5.
9 Directive 2009/24/EC of the European Parliament and of the Council of 23 April 2009 on the legal protection of computer programs (Codified version), OJ L 2009/111, 16.
10 Cf. Recital 10 and Recital 11 Directive 2009/24/EC.



In refraining from providing a definition, the European legislator followed a recommendation by IT experts when the Directive was initially proposed in 1989 that any definition would become obsolete due to technological changes.11 This view apparently also prevailed during the revision of the original Directive. Recital 7 of the CPD clarifies that, for the purposes of the Directive, the term ‘computer program’ shall include programs in any form, including those which are incorporated into hardware, and that it also includes preparatory design work leading to the development of a computer program.
The most recognised definition of ‘computer software’ is arguably to be found in Section 1 (iv) of the 1977 WIPO Model Provisions, according to which the term means any or several of the items referred to in (i) to (iii):
(i) ‘computer program’ means a set of instructions capable, when incorporated in a machine-readable medium, of causing a machine having information-processing capabilities to indicate, perform or achieve a particular function, task or result;
(ii) ‘program description’ means a complete procedural presentation in verbal, schematic or other form, in sufficient detail to determine a set of instructions constituting a corresponding computer program;
(iii) ‘supporting material’ means any material, other than a computer program or a program description, created for aiding the understanding or application of a computer program, for example problem descriptions and user instructions.
While a great majority in the literature follows the general approach of the WIPO model rules of considering software to be a broader term than computer program,12 the exact definitions vary. Some authors define software as a computer program including associated data or files (such as constants and configuration files, as well as embedded elements like templates, pictures, music, etc.) and software documentation, which is not necessary for the actual use of the software but for troubleshooting, and for adapting and expanding the software.13 Others mention, instead of the software documentation, the corresponding operating manual.14

11 Article 1 Proposal for a Council Directive on the legal protection of computer programs, COM(88) 816 final.
12 Just to name a few, Jochen Marly, Praxishandbuch Softwarerecht (7th edn, Beck 2018); Thomas Dreier in Dreier/Schulze (eds), UrhG § 69a (3rd edn, Beck 2008); Sebastian Brüggemann, ‘Urheberrechtlicher Schutz von Computer- und Videospielen’ (2015) 31 CR 697.
13 Michael Sonntag in Jahnel/Mader/Staudegger (eds), IT-Recht (Verlag Österreich 2019) 13.
14 Axel Von dem Busche and Tobias Schlelinski in Leupold/Glossner (eds), Münchener Anwaltshandbuch IT-Recht (2nd edn, Beck 2011) Teil 1 para 3, 4.



2.1.2 Towards a technologically-neutral definition for the safety and liability context

As is the case with many other terms, it may not be very helpful to rely on one and the same definition of ‘software’ for any possible context. Rather, there is a core meaning of the term, which should – for the sake of consistency and clarity – be understood along the lines of the definition of ‘computer program’ in the 1977 WIPO Model Provisions. In addition, depending on the context, further items may be included in (or excluded from) the notion of ‘software’.

2.1.2.1 Conceptual emancipation from the IP context

From an intellectual property point of view, it makes sense to include preparatory design work, and possibly even other preparatory work, in the definition, as has been done by the CPD. However, preparatory design work does not have any impact on safety and liability that would be independent of the impact of the final computer program. In other words, if a safety risk inherent in the preparatory design work has been removed before the final computer program was released in a way allowing the risk to materialise, this is no longer relevant. And if it has not been removed in time, and the final computer program therefore poses a safety risk, the question of where in the development process that risk was initially created may, at most, be relevant for the internal distribution of responsibility, but preparatory design work does not have any particular role in this context and stands on the same footing as, for instance, any step in coding or training. In the safety and liability context, preparatory design work should therefore be excluded from the notion of ‘software’.
This is also why the ‘program description’ within the meaning of Section 1 (ii) of the WIPO Model Provisions, or the ‘algorithm’ as such (i.e. the general description of how a problem can be solved, without necessarily being implemented in a specific programming language), are more interesting from an IP perspective than from a safety and liability perspective. However, given that they can hardly be distinguished from the program and its functionality itself, they should be included also for safety and liability purposes.
The elements mentioned in Section 1 (iii) of the WIPO Model Provisions, i.e. other ‘supporting material’ created for aiding the understanding or application of a computer program, do have an impact on the safety of the computer program. This applies irrespective of whether these elements are in electronic form or not. The ‘supporting materials’ comprise not only operating manuals, but also libraries and related non-executable data, such as constants, configuration data or other embedded elements (e.g. databases, templates, pictures, music).



They should be included in the notion of ‘software’ for the purposes of a safety and liability context. This shifts the problem from the question of what is included in the notion of ‘software’ in a safety and liability context to the question of what is excluded, or whether possibly software is just everything that is not hardware. Arguably, what is excluded from a rather traditional notion of software is data as such, such as video files insofar as they neither feature embedded computer programs nor qualify as supporting materials for a computer program, or mere input data.

2.1.2.2 Towards a more technologically neutral concept of software

While it is possible to formulate a more traditional definition of ‘software’, which still somehow centres around the notion of ‘computer program’, also for the safety and liability context, this is not the same as saying that this traditional definition is the best definition for a safety and liability context. It may be helpful at this point to cast a glance at some related concepts that have recently been introduced by EU legislation, i.e. the terms ‘digital content’, ‘digital services’ and ‘digital elements’ as well as ‘digital environment’ and the related term of ‘integration’, introduced by Directives (EU) 2019/770 (Digital Content and Services Directive, DCSD)15 and 2019/771 (Sale of Goods Directive, SGD).16 We find these concepts in a contractual context and with a view mainly to conformity requirements and remedies for lack of conformity. When looking at the definitions, it is helpful to also consider some provisions on scope:

15 Directive (EU) 2019/770 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the supply of digital content and digital services OJ L 2019/136, 1.
16 Directive (EU) 2019/771 of the European Parliament and of the Council of 20 May 2019 on certain aspects concerning contracts for the sale of goods, amending Regulation (EU) 2017/2394 and Directive 2009/22/EC, and repealing Directive 1999/44/EC, OJ L 2019/136, 28.



Article 2 DCSD Definitions
For the purposes of this Directive, the following definitions apply:
(1) ‘digital content’ means data which are produced and supplied in digital form;
(2) ‘digital service’ means:
(a) a service that allows the consumer to create, process, store or access data in digital form; or
(b) a service that allows the sharing of or any other interaction with data in digital form uploaded or created by the consumer or other users of that service;
(3) ‘goods with digital elements’ means any tangible movable items that incorporate, or are inter-connected with, digital content or a digital service in such a way that the absence of that digital content or digital service would prevent the goods from performing their functions;
(4) ‘integration’ means the linking and incorporation of digital content or a digital service with the components of the consumer’s digital environment in order for the digital content or digital service to be used in accordance with the requirements for conformity provided for by this Directive;
…
(9) ‘digital environment’ means hardware, software and any network connection used by the consumer to access or make use of digital content or a digital service;

Article 3 DCSD Scope
…
(2) This Directive shall also apply where the digital content or digital service is developed in accordance with the consumer’s specifications.
(4) This Directive shall not apply to digital content or digital services which are incorporated in or inter-connected with goods within the meaning of point (3) of Article 2, and which are provided with the goods under a sales contract concerning those goods, irrespective of whether such digital content or digital service is supplied by the seller or by a third party. In the event of doubt as to whether the supply of incorporated or inter-connected digital content or an incorporated or inter-connected digital service forms part of the sales contract, the digital content or digital service shall be presumed to be covered by the sales contract.
(5) This Directive shall not apply to contracts regarding:
(a) the provision of services other than digital services, regardless of whether digital forms or means are used by the trader to produce the output of the service or to deliver or transmit it to the consumer;
(b) electronic communications services as defined in point (4) of Article 2 of Directive (EU) 2018/1972, with the exception of number-independent interpersonal communications services as defined in point (7) of Article 2 of that Directive;
…

Comparing the concepts used by the DCSD and the concepts of ‘software’ developed in a computer sciences context and/or an IP law context, it transpires that, in the contractual context:
– whether or not something qualifies as a ‘computer program’ is irrelevant – all kinds of digital data are included, i.e. also pure video files, etc., and come under the term ‘digital content’;
– a certain category of digital services is treated in absolutely the same manner as digital content; this category can be characterised as (i) consisting in the provision of some kind of digital infrastructure; and (ii) being utilised by the customer for an activity related to data; but (iii) NOT qualifying as electronic communication services;
– it is irrelevant whether digital content or a digital service is ‘off-the-shelf’ or customised;
– whether digital content or a digital service is to be considered a standalone content or service, or whether it is a ‘digital element’ of something else, is to be decided considering (i) objective criteria of functional nexus; and (ii) the will of the relevant parties concerned; but (iii) NOT who exactly initiates the supply;
– the three components of ‘hardware’, ‘software’ and network connectivity are considered to make a complete ‘digital environment’.

The question arises whether ‘digital content’ and ‘digital services’ could be more attractive to delineate the boundaries of what constitutes ‘software’ in the context of safety and liability. One obvious advantage of this would be better coherency and consistency within the acquis. Another obvious advantage would be that the definition is much more technologically neutral – in particular, it would not be necessary to find out whether particular digital content contains ‘sets of instructions’ or ‘merely data’, considering that more sophisticated datasets (such as a more sophisticated video file) will often contain small sets of instructions. To a certain extent, instructions and data can functionally replace each other. If a smart vehicle is to change direction, someone could either have coded the instruction to turn right when changing from Main Street to Baker Street, or (more probably) there could be the instruction to change direction at a turn of the road, with a separate electronic map telling the system that Baker Street is perpendicular to Main Street. That electronic map could be provided together with the set of instructions, or it could be supplied and marketed separately. For the purposes of safety and liability, it seems to be unnecessarily complicated and burdensome to differentiate between all these different cases, and it would not be appropriate to have different rules for them.
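The functional equivalence of instructions and data can be made concrete with a minimal sketch (a hypothetical Python illustration; the street names, function names and map structure are invented for this example and do not describe any real system):

```python
# Hypothetical sketch: two functionally equivalent ways of making a vehicle
# turn right from Main Street into Baker Street.

# Variant 1: the manoeuvre is hard-coded as an instruction in the program.
def next_manoeuvre_hardcoded(current_street, target_street):
    if (current_street, target_street) == ("Main Street", "Baker Street"):
        return "turn right"
    return "continue straight"

# Variant 2: a generic instruction ("change direction at a turn of the road")
# combined with separately supplied map data describing the road geometry.
ROAD_MAP = {
    # the map data could be supplied and marketed separately from the program
    ("Main Street", "Baker Street"): "perpendicular",
}

def next_manoeuvre_map_based(current_street, target_street):
    if ROAD_MAP.get((current_street, target_street)) == "perpendicular":
        return "turn right"
    return "continue straight"

# Both variants produce the same behaviour for the user of the vehicle.
assert (next_manoeuvre_hardcoded("Main Street", "Baker Street")
        == next_manoeuvre_map_based("Main Street", "Baker Street"))
```

Whether the decisive information sits in the code itself or in a separately supplied dataset makes no difference to the behaviour experienced by the user, which underlines why a technologically neutral notion of ‘software’ should not turn on that distinction.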

The question arises whether ‘digital content’ and ‘digital services’ could be more attractive to delineate the boundaries of what constitutes ‘software’ in the context of safety and liability. One obvious advantage of this would be better coherency and consistency within the acquis. Another obvious advantage would be that the definition is much more technologically-neutral — in particular, it would not be necessary to find out whether particular digital content contains ‘sets of instructions’ or ‘merely data’, considering that more sophisticated datasets (such as a more sophisticated video file) will often contain small sets of instructions. To a certain extent, instructions and data can functionally replace each other. If a smart vehicle is to change direction, someone could either have coded the instruction to turn right when changing from Main Street to Baker Street, or (more probably) there could be the instruction to change direction at a turn of the road, with a separate electronic map telling the system that Baker Street is perpendicular to Main Street. That electronic map could be provided together with the set of instructions, or it could be supplied and marketed separately. For the purposes of safety and liability, it seems to be unnecessarily complicated and burdensome to differentiate between all these different cases, and it would not be appropriate to have different rules for them. The only point where the contractual context necessarily differs from the noncontractual context is the notion of what counts as a ‘digital element’ of something else (e.g. of smart goods). While the DCSD looks, in addition to the more obWendehorst/Duller, Safety- and Liability-Related Aspects of Software



The only point where the contractual context necessarily differs from the non-contractual context is the notion of what counts as a ‘digital element’ of something else (e.g. of smart goods). While the DCSD looks, in addition to the more objective point of functional nexus, at whether or not the ‘digital element’ was supplied under the same contract as the goods,17 this criterion cannot be relevant in the non-contractual context. Rather, in the non-contractual context, the criterion must be whether the producer of the goods has, explicitly or implicitly, declared that precisely this ‘digital element’ is suitable for precisely these goods. It is thus argued that the definitions of ‘digital content’ and ‘digital services’ in the DCSD should be taken as describing ‘software’ within the context of safety and liability, and that, with the minor modification just mentioned, the definition of ‘digital element’ of another product should be understood along the lines of the DCSD.

2.2 The lifecycle of software

2.2.1 Linear development lifecycle models

Like so many other phenomena, software has its own characteristic lifecycle. After the first business computer was installed at General Electric in 1954, developers started to create lists of Dos and Don’ts that would increase the efficiency of the development process and reduce the likelihood of errors. These lists evolved into models that specify sequences of activities for the development of software (so-called Software Development Lifecycles, SDLCs).18 The first of these conceptual frameworks was already developed in 1956 and is called the waterfall model. It consists of consecutive stages, each of which needs to be concluded before moving to the next one, beginning with the planning and ending with the implementation of the software.19 It soon became clear that bugs and errors might only emerge after the software has been put on the market. To address this, an additional stage was added: the maintenance phase.20

17 Article 2(3) Directive (EU) 2019/770.
18 Gerald D Everett and Raymond McLeod, Software Testing: Testing across the Entire Software Development Life Cycle (Wiley-Interscience 2015), 30.
19 Nicholas David Birrell and Marty Ould, A practical handbook for software development (Cambridge University Press 1985).
20 Walter Royce, ‘Managing the Development of Large Software Systems: Concepts and Techniques’ (1970) Technical Papers of Western Electronic Show and Convention (Wescon) 382.



Figure 1: The cascade model of software development (source: N. D. Birrell and M. A. Ould, A practical handbook for software development (Cambridge University Press 1985))

While today various development models are used (see 2.3.), software maintenance has remained an integral part of software development, as it is necessary to ensure the safety and functionality of the software and that users’ expectations and requirements are met. The importance of the maintenance phase is increasing, as software is becoming more and more complex. Due to the size and degree of complexity of modern software, it is even said that maintenance is continued development, because large software is never really finished.21

21 Pierre Bourque and Richard E Fairley, Guide to the Software Engineering Body of Knowledge (SWEBOK): Version 3.0 (IEEE Computer Society Press 2014), 5-3.



2.2.2 Iterative lifecycle models and open software development

In the past decades, numerous development lifecycle models have emerged.22 Today, many of the models used are not linear in nature, like the waterfall model, but follow an iterative approach (e.g. agile software development and evolutionary programming).23 This means that design, coding and testing activities overlap and may even be done simultaneously. Agile software development, which has gained particular importance over the last two decades, is characterised by a close involvement of customers and by reacting to changing requirements even in late stages of the development instead of following a strict plan. Furthermore, an emphasis is put on face-to-face interactions between the people involved in the project.24 Agile software development methods are often seen as less bureaucratic and more flexible alternatives to the traditional linear models, as documentation is only prepared where absolutely necessary.25 To deal with the increased demand for fast and continuous maintenance services, agile software development methods have even been adapted for the maintenance phase.26
Another important example of an iterative development approach is open source software development. The underlying idea of this approach is that the source code is made publicly available, and anyone can make meaningful contributions to the development of the software.27 In this way, the software can be coded, tested and debugged in parallel by multiple contributors.28 Due to the distributed nature of open software development, the division of tasks deviates significantly from traditional software development.

22 For an overview see Nayan Ruparelia, ‘Software Development Lifecycle Models’ (2010) 35 ACM SIGSOFT Software Engineering Notes 8; Shriram Vasudevan, Subashri Jeyakumar G and Prashant Nair, Software Engineering (Alpha Science Publishers 2017), 1.18.
23 Pierre Bourque and Richard E Fairley, Guide to the Software Engineering Body of Knowledge (SWEBOK): Version 3.0 (IEEE Computer Society Press 2014), 3-5.
24 ‘Manifesto for Agile Software Development’ available at accessed 4 November YEAR?; Torgeir Dingsøyr and others, ‘A Decade of Agile Methodologies: Towards Explaining Agile Software Development’ (2012) 85 Journal of Systems and Software 1213.
25 Tsun Chow and Dac-Buu Cao, ‘A Survey Study of Critical Success Factors in Agile Software Projects’ (2008) 81 Journal of Systems and Software 961; Rashina Hoda, James Noble and Stuart Marshall, ‘Agile Undercover: When Customers Don’t Collaborate’, Agile Project Management (2010).
26 Pierre Bourque and Richard E Fairley, Guide to the Software Engineering Body of Knowledge (SWEBOK): Version 3.0 (IEEE Computer Society Press 2014), 5-8.
27 Srinarayan Sharma, Vijayan Sugumaran and Balaji Rajagopalan, ‘A Framework for Creating Hybrid Open-Source Software Communities’ (2002) 12 Information Systems Journal 7.
28 Luyin Zhao and Sebastian Elbaum, ‘Quality Assurance under the Open Source Development Model’ (2003) 66 Journal of Systems and Software 65.



Patches and updates are provided by contributors who are scattered across the globe and work either individually or in temporary teams on particular problems.29 Newly submitted code is integrated by so-called maintainers, who ensure that the contributions are in line with the general vision of the project and meet its standards. Before the updates are released to the general public, the new features are made available to the development community (so-called alpha versions), who test the new version for bugs and provide feedback. The maintainers can then decide whether to add the new code to the main version of the software or return it to the contributor for revision. Unlike in agile software development, communication between the involved actors primarily takes place via online communication tools. Contributors are usually encouraged to provide only minor changes, as it is easier to detect flaws in the code and to understand the effect the patch may have on the software. To ensure transparency, most projects require a ‘signed-off-by’ line, which contains the full name and email address of the contributor of the new code.30

2.2.3 The changing role of software maintenance

Software maintenance aims at sustaining the functionality of software throughout its lifecycle by modifying the deployed software while maintaining its integrity. Most of the maintenance activities are performed during the post-delivery stage, but some measures, such as the planning and logistics of post-delivery operations, already need to be taken pre-delivery.31 The ability to provide quick and reliable updates is vital for the competitiveness of software vendors.32 However, the maintenance of software can also pose significant challenges for software enterprises. Finding a fault in several million lines of source code is a challenge in itself. On top of that, businesses might be reluctant to dedicate large amounts of resources to the maintenance of software, and would rather invest in new software projects.

29 Srinarayan Sharma, Vijayan Sugumaran and Balaji Rajagopalan, ‘A Framework for Creating Hybrid Open-Source Software Communities’ (2002) 12 Information Systems Journal 7.
30 Ibrahim Haddad and Brian Warner, ‘Understanding the Open Source Development Model’ (The Linux Foundation 2011).
31 Pierre Bourque and Richard Fairley, Guide to the Software Engineering Body of Knowledge (SWEBOK): Version 3.0 (IEEE Computer Society Press 2014), 5-2.
32 Keith Bennett and Václav Rajlich, ‘Software Maintenance and Evolution: A Roadmap’, Proceedings of the Conference on The Future of Software Engineering (Association for Computing Machinery 2000), 75, 76.



This is because, unlike the initial development of software, the maintenance of software does not necessarily have a clear return on investment, since modifications are often provided without remuneration.33 Furthermore, with ongoing maintenance activities, the initial source code will become even more complex and may lose its structure and coherence due to various changes by different software engineers.34 Therefore, maintenance is very cost-intensive and constitutes a large part of the overall development lifecycle costs.
Software maintenance is not only about correcting faults, but also about improving design, ensuring interoperability with different hardware, software and system features, or implementing new functions.35 According to the international standard for software maintenance, ISO/IEC/IEEE 14764:2016,36 four types of maintenance can be distinguished:
– Corrective maintenance: reactive modification (or repairs) of a software product performed after delivery to correct discovered problems. Includes emergency maintenance, which is defined as an unscheduled modification performed to temporarily keep a software product operational pending corrective maintenance.
– Preventive maintenance: modification of a software product after delivery to detect and correct latent faults in the software product before they become operational faults.
– Adaptive maintenance: modification of a software product performed after delivery to keep a software product usable in a changed or changing environment. For example, the upgrade of the operating system might render changes to the software necessary.
– Perfective maintenance: modification of a software product after delivery to provide enhancements for users, improvement of program documentation, and recoding to improve software performance, maintainability, or other software attributes.
These four types can be classified as ‘correction maintenance’ and ‘enhancement maintenance’ and further categorised into ‘proactive’ and ‘reactive’ measures (see graphic below).
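The two-by-two classification just mentioned can be rendered schematically as follows (a sketch in Python; the grouping shown – corrective and adaptive maintenance as reactive measures, preventive and perfective maintenance as proactive ones – is the grouping commonly attributed to ISO/IEC/IEEE 14764 and is stated here as an assumption, not as a quotation from the standard):

```python
# Sketch: the four maintenance types arranged along the two axes mentioned in
# the text (correction vs enhancement, proactive vs reactive). The grouping is
# assumed for illustration and does not quote the standard.
MAINTENANCE_TYPES = {
    "corrective": {"purpose": "correction",  "timing": "reactive"},
    "preventive": {"purpose": "correction",  "timing": "proactive"},
    "adaptive":   {"purpose": "enhancement", "timing": "reactive"},
    "perfective": {"purpose": "enhancement", "timing": "proactive"},
}

for name, axes in MAINTENANCE_TYPES.items():
    print(f"{name}: {axes['timing']} {axes['purpose']} maintenance")
```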

33 Pierre Bourque and Richard E Fairley, Guide to the Software Engineering Body of Knowledge (SWEBOK): Version 3.0 (IEEE Computer Society Press 2014), 5-4 – 5-7.
34 MS Krishnan, Tridas Mukhopadhyay and Charles Kriebel, ‘A Decision Model for Software Maintenance’ (2004) 15 Information Systems Research 396, 397.
35 Keith Bennett and Václav Rajlich, ‘Software Maintenance and Evolution: A Roadmap’, Proceedings of the Conference on The Future of Software Engineering (Association for Computing Machinery 2000), 75, 76.
36 Available at accessed 19 September 2020.



The maintenance activities of software developers are made available to users in the form of updates and upgrades. Traditionally, a distinction is made between ‘updates’ (also referred to as ‘patches’ or ‘minor versions’), which provide fixes for bugs and minor improvements, and ‘upgrades’ (also referred to as ‘major versions’), which introduce a reworked version of the software and add new functions or change existing ones significantly.
Three decades ago, updates and upgrades were rather scarce. Usually, software remained unchanged until it was replaced by a completely new version after a couple of years.37 Today, software, especially on connected devices, is updated and upgraded on a constant basis. The fact that upgrades and updates are no longer provided manually by inserting a set of floppy disks but are now often downloaded and installed automatically – sometimes even without being noticed by the user – is only one reason why the number of updates and upgrades has increased significantly over the past decades. Another explanation for the high number of updates is the degree of complexity of software. The source code of modern software can have several million lines, which renders mistakes and bugs so probable that their occurrence borders on certainty.38 Hence, we often find the statement that software cannot be developed without errors.39
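The update/upgrade distinction is often mirrored in version numbering. The following sketch assumes a ‘major.minor.patch’ numbering convention (a widespread industry practice, not a criterion taken from the texts discussed here):

```python
# Illustrative sketch: classifying a new release as an 'upgrade' (major
# version change) or an 'update' (minor or patch-level change), assuming a
# 'major.minor.patch' version string.
def classify_release(old_version: str, new_version: str) -> str:
    old_major = int(old_version.split(".")[0])
    new_major = int(new_version.split(".")[0])
    if new_major > old_major:
        return "upgrade (new major version)"
    return "update (bug fixes or minor improvements)"

print(classify_release("1.4.2", "2.0.0"))  # upgrade (new major version)
print(classify_release("1.4.2", "1.4.3"))  # update (bug fixes or minor improvements)
```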

37 Thomas Riehm, ‘Updates, Patches etc. – Schutz nachwirkender Qualitätserwartungen’ in Schmidt-Kessel/Malte Kramme (eds), Geschäftsmodelle in der digitalen Welt (JWV 2017), 202.
38 Diane Rowland, ‘Liability for Defective Software’ (1991) 22 Cambrian L Rev 78, 78; Michael Sonntag in Jahnel/Mader/Staudegger (eds), IT-Recht (Verlag Österreich 2019), 25.
39 E.g. Bundesamt für Sicherheit in der Informationstechnik, Sicherheitsbericht (BSI 2015), 10.



On the other hand, updates have to address the elevated safety risks caused by increased connectivity and the integration of software in more and more products. Most devices no longer operate in a closed network but are connected to the internet and are thus potential targets for cyberattacks. Furthermore, risks are no longer limited to loss of data, privacy intrusions or the disclosure of trade secrets, but can also have physical manifestations if, for example, hackers take over the control of a car by hacking its onboard system.40
To ensure the safety and functioning of software in today’s fast-changing digital environment, companies are changing their traditional approach of providing small updates on a regular basis and, every few years, larger upgrades that introduce new major versions with additional and improved features and ensure compatibility with the latest hardware and operating systems.41 Enabled by the possibility to deliver software updates over-the-air (OTA), companies are providing small fixes and new features regularly and instantly, rather than compiling possible improvements over the course of months and years and then providing them all at once with one large update. For example, Microsoft changed its approach of providing a new Windows version with a whole range of new functions, a revised interface, and new standard programs every few years. With Windows 10, Microsoft introduced ‘Windows as a Service’ and provides so-called ‘feature updates’ that add new features twice a year and monthly ‘quality updates’ that deliver security and non-security fixes.42 Similarly, Google provides for its Internet browser Chrome a major release roughly every six weeks and smaller updates every two to three weeks.43 Both upgrades and updates are downloaded automatically in the background and are installed once the browser is closed and reopened. Users are only notified of a new update if they have not closed the browser and an already downloaded update is pending installation.44 This new approach to software updates allows companies to react better and more quickly to emerging errors and new developments.

40 Andy Greenberg, ‘Hackers Remotely Kill a Jeep on the Highway – With Me in It’ (WIRED) accessed 20 October 2020.
41 Thomas Riehm, ‘Updates, Patches etc. – Schutz nachwirkender Qualitätserwartungen’ in Schmidt-Kessel/Malte Kramme (eds), Geschäftsmodelle in der digitalen Welt (JWV 2017), 204.
42 accessed 19 September 2020.
43 accessed 19 September 2020.
44 accessed 19 September 2020.



The lifecycle of software that follows the traditional approach of periodic major versions replacing older versions differs considerably from that of software that follows the approach outlined above of small and regular updates being pushed onto the devices. Traditionally, after a software version is put on the market, it enters the maintenance phase. The developer still provides small fixes and improvements for the current version but is simultaneously already working on a completely new version. After the new software version is released, support for the old software remains available for a certain amount of time before it is eventually discontinued (i.e. the end of the maintenance phase). Connected software is considered obsolete with the end of support, because, without security updates, the continued use of the software poses a serious security risk.45 An example of this traditional lifecycle is provided by the older Windows versions: two years after the release of Windows Vista, Microsoft put Windows 7 on the market; the mainstream support for Vista ended three years after that. The lifespan of software becomes less clear if software is no longer replaced by a reworked version, but changes are made by periodically providing small updates that are pushed to the devices automatically. Users are not provided with a final version of software that is maintained and eventually replaced, but with software that is under constant change.

2.2.4 A dynamic notion of safety and risk

The risk profile of software is becoming more dynamic due to this shift from sporadic updates and upgrades to a constant stream of changes. This dynamic nature of software is also what distinguishes it from traditional products. Any safety and liability legislation applicable to software needs to take into account that the software of two weeks ago might not be the same as today. Leading by example in this regard is the new Cybersecurity Act, which, inter alia, clarifies that data should be protected during the entire lifecycle of an ICT product, service or process, and that such products, services and processes should be provided with up-to-date software and with a mechanism for secure updates.46

45 Peter Sandborn, ‘Software Obsolescence—Complicating the Part and Technology Obsolescence Management Problem’ (2008) 30 IEEE Trans on Components and Packaging Technologies 886.
46 Article 51 Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act), OJ L 2019/151, 15.



The dynamic nature of software and the changing environment in which it operates also has to be mirrored by dynamic certification schemes. Under the traditional approach, only the version that was assessed by the relevant certification body holds the certificate. Therefore, once the software is updated, the certificate is no longer valid. While the underlying rationale of this approach, that every change in the source code may introduce new vulnerabilities, is highly legitimate, it fails to sufficiently account for the regularity with which updates are provided. Re-certification after every update may not only be overly burdensome for developers, but may also lead to the unsatisfactory situation that, after a security update has been provided, users need to choose between a potentially unsafe certified version and an uncertified patched version.47 Certification schemes for software need to strike the right balance between ensuring the safety of patched versions and not rendering updates and recertification unreasonably burdensome.
The current draft EUCC cybersecurity candidate scheme48 prepared by the European Union Agency for Cybersecurity (ENISA) is based on the paradigm of assurance continuity and may serve as an example for a dynamic approach to software certification. Assurance continuity defines an approach to ‘minimise redundancy in ICT Security evaluation, allowing a determination to be made as to whether independent evaluator actions need to be re-performed’.49 In other words, where changes to certified ICT products are made, such as patches to certified versions, or the same version operates in a different environment, the evaluation work that has already been done does not need to be repeated under all circumstances. For minor changes that only have little effect on the assurance level, the certification can be maintained by a partial evaluation that only focusses on those changes that may affect the assurance level. Where changes have a major impact on security, a more thorough re-evaluation is necessary. However, any results from an earlier evaluation that are still valid are reused. The EUCC also foresees a procedure of re-assessment for situations where no changes have been made, but the threat environment has significantly changed since the initial certification.50
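The assurance-continuity logic described in the preceding paragraph can be summarised, in very simplified form, as a decision rule (a hypothetical Python illustration; the categories and return values are paraphrases for this example and do not reproduce the EUCC procedure):

```python
# Very simplified sketch of the assurance-continuity idea: how much
# (re-)evaluation a change to a certified product might trigger. The
# categories are illustrative paraphrases, not the EUCC's own terms.
def evaluation_needed(change: str) -> str:
    if change == "none, but threat environment changed":
        return "re-assessment of the unchanged product"
    if change == "minor (little effect on assurance level)":
        return "partial evaluation focusing on the changed parts"
    if change == "major impact on security":
        return "re-evaluation, reusing earlier results that remain valid"
    return "no additional evaluation"

print(evaluation_needed("minor (little effect on assurance level)"))
```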

47 See Javier Tallón, Patch Management in ISO/IEC15408 & ISO/IEC18045, Request for a new study period, available at accessed 19 October 2020, 7.
48 Common Criteria based European cybersecurity certification scheme (EUCC), available at accessed 19 October 2020.
49 Ibid Annex 11.
50 Ibid.



2.3 Summary of findings

There is no point in searching for the ‘one right answer’ concerning the relationship between the terms ‘software’ and ‘computer program’, or ‘software’, ‘product’, and ‘service’. Rather, we should search for a concept and definition of ‘software’ where the specific characteristics of the subject matter at hand – in this case safety and liability – best match the scope of the definition, while still remaining within the boundaries of possible literal meanings of ‘software’. In other words, in analysing issues of safety and liability with regard to ‘software’, we should not primarily look at how the term ‘software’ is understood by computer scientists for coding purposes, or by IP lawyers for determining the scope of copyright protection, but at how the term should be understood to make sense with regard to safety and liability.
It is held that, in a safety and liability context, ‘software’ should be understood as comprising digital content and digital services as defined by the DCSD. This is to ensure consistency of the acquis and to reflect the increasing commutability of software in the more traditional sense on the one hand and of ‘mere data’ (e.g. an electronic map) as well as of the provision of digital infrastructures beyond the immediate control of the user (e.g. SaaS) on the other hand. An important argument for including all SaaS schemes is also that, with software maintenance increasingly becoming a continuous process, the divide between provision of updates and provision of software-related services becomes blurred, and the principles of technological neutrality and functional equivalence mandate the inclusion of software irrespective of the location of storage.
For similar reasons, the concept of tangible items ‘with software elements’ should be understood in a manner that is consistent with the treatment of ‘digital elements’ within the meaning of the SGD. Tangible items ‘with software elements’ should therefore include tangible items that incorporate, or are inter-connected with, software in such a way that the absence of that software would prevent the items from performing the functions ascribed to them by the producer or a person within the producer’s sphere, or which the user could reasonably expect, taking into account the nature of the items and the description given to them by the producer or a person within the producer’s sphere. One immediate implication of this is that the producer has to ensure safety and assume liability for such digital elements as are described as being suitable for the tangible items by the producer or a person within the producer’s sphere, irrespective of whether such software elements are supplied by the producer or by a third party; in the event of doubt, the software elements should be presumed to be described as being suitable.




It further follows from insights into the software lifecycle that software must be and remain safe throughout its whole lifecycle, from its first being put into circulation (even if called a ‘beta version’ or similar) to regular updates to its going out of service. Product monitoring with regard to software throughout its lifecycle is essential and must comprise monitoring of the whole ‘digital landscape’, including the emergence of new malware and new software that is likely to interact with the product and might cause the product to become unsafe. The duration of the software lifespan in terms of the software maintenance phase must be communicated by the producer in a transparent manner and should not be shorter than what a user can reasonably expect, taking into account the type of software and all other circumstances.
The dynamic concept of safety brings with it additional challenges. Where a software update changes the original performance, the functioning or the intended use of the software, safety assessment procedures that had to be conducted for the original software may have to be repeated for the updated software. The extent to which assessment should be repeated depends on the significance of the change. Where software updates are supplied on a regular or continuous basis, dynamic safety assessments, such as assessment of the update generation and provision system, may be more appropriate.




3. Risks associated with software

3.1 Classification of risks

3.1.1 Safety risks and functionality risks (the latter not included in the Study)

Both safety and liability legislation are responses to risks that are created by particular objects or by human behaviour. As to the classification of risks that are potentially associated with software, there is a fundamental divide between safety risks and functionality risks.

Safety risks comprise all risks of harm caused by the software or other product to people or to assets different from the software or other product itself. For example, where an autonomous vehicle hits a pedestrian, this is clearly a safety risk, as is a security gap in online banking software that allows criminals to initiate unauthorised payment transactions. By contrast, functionality risks comprise risks of the software or other product not performing properly, i.e. the user not getting ‘good value for money’. Thus, if the autonomous vehicle stops running, or if the online banking software fails to process payment transactions in an appropriate manner, these are functionality risks. Functionality risks have, traditionally, been included in a seller’s or supplier’s liability for lack of conformity, such as under the DCSD and SGD, and have been excluded from non-contractual liability for property damage, such as under Directive 85/374/EEC (Product Liability Directive, PLD).51 They should be dealt with by contract law, by regimes fixing minimum quality standards (such as under the new Green Deal with regard to durability) and possibly by unfair competition law. They will, however, not be dealt with in this Study.

Where a software or other product causes harm to itself, e.g. where an autonomous vehicle hits a rock and is wrecked in the accident, this is somewhat of a borderline case.52

51 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 1985/210, 29.
52 Cf. Piotr Machnikowski in Machnikowski (ed), European Product Liability: An Analysis of the State of the Art in the Era of New Technologies (Intersentia 2019), para 150; Gerhard Wagner, ‘Produkthaftung für autonome Systeme’ (2017) 217 AcP 707, 727; Gert Brüggemeier, Tort Law in the European Union (2nd edn, Wolters Kluwer 2018), para 386 and Bernhard Koch ‘Product Liability 2.0 – Mere Update or New Version?’ in Lohsse/Schulze/Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Nomos 2019), 104; Gerald Spindler, ‘Roboter, Automation, Künstliche Intelligenz, Selbst-Steuernde Kfz – Braucht das Recht Neue Haftungskategorien?’ (2015) 31 CR 766, 773.



On the one hand, there is usually one component which causes harm to the other components, which makes the situation somewhat similar to that where the software or other product causes harm to other assets of the same owner.53 On the other hand, however, self-infliction of damage or self-destruction of a software or other product primarily reduces the functionality of the software or other product itself, and at the end of the day it is close to impossible to draw a line between the case where a software or other product simply stops functioning (e.g. because it is of poor quality) and the case where the software or other product causes harm to itself. Self-infliction of damage and self-destruction should therefore be qualified as functionality risks, which is in line with Article 9 of the PLD, which restricts liability under the PLD to damage to, or destruction of, any item of property other than the defective product itself.

However, with the increased connectivity of devices, software does not necessarily come preinstalled with the hardware but is downloaded by the user. What counts as ‘one and the same’ product may therefore be difficult to define. Where software and hardware are from different suppliers and are neither sold nor marketed together, damage caused by a defect in the software to the hardware device can hardly be considered self-inflicted damage. To determine whether software and hardware are components of the same product or are independent products, inspiration can be drawn from the DCSD and SGD. In order to ensure consistency within the acquis, similar criteria should be applied as are used for deciding whether or not software counts as a ‘digital element’ of another item54 (see supra 2.2).

Both types of risks – safety risks as well as functionality risks – frequently materialise at the same time (e.g. a defect of the autonomous vehicle causes an accident, resulting both in a pedestrian’s personal injury and in the destruction of the vehicle).

3.1.2 Physical, pure economic and social risks

Safety risks can be further divided into different categories, depending on the type of harm that is or might be caused. Traditionally, death, personal injury, and damage to property have played a special role within safety and liability frameworks. These special risks can be described as ‘physical’ risks.

53 See e.g. Bernhard A Koch ‘Product Liability 2.0 – Mere Update or New Version?’ in Lohsse/Schulze/Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Nomos 2019), 104.
54 Article 2(5)(a) Directive (EU) 2019/771.



Physical risks continue to play their very special role also in the digital era, but the concept must be understood more broadly and include not only death, personal injury, and damage to property in the traditional sense, but also damage to data and to the functioning of other algorithmic systems. Where, e.g., the malfunctioning of software causes the erasure of important customer data stored by the victim in some cloud space, this should have the same legal effect as the destruction of a hard disk drive or of paper files with customer data (which is not to say that all data should automatically be treated in exactly the same way as tangible property in the tort liability context55). Likewise, where tax management software causes the victim’s customer management software to collapse, this must be considered a physical risk, irrespective of whether the customer management software was run on the victim’s hard disk drive or somewhere in the cloud within an SaaS scheme. While this is unfortunately still disputed under national tort law,56 any attempt to draw a line between data stored on a physical medium owned by the victim and data stored otherwise seems to be completely outdated and fails to recognise the functional equivalence of different forms of storage.

Pure economic risks57 are economic risks that are not just the result of the realisation of physical risks. E.g., where medical software causes a surgery to fail, resulting in personal injury and consequently in hospitalisation and loss of earnings, the costs of hospitalisation and the loss of earnings are an economic harm that simply results from the personal injury. This is not a ‘pure’ economic risk. Where, on the other hand, a harmful recommendation is given by AI to consumers, resulting in these consumers buying overpriced products, the financial loss caused is not in any way connected with the materialisation of a physical risk, which is why the risk of causing such financial loss qualifies as a pure economic risk.

55 Christiane Wendehorst, ‘Liability for Pure Data Loss’ in Karner/Magnus/Spier/Widmer (eds), Essays in Honor of Helmut Koziol (Jan Sramek 2020), 62; Francesco Mezzanotte, ‘Liability for Digital Products’ in De Franceschi/Schulze (eds), Digital Revolution – New Challenges for Law (Nomos, Beck 2019), 181.
56 See Christiane Wendehorst, ‘Liability for Pure Data Loss’ in Karner/Magnus/Spier/Widmer (eds), Essays in Honor of Helmut Koziol (Jan Sramek 2020), 225; MüKoBGB/Wagner, 8. Aufl. 2020, BGB § 823 para 245 ff; Louisa Specht, Konsequenzen der Ökonomisierung informationeller Selbstbestimmung (Karl Heymanns Verlag 2012), 230; Florian Faust, ‘Digitale Wirtschaft – Analoges Recht: Braucht das BGB ein Update?’, in Ständige Deputation des Deutschen Juristentages (ed), Verhandlungen des 71. Deutschen Juristentages – Band I – Gutachten Teil A (Beck 2016), A48.
57 See Article 2:102(4) Principles of European Tort Law (PETL); Cees van Dam, European Tort Law (Oxford University Press 2006), 169.




Traditionally, the threshold for the law to provide compensation for pure economic loss (as the result of the materialisation of pure economic risks) is very high. Pure economic loss is not covered by the PLD, and national tort law systems are usually rather reluctant to grant such compensation.58

Social risks (often also called ‘fundamental rights risks’) include discrimination, exploitation, manipulation, humiliation, oppression and similar undesired effects that are – at least primarily – non-economic (non-pecuniary, non-material) in nature but that are not just the result of the materialisation of a physical risk either (as the latter would be dealt with under traditional regimes of compensation for pain and suffering, etc.). Such risks have traditionally been dealt with primarily by special legal regimes, such as data protection law, anti-discrimination law or, more recently, law against hate speech on the Internet and similar legal regimes.59 There is also a growing body of more traditional tort law that deals specifically with the infringement of personality rights.60 While the fundamental rights aspect of social risks is in the foreground, it should not be overlooked that these risks can be linked to economic risks, either for the affected individual or for society as a whole (e.g. HR software that favours male applicants creates a social risk by discriminating against female applicants, but also leads to adverse economic effects for the affected women).

Physical risks: death, personal injury or property damage.
Pure economic risks: economic risks that do not just follow from the materialisation of a physical risk.
Social risks: non-economic risks that do not just follow from the materialisation of a physical risk.


58 Gert Brüggemeier, Tort Law in the European Union (2nd edn, Wolters Kluwer 2018), para 385; Winiger/Koziol/Koch/Zimmermann (eds), Digest of European Tort Law Volume 2: Essential Cases on Damage (De Gruyter 2011), 383 et seq.
59 Article 82(1) Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 2016/119, 1; Article 8(2) Council Directive 2004/113/EC of 13 December 2004 implementing the principle of equal treatment between men and women in the access to and supply of goods and services, OJ L 2004/373, 37; Netzwerkdurchsetzungsgesetz, 1. September 2017 (BGBl. I S. 3352).
60 For an overview, see Gert Brüggemeier, Aurelia Colombi Ciacchi and Patrick O’Callaghan, Personality Rights in European Tort Law (Cambridge University Press 2010).



Adverse psychological effects can be either physical risks, where the effect is a diagnosed illness according to WHO criteria (such as depression), or social risks, where the effect is not a diagnosed illness, but, e.g., mere stress or anxiety. It is not always easy to draw a line between the two.61

3.1.3 Collective or systemic safety risks

Risks are individual when the potential harm almost exclusively affects one or several individuals (e.g. the victim of an accident, and perhaps her family) as contrasted with collective risks that primarily affect society or the economy as a whole (e.g. manipulation of voting behaviour) and are more than just the sum of individual risks. Some of these risks may be described as ‘systemic’ risks because they may seriously affect larger systems, such as the integrity of networks. Collective risks are difficult to classify into physical, pure economic and social risks, and many of these risks have elements of each category. For example, software used for trading securities may cause a stock exchange and possibly the whole economy to break down, which affects servers (and thus property, indicating a physical risk), leads to huge financial losses (a pure economic risk) and potentially a public loss of trust in the trading system (a social risk).

Individual risks that affect a large number of people may also become collective risks, e.g. the manipulation of a large number of consumers may have effects on our economy as a whole. This is why the added effect of many physical risks may also amount to a collective risk. For instance, where chatbot software spreads misinformation concerning the COVID-19 crisis on the Internet, causing one million individuals in a given country to fall ill, this may amount to more than just one million times the harm caused to one individual, as it may mean a collapse of the medical system and possibly of the economy as a whole. Collective risks must be considered on the safety side, and they call for special responses on the liability side, such as strong collective redress mechanisms (e.g. by consumers), administrative and criminal sanctions, etc.

3.1.4 Direct and intermediated safety risks

Another important differentiation is that between direct and intermediated risks. Risks are intermediated when the harm is caused by the free decision of a third party, which was, however, in some way instigated or facilitated by the software or other product. Risks that are not intermediated count as direct risks.

61 Cees van Dam, European Tort Law (Oxford University Press 2006), 147.



For instance, where a medical recommender system suggests a particular diagnosis and treatment to the doctor in charge, it is ultimately the doctor who makes the decision as to which is the right diagnosis and treatment, and it is the doctor who, if this diagnosis is wrong and the treatment causes harm to the patient, has directly caused the harm. However, it is also clear that the recommender system has created some sort of risk by suggesting the wrong diagnosis and treatment to the doctor. The risk of harm caused by cyber security gaps is likewise usually an intermediated risk, because it normally materialises only where malicious third parties intervene.

The example of cyber security risks serves to illustrate that intermediated risks must, at least in principle, be included in safety and liability regimes. If such risks were not included simply because they materialise only where a third party intervenes, the victim would normally go uncompensated as the hacker will often remain unidentified. The situation is more difficult with recommender systems, as it is usually an identified human actor who takes the full responsibility for the ultimate decision. All sorts of factors may influence human decisions, and we need to delineate relevant and irrelevant intermediated risks. For instance, the doctor’s husband may have caused the doctor to make the wrong decisions by breaking up the relationship and inflicting emotional stress on the doctor, but it is clear that the husband cannot be liable for the harm thus caused to the patient (and national tort law would avoid this result, using different lines of argumentation, including that this effect was too remote and that there was no specific duty on the part of the husband to protect the health of his wife’s patients). While this may be a rather clear-cut case, it is less clear whether the provider of an online medical database, which contains faulty information on the symptoms of a particular disease, thus prompting the doctor to make the wrong decision, can become liable for the harm thus caused to an individual patient. There is, at the moment, a case pending before the CJEU, and the Court will hopefully clarify in a preliminary ruling whether this type of scenario may lead to liability under the PLD.62

Arguably, a line needs to be drawn between functionality and mere content. Where a medical recommender system suggests a particular diagnosis and treatment, this suggestion is generated by the system and its specific functionality.

62 In the OGH judgment of 21 January 2020, 1 Ob 163/19f, the Austrian Supreme Court referred the question of whether a physical copy of a daily newspaper that contains an incorrect health tip, the compliance with which causes damage to health, can constitute a defective product, to the CJEU.



This can be compared to the functionality of a traditional medical device, such as a thermometer – if a thermometer falsely indicates that the patient’s body temperature is normal, while really the body temperature is 41 degrees Celsius, there would not be the slightest doubt that this potentially falls under the PLD, and the same should hold true for strict liability. Where, however, a medical journal publishes scientific articles online, and one of the articles includes wrong information, the functional equivalent would be a printed book and this would be merely a question of content displayed. Subject to what the CJEU will ultimately rule, the latter should not be part of safety and liability for software.

Having said this, intermediated risks cannot merely be dealt with in exactly the same manner as direct risks. For instance, where the producer is strictly liable, it must be able to rely on defences, e.g. there should not be liability in the case of recommender systems where it was entirely unreasonable for the person that made the decision to rely on the recommendation and such a use of the recommender system was not within the range of possible forms of use which the producer had to take into account.

Direct risks: risks that do not count as intermediated risks.
Intermediated risks: risks that materialise only in combination with the free decision of a third party, which was, however, instigated or facilitated in some way.

3.1.5 Characteristic, otherwise typical and atypical safety risks

Finally, there is also the differentiation between typical and atypical risks. A risk may be qualified as a characteristic risk where it is characteristic of the specific intended function, operation or use of a software or other product, e.g. a smart watering system floods the premises. A risk is not characteristic, but still otherwise typical, where that risk is not characteristic of the specific intended function, operation or use of a software or other product, but at least of the wider class of software or other products. Cyber security risks are an important example of such otherwise typical risks. For instance, due to a security gap, someone might hack the smart watering system, thereby gaining access to the entire smart home framework, deactivating the alarm system and committing a burglary. This is not characteristic of watering systems, because the same effect might have been achieved if the burglars had hacked a smart water kettle or fridge, but still this is a risk which is typical for software or software-enabled products that are connected. An atypical risk is a risk that is neither characteristic of the specific intended function, operation or use of the software or other product nor of the broader class of products, e.g. a person cuts their finger due to a sharp edge on the watering system’s handle.



While flooding the premises is characteristic of anything that includes water, and in particular the spreading of water, a person could cut their finger with thousands of different things that have handles, ranging from a bag to a vacuum cleaner.

The question whether a risk is typical or atypical is absorbed by other elements in the context of fault liability, such as reasonable foreseeability,63 probability64 or scope of the rule.65 Where the elements of fault liability are fulfilled, there is no reason why the atypical nature of the risk should give rise to an exclusion from liability, e.g. where somebody negligently produced the handle in such a way that it was foreseeable that other people would cut their fingers, there should be liability. The same holds true for defect liability, including product liability under the PLD, as it is the lack of safety the public at large is reasonably entitled to expect that gives rise to liability of the producer.66 The public expectation certainly encompasses that the handle of a watering system is not so sharp that it cuts the user’s hands. However, where there are particular ex ante safety procedures (e.g. certification) or strict liability in the proper sense, i.e. liability that depends neither on fault nor on the existence of a defect, the fact that a risk is atypical should play a role. For example, assuming that the EU legislator qualified AI-driven watering systems as ‘high-risk’ AI-systems (e.g. under the heading ‘autonomous robots’, as suggested by the first draft of an Annex to the relevant EP report67) and therefore attached strict liability to damage caused by AI-driven watering systems, this should definitely not include damage caused by a sharp edge on a handle.

3.2 Safety risk matrix

The following risk matrix, which follows from what has been stated above, is restricted to safety risks (excluding functionality risks, which are not within the scope of this Study).

63 Overseas Tankship (UK) Ltd v Morts Dock and Engineering Co (The Wagon Mound) [1961] AC 388.
64 Barnett v Chelsea and Kensington Hospital Management Committee [1968] 2 WLR 422.
65 BGH, 6.6.1989 – VI ZR 241/88 on the missing causal link between high blood pressure, excitement after an accident and a subsequent stroke.
66 Article 6(1) and Recital 6 Product Liability Directive; C‑503/13 and C‑504/13, Boston Scientific Medizintechnik, EU:C:2015:148, para 37, 38.
67 European Parliament Committee of Legal Affairs, ‘Draft Report with recommendations to the Commission on a Civil liability regime for artificial intelligence’ (2020/2014(INL)).



It provides illustrations for different categories of risks, depending on whether the risks are primarily of a physical, (purely) economic or social nature, whether they are typical or atypical of the relevant software or other product, and whether they are direct or intermediated. The matrix does not differentiate between individual and collective/systemic risks in order to reduce complexity, and also because most individual risks, when affecting a large number of individuals, evolve into risks that are also collective in nature.

Direct typical risk: cleaning robot hits passerby and causes personal injury (physical risk); software agent buys overpriced items and causes financial loss (pure economic risk); hate speech by bots causes humiliation and social unrest (social risk).

Intermediated characteristic risk: recommendation from medical software causes health problem (physical risk); price comparison software gives poor recommendations to consumers (pure economic risk); HR recommender software scores female applicants lower than male applicants (social risk).

Intermediated otherwise typical risk: security gap in smart heating system facilitates burglary (physical risk); security gap in email software facilitates fraud by third parties (pure economic risk); AI system in the workplace leads to stress/anxiety (social risk).

Atypical risk: sharp handle of robot causes injury (physical risk); behaviour of software agent causes lowering of user’s credit score (pure economic risk); online gaming passion of spouse leads to break-up of marriage (social risk).

3.3 Summary of findings

Risks associated with software can be categorised according to a variety of different criteria, many of which are relevant for safety and liability regimes. ‘Physical risks’ are largely death, personal injury, and damage to property. They must be understood broadly, with personal injury including psychological harm that amounts to a recognised state of illness (e.g. depression), and damage to property including harm to the victim’s digital environment, including any of the victim’s data. ‘Pure economic risks’ are economic risks that are not the result of the materialisation of physical risks, e.g. economically harmful recommendations given to consumers. ‘Social risks’ (often also called ‘fundamental rights risks’) may lead to harm of a primarily non-material nature caused to individuals, groups or the public at large (such as discrimination, manipulation, privacy infringements, humiliation or oppression), which is not just the result of the materialisation of physical risks.



Risks may be individual or collective, the latter being risks that affect a multitude of individuals or the public at large, resulting in harm that goes beyond the added harm of each of the individuals affected. The latter often cannot clearly be classified as physical, purely economic or social, and often have elements of more than one of these categories. For many collective risks, the description as ‘systemic risks’ is adequate.

Risks may also be direct or intermediated. Intermediated risks are risks that materialise only in combination with the free decision of a third party, which was, however, instigated or facilitated in some way. Intermediated risks that have their origin in the functionality of software should count as relevant risks in a framework of safety and liability with regard to software, whereas the mere display of content (that then instigates a person to make a harmful decision) should not count as a relevant risk. This means that risks caused by recommender systems are relevant risks, considering, in particular, automation bias. Last but not least, risks may be characteristic, otherwise typical or atypical, depending on whether or not they are characteristic of the specific intended function, operation or use of a software or other product.

4. Responses

Both safety and liability are responses to the risk of adverse events, whose materialisation is to be avoided or compensated. The legislator has to decide what counts as an adverse event and as a relevant risk, e.g. only death, personal injury and damage to property, or also pure economic loss, discrimination or infringements of privacy or other personality rights, or even non-material damage (such as stress and anxiety) and any economic loss resulting therefrom. In a second step, the legislator has to define the appropriate response. There are different types of responses, and responses can be classified according to a range of criteria.

4.1 Classification of responses

4.1.1 Ex ante responses and ex post responses

Responses are often divided into ex ante responses and ex post responses, but the term ‘ex post responses’ is used with two different meanings, i.e. some understand it as post-market or post-delivery, and others as post-accident.68

68 Cf. Commission, ‘White Paper on Artificial Intelligence – A European approach to excellence and trust’ COM(2020) 65 final and EY, Technopolis and VVA ‘Evaluation of Council Directive 85/374/EEC on the approximation of laws, regulations and administrative provisions of the Member States concerning liability for defective products, Final Report’ (Commission 2018), 4; distinguishing between the two meanings Shanta Marjolein Singh, ‘What Is the Best Way to Supervise the Quality of Medical Devices? Searching for a Balance Between Ex-Ante and Ex-Post Regulation’ (2013) 4 European Journal of Risk Regulation 465, 469.



A range of safety measures, including mandatory requirements and procedures (risk assessment, certification, etc.), aim at preventing both risks from being created by putting unsafe software or other products into circulation, and harm from being caused. They are thus clearly ex ante measures. Where a product has been placed on the market, but later a risk becomes known, this is ideally detected in the course of post-market surveillance and then triggers appropriate action before the risk materialises. Such measures are ‘ex post’ measures in the sense that they are taken after the product has been put into circulation, but they are ‘ex ante’ measures in the sense that they are taken before harm is caused. We may call them ‘ex post safety measures’, and the former type of safety measures ‘ex ante safety measures’. Where harm has already been caused, further safety measures would no longer be effective in the individual case at hand (although they may of course still prevent harm from occurring in other cases), and the question of liability arises. Liability aims at compensating harm that has been caused69 and is therefore an ex post measure, as it comes into play both after the product has been put into circulation and the risk has already materialised. However, even liability may also serve as an incentive for taking precautionary measures in order to avoid liability in the first place, so there is a certain ex ante aspect even to liability.70

Approaches as to the right balance between ex ante safety, ex post safety and liability differ. A purely economic approach, which has been the prevailing approach, e.g., in the US, insists that safety measures must only be taken to an extent that the overall cost of these measures is still lower than the overall cost of harm likely to be caused71 (cf., e.g., the ‘Learned Hand Formula’ for ascertaining the appropriate level of care72).
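In algebraic form, the Hand formula is commonly summarised as follows (a standard textbook rendering, added here purely for illustration; it is not a quotation from the judgment or from the cited literature):

\[
B < P \cdot L
\]

where \(B\) denotes the burden (cost) of adequate precautions, \(P\) the probability that the harm occurs, and \(L\) the gravity of the resulting loss. Negligence liability attaches where the burden of precautions is smaller than the expected harm, i.e. where \(B < P \cdot L\); conversely, under the purely economic reading, precautions whose cost exceeds the expected harm need not be taken.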





69 See Article 10:101 and Article 10:104 Principles of European Tort Law (PETL); Article VI. – 6:101 Draft Common Frame of Reference (DCFR).
70 Article 10:101 PETL: ‘Damages also serve the aim of preventing harm’. However, this effect should not be mistaken for punitive damages, which are not recognised in most European legal systems.
71 On this discussion, Charles Kolstad, Thomas Ulen and Gary Johnson, ‘Ex Post Liability for Harm vs. Ex Ante Safety Regulation: Substitutes or Complements?’ (1990) 80 The American Economic Review 888.
72 This term was coined by Judge Learned Hand, who stated in the case U.S. v. Carroll Towing, 159 F.2d 169, 2d Cir. 1947: ‘[T]he owner’s duty, as in other similar situations, to provide against resulting injuries is a function of three variables: (1) The probability that she will break away; (2) the gravity of the resulting injury, if she does; (3) the burden of adequate precautions’.




Where, however, the cost of precautionary measures would exceed the overall cost of harm caused, such measures need not, or should not, be taken, because simply letting harm occur and compensating victims later would serve efficiency.73 Some would go as far as saying that this holds true even where no compensation is made (Kaldor-Hicks formula74). Europe has always taken a different path, for various reasons, including that death, personal injury, and (other) fundamental rights infringements cannot simply be reduced to a monetary figure and that the purely economic approach often fails to take into account the real cost of accidents, e.g. the economic harm caused by a general lack of trust on the part of consumers and other collective harm and social concerns.75

It is in particular in connection with the Medical Device Regulation (below 5.1.2) that there has, however, been a debate about the proper role of economic arguments and of ALARP vs AFAP as the right approach to risk and safety management. The acronym ALARP stands for ‘as low as reasonably practicable’ and is broadly the same as SFAIRP (‘so far as is reasonably practicable’) or ALARA (‘as low as reasonably achievable’).76 This principle claims that the residual risk should be reduced as far as is reasonably practicable by the regulation and management of safety-critical and safety-involved systems. A risk is ALARP when the cost involved in reducing the risk further would be grossly disproportionate to the benefit gained.77 While ALARP clearly demands much more than a simple quantitative comparison of cost, it says it is unreasonable to spend infinite resources in the attempt to reduce a risk to zero.
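One way to schematise the ALARP test just described (the notation is introduced here for illustration only and is not drawn from the cited sources): an additional risk-reduction measure may be forgone only where its cost is grossly disproportionate to the risk reduction it achieves,

\[
C_{\text{measure}} \gg d \cdot \Delta R ,
\]

where \(C_{\text{measure}}\) is the sacrifice involved in the measure (in money, time or trouble), \(\Delta R\) the reduction in expected harm it would bring about, and \(d > 1\) an illustrative ‘gross disproportion’ factor. By contrast, AFAP as described below would require the measure whenever \(\Delta R > 0\), irrespective of cost, unless a more effective but incompatible risk control is taken instead.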

73 E.g. Martin Weitzman, ‘Prices vs. Quantities’ (1974) October, 41 The Review of Economic Studies 477.
74 The Kaldor-Hicks formula has been developed by Nicholas Kaldor, ‘Welfare Propositions of Economics and Interpersonal Comparisons of Utility’ (1939) 49 The Economic Journal 549 and JR Hicks, ‘The Foundations of Welfare Economics’ (1939) 49 The Economic Journal 696. It states – in simple words – that an outcome is an improvement if those that are made better off could hypothetically compensate those that are made worse off.
75 See Commission, ‘Communication from the Commission on the precautionary principle’ COM (2000) 1 final.
76 Michael Jones-Lee and Terje Aven, ‘ALARP – What Does It Really Mean?’ (2011) 96 Reliability Engineering & System Safety 877, 877.
77 Edwards v National Coal Board [1949] 1 All ER 743 CA: “‘Reasonably practicable’ is a narrower term than ‘physically possible’ and seems to me to imply that a computation must be made by the owner, in which the quantum of risk is placed on one scale and the sacrifice involved in the measures necessary for averting the risk (whether in money, time or trouble) is placed on the other; and that if it be shown that there is a gross disproportion between them – the risk being insignificant in relation to the sacrifice – the Defendants discharge the onus on them”.



AFAP (‘as far as possible’), on the other hand, is more demanding. Under this approach, all possible precautionary measures have to be taken unless one of two defined conditions is met: either the additional precautionary measure does not further enhance the safety of the product, or there is a more effective risk control that cannot be simultaneously executed. So, in short, while under ALARP there may be both technical and economic arguments for not applying a particular risk control mechanism, AFAP would only accept technical arguments.

4.1.2 Old and New Approach to safety legislation

Until a certain point in time, EU product safety legislation used to go into great detail with all the technical specifications in the legislation itself. Consequently, any amendments to the technical requirements had to be introduced by way of legislative amendments. The focus on technical specifications led to laborious and time-consuming procedures, which rendered this so-called ‘Old Approach’ to product safety too inflexible to react to new market developments and too slow to efficiently reduce technical barriers to trade on the internal market.78 Moreover, a missing link between technical regulations and standardisations led to duplications and inconsistencies.79

The path towards what is still called the ‘New Approach’ (although the change occurred decades ago) was paved by the CJEU’s landmark decision Cassis de Dijon.80 The court in Luxembourg held that products which have lawfully been produced and marketed according to the regulations of a Member State should be able to circulate freely within the territory of the EU. Member States can only uphold import restrictions if it is demonstrated that they are justified by ‘mandatory requirements’, such as health, safety, or consumer protection and that the measures are proportionate to achieve those objectives.81

78 Commission, ‘The ‘Blue Guide’ on the implementation of EU product rules’ (2014).
79 Jacques Pelkmans, ‘The New Approach to Technical Harmonization and Standardization’ (1987) 3 Journal of Common Market Studies 251, 253.
80 C-120/78 Rewe-Zentral AG v Bundesmonopolverwaltung für Branntwein [1979] CJEU, EU:C:1979:42.
81 C-120/78 Rewe-Zentral AG v Bundesmonopolverwaltung für Branntwein [1979] CJEU, EU:C:1979:42, para 8 – 15.
82 Council Resolution of 7 May 1985 on a new approach to technical harmonization and standards OJ C 136/1.



As a reaction to the CJEU’s judgment, the ‘New Approach to Harmonisation and Technical Standards’82 was adopted, according to which harmonisation measures in the field of product safety should only lay down the essential requirements (i.e. mandatory requirements in the words of the Court) that a product needs to meet in order to benefit from the free movement of goods.83 In order for manufacturers to be able to demonstrate compliance with the essential requirements, authorised European standardisation organisations (CEN, CENELEC and ETSI)84 draw up harmonised technical specifications. Compliance with these standards leads to the presumption of conformity with the corresponding essential requirements. However, the harmonised standards have a voluntary character; manufacturers may choose to apply their own or different technical specifications, but they need to demonstrate that these specifications respond to the essential requirements. This separation of essential requirements and technical specification is a key characteristic of the New Approach.85

The current procedure for European standardisation organisations (ESOs) for creating technical specifications is laid down in Regulation 1025/2012. Based on the annual Union work programme for European standardisation, which identifies strategic priorities, the European Commission may request one or several ESOs to draft European standards in support of legislation.86 The Commission has to ensure the consultation of relevant stakeholders, such as industry and consumer associations, civil society and Member States in this process. Before the request is formally transmitted to the ESO, it needs to be approved by a majority vote of the Committee on Standards, which is composed of representatives of Member States.87 The mandate is adopted as a Commission Implementing Decision; it includes a deadline and can, in theory, be rejected by the ESO. In the ICT sector, the Commission can decide to recognise privately developed ICT specifications, which can then be referenced in public procurement.88 Private specifications can only be recognised if they fulfil certain criteria, such as being publicly available, ongoing maintenance of the specifications and that they were drawn up in a process that is based on openness, consensus and transparency.89 Private standards play an important role in the manufacturing of ICT products,90 due to the fast changes and developments in this sector.

83 Annex II, V., Council Resolution of 7 May 1985 on a new approach to technical harmonization and standards OJ C 136/1.
84 Annex I, Regulation 1025/2012.
85 Jacques Pelkmans, ‘Mutual Recognition in Goods and Services: An Economic Perspective’ (2003) Working Paper 16, European Network of Economic Policy Research Institutes, 6.
86 Article 10 Regulation 1025/2012.
87 Article 10(2) Regulation 1025/2012 and Article 5 Regulation (EU) No 182/2011.
88 Article 13 Regulation 1025/2012.
89 Annex II No. 13 Regulation 1025/2012.
90 See e.g. German Standardization Panel, ‘Indicator Report’ (2014), 11.



To keep pace with technological developments, industry does not always wait for formal standards to be adopted but draws up its own specifications.91 Next to the already existing standards in the ICT sector,92 the Cybersecurity Act established a framework for the cybersecurity certification of ICT products, processes and services (see below 5.4.2). ENISA has been entrusted with drawing up cybersecurity certification schemes, which may then be implemented by the Commission. According to Article 56, the adopted cybersecurity certification schemes will remain voluntary for now. However, by the end of 2023, the Commission will evaluate whether and to what extent the adopted cybersecurity schemes shall become mandatory.

4.1.3 Positive (‘PRRP’) vs negative (‘blacklisting’) approaches

When regulating a particular area, the European legislator may formulate mandatory safety requirements that need to be fulfilled for some activity to be lawful (and possibly attach liability to any harm caused by a failure to implement the requirements). Normally, these mandatory requirements take the form of principles (e.g. the principles for the processing of personal data in Article 5 GDPR), rules (e.g. the requirement of a legal ground under Article 6 GDPR), rights (e.g. the data subject’s rights), and procedures (e.g. a data protection impact assessment, documentation). This ‘PRRP approach’, which has been taken, for example, by the GDPR, has a number of benefits, including: (i) it sends a positive message and is of high symbolic value; and (ii) it is relatively straightforward to formulate. The major downside, however, is that it tends to create a lot of red tape and involves considerable extra costs, potentially being to the detriment of SMEs and enhancing the competitive advantages and market power of the big players.

An alternative to this PRRP approach is blacklisting, which is often combined with a general clause. This second regulatory technique has successfully been applied, e.g., to unfair contract terms control in consumer contracts (Directive 93/13/EC93) and unfair commercial practices (Directive 2005/29/EC94).







91 DLA Piper and TU Delft, ‘EU Study on the specific policy needs for ICT standardisation’ (Commission 2007), 15.
92 Available at accessed 6 October 2020.
93 Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts, OJ L 1993/95, 29.
94 Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC and 2002/65/EC of the European Parliament and of the Council and Regulation (EC) No 2006/2004 of the European Parliament and of the Council, OJ L 2005/149, 22.



Blacklisting means the regulator mainly restricts itself to stating what should definitely NOT be done, and possibly combining this with a fall-back option for similar cases in order to prevent circumvention and obvious gaps. The main drawbacks of this approach are: (i) it tends to send a ‘negative message’, i.e. is much more difficult to defend from a PR perspective; and (ii) it is relatively difficult to get the formulation of the blacklisted practices right. A major advantage of this approach, however, is that it hits in a much more targeted manner precisely what should be avoided, e.g. because of its inconsistency with fundamental European values, and leaves full freedom otherwise. It is also easier to adapt to changing developments. This approach may thus be more beneficial for innovation, in particular by SMEs. In the context of AI, for example, the European legislator may wish to consider whether – at least to a certain extent – an ‘unfair AI practices approach’ might be better for an innovation-friendly environment in Europe than a comprehensive PRRP framework.95



4.1.4 Horizontal and sectoral responses Responses are very often divided into horizontal (or: cross-sectoral) and vertical (or: sectoral). This division primarily relates to the scope of application of the relevant response, such as the legislative instrument. Responses whose scope is defined in very broad and generic terms, such as (almost) all products, all contracts, all consumer contracts, all commercial practices, all data, etc. are usually referred to as ‘horizontal’. Responses whose scope is defined more narrowly, in particular by reference to a sector of the economy, are referred to as ‘sectoral’. Sectoral responses would, e.g., apply only to toys, or only to consumer credit agreements, or only to package holidays. At a closer look, however, it appears that there is more of a spectrum of different approaches than a clear divide. For example, the Machinery Directive (below 5.1.4) or Radio Equipment Directive (below 5.1.3) are ‘sectoral’ when compared with the General Product Safety Directive (below 5.1.1), because they apply only to machinery or only to radio equipment, but both of these categories are quite broad, and machinery and radio equipment is used across all sectors of the economy, and for private use as well as for business purposes.

95 Jens-Peter Schneider and Christiane Wendehorst, ‘Response to the public consultation on the White Paper: On Artificial Intelligence – A European approach to excellence and trust, COM(2020) 65 final’ (ELI 2020).



Whether software-related legislation with regard to safety and liability should have a broad scope or a narrow scope is determined by a number of different considerations. They include the consideration whether the safety requirements or liability imposed are important for a broad range or only a narrow range of products or activities, taking into account that any requirements or liability are likely to enhance overall safety and foster user trust. For instance, measures against cybersecurity risks are important for more or less all types of software and softwareenabled products and services, because poorly secured software or other products always pose risks of causing harm, no matter whether the object at hand is a smart water kettle or a medical device. However, safety requirements also cause additional costs and may prove to be a market entry barrier, in particular for SMEs. This is why a second consideration must always be whether safety requirements or liability imposed are proportionate for a broad range or only a narrow range of products or activities. For instance, while an AFAP approach may be proportionate for medical devices, ALARP might be much more appropriate for water kettles. Apart from these substantive considerations, there are also more formal considerations of legislative practicality and good drafting. A horizontal instrument makes sense only where more or less the same rules apply across the scope of the whole instrument and there are only relatively few deviations within the scope of application. Conversely, there is not much point in creating a horizontal instrument where, immediately after the introductory provisions, the instrument would have to be broken into different parts with respective different scopes, so that the whole instrument would look more like many sectoral instruments in a row and copied into one single document.

4.1.5 More or less risk-based (targeted) approaches 4.1.5.1 General meaning of ‘risk-based’ The decision between horizontal or sectoral legislation is closely connected (and largely overlapping) with the idea of a risk-based approach to regulation. Calling for a risk-based approach is basically making a proportionality argument, claiming that safety requirements as well as liability should only be imposed to the extent that this is justified by the risk posed.96 Where the risk posed differs sig-

96 See e.g. Henry Rothstein, Olivier Borraz and Michael Huber, ‘Risk and the Limits of Governance: Exploring Varied Patterns of Risk-Based Governance across Europe’ (2013) 7 Regulation & Governance 215; Risk-based regulation is also a key element of the better regulation agenda of the Wendehorst/Duller, Safety- and Liability-Related Aspects of Software

4.1 Classification of responses

233

nificantly across the potential scope of legislation, this is either an argument for taking a sectoral approach right away, or for otherwise taking more targeted action, such as by differentiating between different levels of safety or different regimes of liability within the scope of an instrument.97 An example for an instrument that takes a not very risk-based approach is the General Data Protection Regulation (GDPR).98 While it does refer to risks and safeguards in various places, and while some measures are restricted in scope, it still applies more or less the same basic set of rules across the board, be it data processing activities by a school’s parents’ association or by the operator of a worldwide social network. There are various techniques for achieving a more risk-based approach, such as exclusions from scope (e.g. low-value transactions below a particular threshold amount, vehicles below a particular maximum speed) or setting up specific conditions for specific measures within an instrument (e.g. restricting the duty to conduct an impact assessment to bigger cases). Where a very broad range of different risk classes needs to be tackled, separate risk classification may be an option, which is a technique used by the Medical Device Regulation (below 5.1.2). A similar technique that is currently being discussed in the context of AI liability is a combination of a general legal instrument that provides, in a rather abstract manner, provisions for ‘high-risk’ applications and for other applications.99 What counts as ‘high-risk’ follows from an enumerative list in an Annex, which may be updated at regular intervals by delegated acts or in similar ways.100 The list of ‘high-risk’ applications could either refer to sectors (such as energy or transport) or to types of applications (such as HR software or personal pricing software), or to both.101

OECD and many of its member states, see OECD, ‘Recommendation of the Council on Regulatory Policy and Governance’ (2012). 97 Suggested for example by the Expert Group on Liability and New Technologies – New Technologies Formation, ‘Liability for AI and other digital technologies’ (Commission 2019). 98 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC, OJ L 2016/119, 1. 99 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL). 100 EP Resolution on AI liability (fn 99) Recommendation to the Commission no 16. 101 See as an example, Annex II and III of Directive (EU) 2016/1148 of the European Parliament and of the Council of 6 July 2016 concerning measures for a high common level of security of network and information systems across the Union, OJ L 2016/194, 1. Wendehorst/Duller, Safety- and Liability-Related Aspects of Software

234

4. Responses

4.1.5.2 Risk-based safety: reducing the ‘risk of hitting a case with no risk’ The need to reduce the scope of a precautionary measure that has been found to be generally suitable and necessary for mitigating a risk is not the same in all cases and for all safety responses. Rather, this need depends on the following factors: (a) the gravity of the initial risk which the safety response is suitable to mitigate or eliminate; (b) the likelihood that a safety measure will be unnecessary because it applies in a situation where the initial risk does not exist; (c) the extent to which the measure at hand is a burden for those affected by it (e. g. in terms of cost or market entry barriers); and (d) considerations of clarity, certainty and practicality of the law.  

Where, for instance, legislation restricts itself to very general essential requirements that express a goal everyone will agree on, there is no necessity to reduce the scope, and the scope should not be reduced. For example, we probably all agree that software should be safe and remain safe throughout its lifecycle. Where software is unsafe, this is equivalent to saying there is a risk of harm, so, if we extend this requirement to all software, it is hardly possible we capture a case where there was really no risk. In other words, the value of (b) is zero, (d) is an argument against differentiation anyway, and (a) and (c) can be balanced individually (depending inter alia on whether we take an ALARP or AFAP approach). This requirement must therefore apply to all software within the scope of application of the relevant instrument. The situation is similar where legislation prohibits particular practices we agree we do not want (‘blacklisting’). For instance, legislation might say that discrimination and manipulation by way of software are prohibited, and list more concrete blacklisted practices.102 There is little need to restrict this in scope because the risk lies in the unwanted result – again, there is no danger of capturing a case without risk, so the value of (b) is again zero. The value of (c) is also very low because not being allowed to do really bad things is not a huge burden. However, there may be cases where such blacklisted practices are very difficult to ascertain, so (d) together with (a) may nevertheless be a reason to restrict the scope to particular broader areas. The situation is different where safety legislation provides for very concrete mandatory requirements that prescribe means rather than goals. For instance, the Medical Device Regulation (below 5.1.2) lists a host of very concrete re-

102 See with regard to commercial practices, Directive 2005/29/EC. Wendehorst/Duller, Safety- and Liability-Related Aspects of Software

4.1 Classification of responses

235

quirements concerning all sorts of aspects. Such requirements will always capture a number of cases where really no risk at all exists, because not all cases will require the same measures, so the value of (b) is high. The value of (c) is likewise quite high. At the end of the day, there is definitely a need to restrict the scope of the Medical Device Regulation (both externally, i.e. this instrument should never be applied to all products, and internally, i.e. there must be different requirements in accordance with different risk classifications) in the light of the relevant value of (a). The need to restrict the scope is usually highest where not only mandatory requirements in the form of concrete measures are imposed but also burdensome procedures (e.g. impact assessment, third party certification) because, necessarily, this will capture many cases where there was no risk of noncompliance, so the value of (b) is usually very high, and so is often the value of (c). It may therefore be justified or even necessary to restrict such procedures to a rather narrow scope. The relationship between these four (illustrative) types of measures and the scope of application is reflected in the following chart:

Procedures

Regulatory Measure

A risk-based approach to safety

Concrete mandatory requirements prescribing particular means

Blacklisting of unwanted practices

General essential requirements expressing a goal / Principles

Scope

4.1.5.3 Risk-based liability Similar considerations as for safety apply to liability. This may sound surprising at first sight because, where harm has already occurred, a risk has obviously materialised, so it is impossible to capture a case where really no risk existed. However, also liability creates costs for those who are liable, be it the cost of having to pay compensation or the cost of taking out insurance. The need to reduce the scope of liability regimes, which are generally justified according to general prinWendehorst/Duller, Safety- and Liability-Related Aspects of Software

236

4. Responses

ciples of the law of non-contractual liability, may depend, e.g., on the following factors: (a) the gravity of the initial risk created by the software or other product placed on the market by a particular party; (b) unavoidability from the perspective of the addressee of liability, i.e. the likelihood that such a party becomes liable despite having taken optimal safety precautions; and (c) considerations of clarity, certainty and practicality of the law. Considering these factors, it becomes clear that there is never a reason to reduce the scope of fault liability, because where the relevant party is at fault, the value of (b) is automatically zero, and (c) militates against differentiations anyway, so fault liability must apply in all cases. With forms of defect liability that require proof of both a defect and causation (such as product liability), i.e. where it is clear the software or another product placed on the market is defective and has caused the damage, the value of (b) is very low, although there is still a certain likelihood that the relevant party becomes liable despite optimal safety precautions. The scope may therefore be rather broad. Where, however, proof of a defect and causation is not required, the likelihood that the relevant party will become liable even though, e.g., really a different risk has materialised, are quite high, so such liability should have a much narrower scope. This risk is particularly high for strict liability, i.e. where the victim does not even have to prove that there has been a defect. This is why the scope of strict liability needs to be very narrow and restricted to cases where the value of (a) is particularly high and possibly further considerations under (c), such as to foster trust in a new technology, militate in favour of its introduction.


[Chart: ‘A risk-based approach to liability’, plotting the type of regulatory measure (strict liability; liability for defective product with presumed causation; liability for defective product with proven causation; fault liability) against the appropriate breadth of scope of application]
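The reasoning behind the two charts can also be restated in a more operational form. The following Python sketch is purely illustrative and forms no part of the legal analysis; it merely encodes how, on the liability side, the proof requirements of a regime drive factor (b) and hence the suggested breadth of scope (the parameter names and return values are simplifications invented for this example).

```python
# Illustrative sketch only: it restates the reasoning of section 4.1.5.3 in a
# simplified, pseudo-operational form and is not derived from any legislative text.

def suggested_scope(requires_proof_of_fault: bool,
                    requires_proof_of_defect: bool,
                    requires_proof_of_causation: bool) -> str:
    """Map the proof requirements of a liability regime to the breadth of scope
    suggested above, driven by factor (b): the likelihood that a party becomes
    liable despite having taken optimal safety precautions."""
    if requires_proof_of_fault:
        # Where fault must be proven, factor (b) is automatically zero,
        # so fault liability may apply in all cases.
        return "apply in all cases (fault liability)"
    if requires_proof_of_defect and requires_proof_of_causation:
        # (b) is very low: liability presupposes a proven defect that caused the damage.
        return "rather broad scope (defect liability with proven causation)"
    if requires_proof_of_defect:
        # Causation is presumed, so the 'wrong' risk may be captured more often.
        return "much narrower scope (defect liability with presumed causation)"
    # Neither fault nor defect needs to be proven: (b) is high, so the scope
    # should be confined to cases where the initial risk (a) is particularly grave.
    return "very narrow scope (strict liability)"


if __name__ == "__main__":
    print(suggested_scope(False, False, False))  # strict liability -> very narrow scope
```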

4.1.6 Technologically neutral and technology-specific responses 4.1.6.1 General significance Technological neutrality – together with functional equivalence and non-discrimination of digital as against analogue solutions – has always been a cornerstone of good legislation when it comes to grappling with new technological developments.103 The main reason for this is that technology develops fast and laws that are too closely tailored to very specific technologies tend to have obvious gaps, or to be otherwise outdated, very soon after they are passed (and sometimes even before they enter into force). Also, addressing very specific technological phenomena tends to increase fragmentation of the law and the risk of inconsistent results.

4.1.6.2 Physical risks It is in the light of these considerations that physical risks – such as death, personal injury or damage to property – should be addressed by traditional types of safety and liability legislation irrespective of whether they are created by well-

103 UNCITRAL Model Law on Electronic Commerce (1996) with additional article 5 bis as adopted in 1998; Principles and Guidelines for the Community’s Audiovisual Policy in the Digital Age, COM(1999) 657 final: ‘This implies a need for technological neutrality in regulation: identical services should in principle be regulated in the same way, regardless of their means of transmission’.


known or by emerging technologies, such as AI.104 This includes concerns related to, for example, the safety of self-driving vehicles, drones or medical and care robots and liability where damage has been caused by the operation of such devices. While the opacity, complexity, unpredictability, etc. that comes with AI may certainly add to the risk created by such devices, the risk as such is of a very traditional type. In other words, other technical developments could have a similar effect on ‘physical’ risks, and the effect on a victim may be the same, irrespective of whether the traffic accident that killed them was caused by AI, or by any other vehicle component. The existing traditional types of legislation, such as the General Product Safety Directive (below 5.1.1), the Product Liability Directive (below 5.3), and sectoral instruments must, however, undergo a ‘digital fitness check’. Also, the emergence of technologies such as AI may induce the EU legislator to take particular action and to harmonise some aspects of traditional types of safety and liability rules. For instance, there may suddenly be good reasons for the extension of the scope of strict liability regimes, or introduction of new strict liability regimes, where AI makes the risk involved rise above the critical threshold (e.g. slow vehicles, such as big cleaning robots in public spaces, may suddenly pose a risk comparable to the risk that used to be posed only by fast vehicles). Also, the emergence of AI may force the legislator to re-consider elements clearly connected with human agency (e.g. extension of vicarious liability for human auxiliaries to harm caused by AI that replaces human auxiliaries). However, within these measures, there is a need to be as technologically neutral as possible, i.e. to refer to specific technologies, such as AI, only where this is absolutely necessary (such as in technical standards for the safety and assessment of self-learning systems, or within a risk classification system), and to choose a definition of AI that is in itself as technologically neutral as possible (e.g. that does not refer to particular forms of machine learning).

4.1.6.3 Other than physical risks The need for technological neutrality differs with regard to other risks. Some pure economic risks, such as manipulation of consumer choice with the help of particular targeted software tools, may best be dealt with in existing legal frameworks. There seems to be a great need for running a ‘digital fitness check’ on the lists of

104 See also Expert Group on Liability and New Technologies – New Technologies Formation, ‘Liability for AI and other digital technologies’ (Commission 2019).


blacklisted practices, such as in the Annexes to the Unfair Commercial Practices Directive or the Unfair Contract Terms Directive.105 Social risks created by software are currently addressed only here and there, such as by data protection and anti-discrimination law, which leave many gaps and are not very well suited to tackling the challenges posed by new technologies such as AI. Many risks are in fact characteristic of particular technologies, such as AI (cf., e.g., AI used for purposes such as HR decisions, personalised pricing, personalised news feeds, or predictive policing). This is where AI-specific regulatory components, such as ensuring the inclusiveness of training data, ensuring that decisions are explainable, information duties, impact assessment, human oversight, etc., are fully justified, and these issues should be addressed within the New Regulatory Framework for AI (below 5.4.1). However, it is again important to choose a definition of AI that is in itself as technologically neutral as possible (e.g. one that is not restricted to software created by machine learning).

4.2 Risk-response matrix The following risk-response matrix indicates how regulatory responses might differ depending on the type of risk identified in the safety risk matrix (above 3.2). It provides indications as to the general way in which particular categories of risk should be dealt with. The matrix does not directly indicate whether risks should be dealt with in a more horizontal or more sectoral manner, and how exactly to achieve a risk-based approach, etc. This is in order to reduce complexity, but also because different options exist and the legislator must choose between them.

105 Directive 93/13/EEC and Directive 2005/29/EC.


Risk-response matrix

Type of risk and the legislative framework primarily concerned:
– Physical risks: traditional types of safety & liability frameworks
– Pure economic risks: liability attached to mixed types of legislation (UCPD, consumer law, NLFAI)
– Social risks: liability attached to PRRP or blacklisting approach in NLFAI

Direct (typical) risk – e.g. cleaning robot hits passerby and causes personal injury (physical); software agent buys overpriced items and causes financial loss (economic); hate speech by bots causes humiliation and social unrest (social). To be addressed in any case by safety & liability regimes (but there may be specificities for intermediated risks, e.g. specific defences against strict liability).

Intermediated characteristic risk – e.g. recommendation from medical software causes health problem (physical); price comparison software gives poor recommendations to consumers (economic); HR recommender software scores female applicants lower than male applicants (social). To be addressed by safety & liability regimes if risk is sufficiently grave.

Intermediated otherwise typical risk – e.g. security gap in smart heating system facilitates burglary (physical); security gap in email software facilitates fraud by third parties (economic); AI system in the workplace leads to stress/anxiety (social). May be omitted for certain safety procedures and for strict liability.

Atypical risk – e.g. sharp handle of robot causes injury (physical); behaviour of software agent causes lowering of user’s credit score (economic); online gaming passion of spouse leads to break-up of marriage (social).

Christiane Wendehorst

4.3 Summary of findings The responses to risks, too, can be classified according to different criteria. Generally speaking, a legal system needs to decide how to strike the balance between safety and liability. Europe has always opted against mere considerations of efficiency, but there is still a broad margin of legislative discretion between an ALARP and an AFAP approach to risk control. The ‘New Approach’ to safety legislation is, in any case, preferable in order to achieve better coherence and flexibility and a more appropriate division of tasks between the legislator, experts and stakeholders. The differentiation between horizontal (cross-sectoral) and sectoral (vertical) responses, like that between different degrees of risk adaptation, relates to the proportionality of a measure. Whether to have legislation with a narrow scope or with a broad scope, and, in the latter case, whether various techniques of risk-based differentiation are applied within the scope of the instrument, depends on a number of factors. These include the gravity of the initial risk that was the reason to introduce legislation, the burden implied by the measure taken, the likelihood of capturing ‘the wrong cases’, and considerations of practicality, clarity and certainty of the law.


Generally speaking, the law can focus on what must be done (PRRP approach) or on what must not be done or achieved (blacklisting approach). There is always a need for PRRP, but there are arguments for making more use of the blacklisting approach than has been made in the past, as it tends to hit risky conduct in a more targeted manner. Responses to risk should be as technologically neutral as possible. This means that preference should be given to conducting a ‘digital fitness check’ on existing legal frameworks and adapting them where necessary and possible, while new legal frameworks should preferably be introduced where the risks themselves are of an entirely new nature, such as the social risks created by AI. Within any legal framework, definitions, etc. should be as technologically neutral as possible, i.e. a technologically neutral notion of ‘software’ or ‘AI’ is preferable to one that refers to particular technological features (such as machine learning).

5. Analysis of the status quo – is the acquis fit for software? 5.1 Product safety 5.1.1 General Product Safety Directive The General Product Safety Directive (GPSD)106 of 2001 establishes duties for producers and, to a certain extent, for distributors of consumer products of any kind that are made available in the course of a commercial activity. The duties, in essence, require producers and distributors to ensure that items on sale are safe and to take corrective action when that is found not to be the case. In 2013, a Proposal for a Regulation on Consumer Product Safety107 was published that was intended to repeal the GPSD. Work on this was later put on hold. According to the 2020 Commission Work Programme, the GPSD is instead to be replaced by a new Proposal that also tackles gaps due to emerging technologies. As the GPSD is a ‘horizontal’ piece of legislation, its relationship to sectoral instruments108 had

106 Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety, OJ L 2002/11, 4. 107 Commission, ‘Proposal for a Regulation of the European Parliament and of the Council on consumer product safety and repealing Council Directive 87/357/EEC and Directive 2001/95/EC’ COM (2013) 78 final. 108 E.g. for machinery, radio equipment or toys (see below).


been one of the most controversial issues when the GPSD was negotiated.109 At the end of the day, it was clarified that the GPSD has a ‘safety net’ function. Hence, where a safety issue is addressed by a sectoral instrument, the specific rule prevails.110

5.1.1.1 Notion of ‘product’ and the role of software The GPSD only applies to consumer goods and is therefore narrower in scope than the Product Liability Directive (PLD) (below 5.3). However, in line with general principles of safety law according to which any foreseeable deviant use is to be taken into account, products qualify as consumer products not only where they are intended for consumers (e.g. declared to be for private use only), but also where they can foreseeably be used by consumers. The definition of ‘product’ is as follows: Article 2 For the purposes of this Directive: (a) ‘product’ shall mean any product — including in the context of providing a service — which is intended for consumers or likely, under reasonably foreseeable conditions, to be used by consumers even if not intended for them, and is supplied or made available, whether for consideration or not, in the course of a commercial activity, and whether new, used or reconditioned. This definition shall not apply to second-hand products supplied as antiques or as products to be repaired or reconditioned prior to being used, provided that the supplier clearly informs the person to whom he supplies the product to that effect; ….

In contrast with the PLD, there is no explicit reference to ‘movables’, and no clarification concerning electricity, so software, whether embedded or standalone software, might potentially qualify as a product, while it is clear that the GPSD was mainly drafted with tangible items in mind. Recital 9 only states clearly that the GPSD does not cover services. Safety of the equipment used by service providers (e.g. shampoo applied by a hairdresser) and the equipment on which consumers ride or travel if operated by the service provider belong to service safety and are excluded from the scope of the GPSD.111 However, products that are supplied

109 Duncan Fairgrieve and Geraint Howells, ‘General Product Safety: A Revolution through Reform?’ (2006) 69 The Modern Law Review 59, 62. 110 Article 1(2) Directive 2001/95/EC. 111 Geraint Howells, Christian Twigg-Flesner and Thomas Wilhelmsson, Rethinking EU Consumer Law (Routledge 2018), 267.


or made available to consumers in the context of service provision for use by them are covered (e.g. gym equipment or a supermarket trolley).112 For services, Directive 2006/123/EC113 applies; it lays down mechanisms obliging Member States to provide mutual assistance and to alert each other about detected risks.114 Moreover, it requires providers of services that pose a direct and particular risk to take out insurance.115 While the question whether, and if so, under what circumstances, software qualifies as a ‘product’ has been discussed at length in the PLD context (below at 5.3.1), there is surprisingly little discussion when it comes to the GPSD. There are hardly any reasons to doubt that tangible items with embedded software are included, irrespective of whether any safety or lack of safety comes from the software or the hardware, and there has been an alert concerning, e.g., the software of a smart watch for children that can easily be accessed and thus poses a risk to a child’s privacy.116 Things are less clear for standalone software that comes without hardware. The application of the GPSD to software that is accessed over the cloud and not installed on a hardware device is highly unlikely. This is because, where a product remains controlled by the supplier, it is considered part of a service and is thus excluded from the GPSD’s scope.117 However, it is questionable whether this distinction can be made with a sufficient degree of certainty in cases such as Software-as-a-Service (SaaS), where there is usually a bundle of offline functionalities provided by digital content on the consumer’s device and online functionalities provided via the internet from a server within the supplier’s control.118 Even the online functionalities usually include a protected cloud storage space that is used exclusively by the consumer and is functionally equivalent to storage on the consumer’s hard disk drive. Quite apart from the question of whether the

112 Duncan Fairgrieve and Geraint Howells, ‘General Product Safety: A Revolution through Reform?’ (2006) 69 The Modern Law Review 59, 61. 113 Directive 2006/123/EC of the European Parliament and of the Council of 12 December 2006 on services in the internal market, OJ L 2006/376, 36. 114 Article 29 and 32 Directive 2006/123/EC. 115 Article 23 Directive 2006/123/EC. 116 RAPEX, Alert Number in the EU Safety Gate Website: A12/0157/19, available at accessed 3 October 2020. 117 Geraint Howells, Christian Twigg-Flesner and Thomas Wilhelmsson, Rethinking EU Consumer Law (Routledge 2018), 267. 118 See e.g. Microsoft Azure accessed 3 October 2020.


distinction can be made with a sufficient degree of certainty, the question arises whether the distinction is useful in the light of the arguments listed above (2.1.2.2 and 2.2).

5.1.1.2 Safety requirements A product is considered ‘safe’ where, under normal or reasonably foreseeable conditions of use including duration and, where applicable, putting into service, installation and maintenance requirements, it does not present any risk for the safety and health of persons, or only the minimum risks compatible with the product’s use and considered to be acceptable and consistent with a high level of protection. Safety of a product is defined in Article 2(b), stressing inter alia that the whole product lifecycle must be taken into account and that a product must remain safe during that lifecycle, that not only the individual product must be taken into account but also any other products with which the product will foreseeably be used together, and that the whole range of consumer target groups, including vulnerable groups, such as children and elderly people, need to be considered.

Article 2 (b) ‘safe product’ shall mean any product which, under normal or reasonably foreseeable conditions of use including duration and, where applicable, putting into service, installation and maintenance requirements, does not present any risk or only the minimum risks compatible with the product’s use, considered to be acceptable and consistent with a high level of protection for the safety and health of persons, taking into account the following points in particular: (i) the characteristics of the product, including its composition, packaging, instructions for assembly and, where applicable, for installation and maintenance; (ii) the effect on other products, where it is reasonably foreseeable that it will be used with other products; (iii) the presentation of the product, the labelling, any warnings and instructions for its use and disposal and any other indication or information regarding the product; (iv) the categories of consumers at risk when using the product, in particular children and the elderly.

Concerning the conditions under which a product is considered safe, directly applicable EU law is to be considered in the first place. Where such specific law does not exist, a product shall be deemed safe when it conforms to the specific health and safety rules of the national law of the Member State in whose territory the product is marketed, provided that such national law is in conformity with EU law. There is a presumption of conformity where a product conforms with voluntary national standards transposing European standards, the references of which have been published by the


Commission in the Official Journal of the EU, as far as the risks and risk categories covered by the relevant standards are concerned (for a detailed overview of the procedure, see 4.1.4 above). Where no such rules or standards exist, the conformity of a product to the general safety requirement is assessed individually by taking into account: (a) voluntary national standards transposing other relevant European standards; (b) the standards drawn up in the Member State in which the product is marketed; (c) Commission recommendations setting guidelines on product safety assessment; (d) product safety codes of good practice in force in the sector concerned; (e) the state of the art and technology; (f) reasonable consumer expectations concerning safety.
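To illustrate the order in which these benchmarks apply, the cascade just described can be sketched as follows; this is a simplification with invented boolean inputs, not an implementation of the Directive.

```python
# Illustrative sketch of the assessment cascade summarised above (GPSD);
# the parameters are invented simplifications of the legal tests.

def gpsd_assessment_basis(specific_eu_rules_exist: bool,
                          conforms_to_national_rules_in_line_with_eu_law: bool,
                          conforms_to_harmonised_standard_published_in_oj: bool) -> str:
    if specific_eu_rules_exist:
        return "assess against the directly applicable EU safety provisions"
    if conforms_to_national_rules_in_line_with_eu_law:
        return "deemed safe under the national health and safety rules of the Member State of marketing"
    if conforms_to_harmonised_standard_published_in_oj:
        # The presumption only covers the risks and risk categories addressed by the standard.
        return "presumed to conform to the general safety requirement"
    # Fall-back: individual assessment under criteria (a) to (f) listed above.
    return ("individual assessment: other European standards, Member State standards, Commission "
            "recommendations, sectoral codes of good practice, state of the art, consumer expectations")
```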

5.1.1.3 Post-market surveillance and other duties In addition to the requirement that products must be safe, the GPSD also imposes further requirements. For example, products must bear information enabling them to be traced, such as the manufacturer’s identity and a product reference. Article 5 lays down further obligations of the producer, including that producers provide consumers with the relevant information to enable them to assess the risks inherent in a product throughout the normal or reasonably foreseeable period of its use. Most importantly in the context of this study, producers must continuously monitor the safety of their products and adopt measures enabling them to be informed of risks that their products might pose. They must take appropriate action including, if necessary to avoid safety risks, withdrawal from the market, adequately and effectively warning consumers or, as a means of last resort, recall from consumers. Also, distributors shall ensure compliance with safety rules, especially by refraining from supplying dangerous products, and participating in product monitoring. Where producers and distributors know or ought to know that a product that they have placed on the market poses risks to consumers, they shall immediately inform the competent authorities of the Member States. The Commission has issued Guidelines for the Notification of Dangerous Consumer Products.119 The purpose of the notification procedure is to enable the competent authorities to monitor whether the companies have taken appropriate measures to address the risks posed by a product already placed on the market and to order or take additional

119 Commission Decision of 14 December 2004 laying down guidelines for the notification of dangerous consumer products to the competent authorities of the Member States by producers and distributors, in accordance with Article 5(3) of Directive 2001/95/EC of the European Parliament and of the Council, OJ L 2004/381, 63.


measures if necessary to prevent harm.120 ‘Isolated incidents’ are excluded from the obligation to notify.121 National enforcement authorities have powers to monitor product safety and take appropriate action against unsafe products. Also, the Commission itself may take rapid EU-wide measures for up to one year (renewable) if a specific product poses a serious risk. The GPSD has introduced a rapid information exchange system, which is managed by the Commission. It enables national authorities to alert their counterparts quickly of any products posing a serious health and safety risk. Implementing Decision (EU) 2019/417122 sets out guidelines for the management of the EU Rapid Information System (RAPEX) on product safety and its notification system. Separate arrangements are in place for food,123 pharmaceuticals124 and medical devices125. When using the rapid alert system, national authorities must provide information that identifies the item and its availability elsewhere in Europe, details of the risks it presents, and any action taken to protect the public.

5.1.1.4 Addressees of safety-related obligations The GPSD uses a broad definition of ‘producer’. It comprises, in the first place, the manufacturer of the product, when it is established in the EU, and any other party (often referred to as ‘quasi-manufacturer’) presenting itself as the manufacturer by affixing to the product its name, trade mark or other distinctive mark, or

120 Ibid Annex, No. 2.2. 121 Ibid Annex, No. 2.1. 122 Commission Implementing Decision (EU) 2019/417 of 8 November 2018 laying down guidelines for the management of the European Union Rapid Information System ‘RAPEX’ established under Article 12 of Directive 2001/95/EC on general product safety and its notification system, OJ L 2019/73, 121. 123 RASFF based on Regulation (EC) No 178/2002 of the European Parliament and of the Council of 28 January 2002 laying down the general principles and requirements of food law, establishing the European Food Safety Authority and laying down procedures in matters of food safety, OJ L 2002/31, 1. 124 Pharmacovigilance based on Regulation (EC) No 726/2004 of the European Parliament and of the Council of 31 March 2004 laying down Community procedures for the authorisation and supervision of medicinal products for human and veterinary use and establishing a European Medicines Agency OJ L 2004/136, 1. 125 Eudamed2 based on Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, OJ L 2017/117, 1.


who reconditions the product.126 Where such a manufacturer is not established in the EU, it can nominate an authorised representative within the EU, which then counts as the producer within the meaning of the GPSD. If there is no authorised representative, the importer is liable instead. Even other professionals in the supply chain may qualify as ‘producers’ insofar as their activities may affect the safety properties of a product. This is why, e. g., a party providing software updates or cloud services required for the functioning of the product could be included in the notion of ‘producer’, as well as any economic operator that changes the product later, before supplying it to consumers in the course of commercial activity. There are no clear provisions as to the boundaries of this notion.  
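The cascade of potential addressees can be pictured with a further illustrative sketch; the function and its inputs are invented for this example and deliberately leave the unclear boundaries mentioned above unresolved.

```python
# Illustrative only: which parties would carry 'producer' obligations under the
# GPSD in a given setting, following the cascade summarised above.

def gpsd_producers(manufacturer_established_in_eu: bool,
                   quasi_manufacturer_present: bool,
                   authorised_representative_in_eu: bool,
                   operators_affecting_safety: list) -> list:
    producers = []
    if manufacturer_established_in_eu:
        producers.append("manufacturer established in the EU")
    if quasi_manufacturer_present:
        producers.append("party presenting itself as manufacturer (name, trade mark) or reconditioning the product")
    if not manufacturer_established_in_eu:
        if authorised_representative_in_eu:
            producers.append("authorised representative within the EU")
        else:
            producers.append("importer")
    # Other professionals in the supply chain count insofar as their activities may
    # affect the safety properties of the product (e.g. update or cloud providers);
    # the boundaries of this category are unclear (see above).
    producers.extend(operators_affecting_safety)
    return producers


print(gpsd_producers(False, False, False, ["provider of safety-critical software updates"]))
```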

5.1.1.5 Overall assessment The GPSD provides a very broad, comprehensive and flexible ‘safety net’ for consumer products. This regulatory approach makes it relatively future-proof. Its strength is that it reflects and implements a range of fundamental principles of safety law, notably the idea that not only the use intended or described by the manufacturer is decisive for safety requirements, but also any foreseeable other use, including misuse; the idea that products must remain safe throughout their whole foreseeable lifecycle; the idea that product safety includes safety in conjunction with other elements that will foreseeably be used together with the product; the producer’s obligation of product monitoring and post-market surveillance; the producer’s obligation to take risk-based post-market action if risks become apparent; the idea that also parties who have not manufactured the product but act as ‘quasi-manufacturers’, representatives within the EU, or importers may have the obligations of a producer; and the notion of safety being a multi-stakeholder task that may also include other economic operators within the supply chain that influence safety. However, the GPSD also has a range of drawbacks. While it makes clear that it does not apply to services, it fails to give clear guidance as to the delineation of products and services in the context of software and emerging technologies. Even more notably, it fails to cover risks – even risks of causing death or severe personal injury – of products that will not foreseeably be used by consumers (e.g. products exclusively used by industry, but which might injure or even kill workers). While the GPSD provides for an obligation to react to post-market risks, the reactions explicitly required are all ‘negative’ in nature, ranging from

126 Article 2(e) Directive 2001/95/EC.


warnings to a recall, but there is no clear obligation for producers to mitigate or eliminate the risk by way of software updates or other repair measures. Not surprisingly, the GPSD also fails to give clear guidance in the context of software as to the division of obligations and responsibilities between various parties that might fall under the broad notion of ‘producer’.

5.1.2 Medical Devices Regulation and In-vitro Diagnostics Regulation Medical devices and in-vitro diagnostics are subject to sector-specific safety legislation that has taken a very different approach towards software.

5.1.2.1 Notion of ‘medical device’ and the role of software Council Directives 90/385/EEC127 and 93/42/EEC128 had defined ‘medical device’ as any instrument, apparatus, appliance, material or other article, whether used alone or in combination, together with any accessories or software for its proper functioning …. Already in 2007,129 ‘standalone’ software was explicitly included as a medical device in its own right.130 This approach has been confirmed and further stressed by the Medical Devices Regulation 2017 (MDR)131 as well as the In Vitro Diagnostics Regulation 2017132.

127 Council Directive 90/385/EEC of 20 June 1990 on the approximation of the laws of the Member States relating to active implantable medical devices, OJ 1990/189, 17. 128 Council Directive 93/42/EEC of 14 June 1993 concerning medical devices, OJ L 1993/169, 1. 129 Directive 2007/47/EC of the European Parliament and of the Council of 5 September 2007 amending Council Directive 90/385/EEC on the approximation of the laws of the Member States relating to active implantable medical devices, Council Directive 93/42/EEC concerning medical devices and Directive 98/8/EC concerning the placing of biocidal products on the market, OJ 2007/247, 21. 130 Ibid Annex II, No. 9(a)(i). 131 Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, OJ L 2017/117, 1. 132 Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU, OJ L 2017/117, 176.


Article 2 MDR: (2) ‘medical device’ means any instrument, apparatus, appliance, software, […] other article intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the following specific medical purposes ….

There is no definition of ‘software’ in the MDR itself, but such a definition is suggested in a Commission Guidance document from 2016133 on the qualification and classification of standalone software. According to the Guidance, software is defined as ‘a set of instructions that processes input data and creates output data’. No differentiation is made between software uploaded onto the medical professional’s or patient’s hardware or software provided otherwise, such as within an SaaS scheme. The same definition of ‘software’ is also used by the Medical Device Coordination Group in its Guidance Document on qualification and classification of software in the MDR and IVDR from 2019.134 The 2019 Guidance clarifies that software can directly control a (hardware) medical device (e.g. radiotherapy treatment software), can provide immediate decision-triggering information (e.g. blood glucose meter software), or can provide support for healthcare professionals (e.g. ECG interpretation software).135 Recital 19 MDR clarifies that the qualification of software, either as a device or an accessory, is independent of the software’s location or the type of interconnection between the software and a device, which is why SaaS schemes are arguably included.136 Whether or not software qualifies as a medical device in its own right is mainly a matter of the purpose, i.e. of whether it is intended to be used, alone or in combination, for a purpose as specified in the definition of a medical device. This is why, e.g., a smartwatch app, which is intended to send alarm notifications to the user and/or health practitioner when it recognises irregular heartbeats for the purpose of detecting cardiac arrhythmia, is a medical device. To underline that the qualification of software does depend on its purpose and not on its location, the term ‘standalone’ software is no longer used.137 However, there is still the notion of accessory software, i.e. of software driving or influencing the use of a

133 Commission, ‘Guidance document Medical Devices – Scope, field of application, definition – Qualification and Classification of stand alone software – MEDDEV 2.1/6’ (2016) available at accessed 3 October 2020. 134 Medical Device Coordination Group, ‘Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 – MDR and Regulation (EU) 2017/746 – IVDR’ (Commission 2019) available at accessed 3 October 2020. 135 Ibid 6. 136 Ibid 3. 137 See also Recital 19, Regulation (EU) 2017/745.


medical device. Such software does not have or perform a medical purpose on its own, nor does it create information on its own for medical purposes. This software can, for instance, operate, modify the state of, or control the device through an interface (e.g., software, hardware) or via the operator of this device or supply output related to the (hardware) functioning of that device.138 Software driving or influencing the use of a (hardware) medical device is qualified as an accessory for a (hardware) medical device if the action performed goes beyond storage, archival, communication, simple search or lossless compression and brings benefits for individual patients. An example would be software with built-in electronic controls for in-vitro diagnostics (IVD) quality control procedures, which are intended to provide users with assurance that the device is performing within specifications.139 Manufacturers of medical devices must include in the instructions for use information which allows the selection of the corresponding software and accessories.140 What is often more difficult is to draw a line between medical device, lifestyle, and general software. Recital 19 to the MDR clarifies that software in its own right, when specifically intended by the manufacturer to be used for one or more of the medical purposes set out in the definition of a medical device, qualifies as a medical device, while software for general purposes (e.g. hospital administration software), even when used in a healthcare setting, or software intended for lifestyle and well-being purposes (e.g. fitness apps) is not a medical device. There is a growing tendency among manufacturers to describe their products as lifestyle products in order to escape the MDR regime.141
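The purpose-driven logic just described can be pictured in a rough sketch; it is a drastic simplification based only on the wording of Recital 19 MDR and the 2019 MDCG Guidance as summarised above, and the inputs are invented for illustration.

```python
# Illustrative sketch of the qualification logic for software under the MDR,
# as summarised above; not a tool for actual qualification decisions.

MERE_DATA_HANDLING = {"storage", "archival", "communication", "simple search", "lossless compression"}

def qualify_software(intended_for_medical_purpose: bool,
                     drives_or_influences_hardware_device: bool,
                     actions_performed: set,
                     benefits_individual_patients: bool) -> str:
    if intended_for_medical_purpose:
        # Qualification depends on purpose, not on location (SaaS arguably included).
        return "medical device (software as a device in its own right)"
    if drives_or_influences_hardware_device:
        goes_beyond_data_handling = bool(actions_performed - MERE_DATA_HANDLING)
        if goes_beyond_data_handling and benefits_individual_patients:
            return "accessory for a (hardware) medical device"
        return "driving/influencing software that does not meet the accessory criteria"
    return "not a medical device (e.g. lifestyle, well-being or general-purpose software)"


# e.g. a smartwatch app alerting to irregular heartbeats for arrhythmia detection:
print(qualify_software(True, False, set(), False))
```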

5.1.2.2 Safety requirements The MDR provides for a multi-layered system of safety requirements, many of which are further explained and elaborated in Annexes to the MDR, i.e. there is no single and general definition of safety as in the GPSD. There is surprisingly lit-

138 Article 2(2) Regulation (EU) 2017/745; Medical Device Coordination Group, ‘Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 – MDR and Regulation (EU) 2017/746 – IVDR’ (Commission 2019) available at accessed 3 October 2020, 8. 139 Medical Device Coordination Group, ‘Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 – MDR and Regulation (EU) 2017/746 – IVDR’ (Commission 2019) available at accessed 3 October 2020, 8. 140 Article 32.2(c), Annex I, Chapter III No. 23.4(f), Annex II No. 1.1(h), Regulation (EU) 2017/745. 141 See e.g. Arjan van Drongelen and others, ‘Apps under the Medical Devices Legislation’ (Dutch Ministry of Health Welfare and Sport 2018), 16.


tle indication as to the type of risks addressed. The specific characteristics of medical devices mean that there is no clear line between functionality risks and safety risks as, in the medical sector, poor functionality typically also causes personal injury or even death. This is why the requirements formulated by the MDR include the reliable performance of medical devices.142 Some important general requirements, which are also very relevant for software, are found in Annex I, No. 14 (Construction of devices and interaction with their environment). If the device is intended for use in combination with other devices or equipment, the whole combination, including the connection system, must be safe and must not impair the specified performance of the devices. Any restrictions on use applying to such combinations shall be indicated on the label and/or in the instructions for use. Connections which the user has to handle must be designed and constructed in such a way as to minimise all possible risks, such as misconnection. Devices must notably be designed and manufactured in such a way as to remove or reduce as far as possible, e.g., the risks associated with the possible negative interaction between software and the IT environment within which it operates and interacts (No. 14.2.(d)). Furthermore, devices must be designed and manufactured in such a way that adjustment and maintenance can be undertaken safely and effectively, and devices that are intended to be operated together with other devices or products shall be designed and manufactured in such a way that the interoperability and compatibility are reliable and safe. In the context of software, Annex I, No. 17 (Electronic programmable systems) is particularly important. Devices that incorporate electronic programmable systems, including software, or software that is a device in itself, must be designed to ensure repeatability, reliability and performance in line with their intended use. In the event of a single fault condition, appropriate means must be adopted to eliminate or reduce as far as possible consequent risks or impairment of performance. For devices that incorporate software or for software that is a device in itself, the software must be developed and manufactured in accordance with the state of the art, taking into account the principles of development life cycle, risk management, including information security, verification and validation. Software that is intended to be used in combination with mobile computing platforms must be designed and manufactured, taking into account the specific features of the mobile platform (e.g. size and contrast ratio of the screen) and the external factors related to its use (varying environment as regards level of light or noise). Last but not least, manufacturers must set out minimum requirements concerning hardware, IT networks characteristics and IT se-

142 Annex I, Chapter 1, No. 1, Regulation (EU) 2017/745.


curity measures, including protection against unauthorised access, necessary to run the software as intended. There are also software-specific requirements when it comes to information in the instructions for use (No. 23(4)). These must contain, inter alia, information allowing the healthcare professional to verify if the device is suitable and select the corresponding software and accessories. For devices that incorporate electronic programmable systems, including software, or software that is a device in itself, the information must further contain minimum requirements concerning hardware, IT networks characteristics and IT security measures, including protection against unauthorised access, necessary to run the software as intended. Annex II No. 6(1)(b) includes a provision on technical documentation concerning software verification and validation (describing the software design and development process and evidence of the validation of the software, as used in the finished device). This information shall typically include the summary results of all verification, validation and testing performed both in-house and in a simulated or actual user environment prior to final release. It shall also address all of the different hardware configurations and, where applicable, operating systems identified in the information supplied by the manufacturer. When it comes to assigning the Unique Device Identifier (UDI), Annex VI, Part C, No. 6.5. (Device Software) provides that the UDI shall be assigned at the system level of the software. Only software which is commercially available on its own and software which constitutes a device in itself shall be subject to that requirement. A new UDI-DI (UDI Device Identifier, i.e. the primary identifier of a device model) shall be required whenever there is a modification that changes: (a) the original performance; (b) the safety or the intended use of the software; (c) interpretation of data. Such modifications include new or modified algorithms, database structures, operating platform, architecture or new user interfaces or new channels for interoperability. Minor software revisions shall require a new UDI-PI (UDI Production Identifier) and not a new UDI-DI. Minor software revisions are generally associated with bug fixes, usability enhancements that are not for safety purposes, security patches or operating efficiency. Minor software revisions shall be identified by a manufacturer-specific form of identification. No. 6.5.4. includes specific provisions concerning UDI placement criteria for software. Annex VIII focuses on risk classification. According to No. 3.3., software which drives a device or influences the use of a device (accessory software) shall fall within the same class as the device. If the software is independent of any other device, it shall be classified in its own right. According to No. 6.3. Rule 11, the following classification applies: Software intended to provide information which is used to take decisions with diagnosis or therapeutic purposes is classified as class Wendehorst/Duller, Safety- and Liability-Related Aspects of Software


IIa, except if such decisions have an impact that may cause death or an irreversible deterioration of a person’s state of health, in which case it is in class III; or a serious deterioration of a person’s state of health or a surgical intervention, in which case it is classified as class IIb. Software intended to monitor physiological processes is classified as class IIa, except if it is intended for monitoring of vital physiological parameters, where the nature of variations of those parameters is such that it could result in immediate danger to the patient, in which case it is classified as class IIb. All other software is classified as class I.
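Rule 11, as summarised above, lends itself to a compact decision sketch. The following illustrative Python function mirrors only that summary (the parameter names are invented) and is no substitute for a proper classification under Annex VIII.

```python
# Illustrative sketch of Rule 11 (Annex VIII MDR) as summarised above.

def rule_11_class(informs_diagnostic_or_therapeutic_decisions: bool = False,
                  decision_impact: str = "other",   # "death_or_irreversible", "serious_or_surgical", "other"
                  monitors_physiological_processes: bool = False,
                  monitors_vital_parameters_with_immediate_danger: bool = False) -> str:
    if informs_diagnostic_or_therapeutic_decisions:
        if decision_impact == "death_or_irreversible":
            return "class III"
        if decision_impact == "serious_or_surgical":
            return "class IIb"
        return "class IIa"
    if monitors_physiological_processes:
        return "class IIb" if monitors_vital_parameters_with_immediate_danger else "class IIa"
    return "class I"   # all other software

# Accessory software (Annex VIII, No. 3.3) instead falls within the same class
# as the device it drives or influences.
```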

5.1.2.3 Post-market surveillance and other duties The MDR provides for an elaborate post-market surveillance regime, which introduces a range of duties for the manufacturer. According to Article 2(60), ‘post-market surveillance’ refers to all activities carried out by manufacturers in cooperation with other economic operators to institute and keep up to date a systematic procedure to proactively collect and review experience gained from devices they place on the market, make available on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions. Post-market surveillance systems need to be proportionate to the risk class and appropriate for the type of device.143 Article 83(2) stresses that, for all medical devices, the manufacturer shall ensure safety and performance throughout the lifecycle of the device, through a continuous process of clinical and/or performance evaluation and risk management. If the need for preventive or corrective action is identified, the manufacturer has to ensure that appropriate measures are taken and has to inform the competent authorities.144 ‘Appropriate measures’ is to be understood broadly and may include positive action towards remedying any safety issues by way of ‘field safety corrective action’, e.g. software updates.145 However, it is unclear to what extent there may be an obligation of the manufacturer to provide updates instead of taking other steps, such as a recall. Under the market surveillance obligations, manufacturers shall evaluate any changes to the function, intended use, essential design and manufacturing characteristics of the software, and any resulting effect on its qualification as a medical device and on its classification, that may be caused by updates provided after the software has been put

143 Article 83(1) Regulation (EU) 2017/745. 144 Ibid Article 83(4). 145 Ibid Article 83(3)(e).


on the market. Software updates that introduce entirely new modules to existing hardware or software might qualify as software itself.146

5.1.2.4 Addressees of safety-related obligations The primary addressee of safety-related obligations is the ‘manufacturer’, i.e. a natural or legal person who manufactures or fully refurbishes a device or has a device designed, manufactured or fully refurbished, and markets that device under its name or trade mark.147 ‘Fully refurbishing’ means the complete rebuilding of a device already placed on the market or put into service, or the making of a new device from used devices, to bring it into conformity with this Regulation, combined with the assignment of a new lifetime to the refurbished device.148 For each device whose manufacturer is established outside the Union, there must be an ‘authorised representative’,149 i.e. a natural or legal person established within the Union who has received and accepted a written mandate from a manufacturer, located outside the Union, to act on the manufacturer’s behalf in relation to specified tasks with regard to the latter’s obligations under the MDR.150 There are specific obligations also for importers and distributors as well as for parties putting on the market ‘system and procedure packs‘ under Article 22. All of these parties are considered to be ‘economic operators’.

5.1.2.5 Overall assessment with regard to software The MDR (as well as the IVDR) is a very modern piece of legislation that fully takes into account the specificities of software. In particular, it fully reflects the fact that software may be functionally equivalent to hardware; recognises the equivalence of software and physical devices/components; recognises that safety includes safety throughout the device lifecycle and in all environments in which the device is foreseeably going to be used; fully recognises the need for post-market surveillance and contemplates the possibility that the best way to eliminate a safety risk may be to take positive action and provide repair (i.e. a soft-

146 Medical Device Coordination Group, ‘Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 – MDR and Regulation (EU) 2017/746 – IVDR’ (Commission 2019) available at accessed 3 October 2020, 18. 147 Article 2(30) Regulation (EU) 2017/745. 148 Ibid Article 2(31). 149 Ibid Article 11(1). 150 Ibid Article 2(35).


ware update); provides rules for distinguishing between software updates that require an entirely new safety assessment and updates that require only limited action; and explicitly explains who is responsible for what in an ecosystem of various stakeholders involved. Compared with the GPSD, the MDR is less clear when it comes to clearly stating some general principles.

5.1.3 Radio Equipment Directive The Radio Equipment Directive151 (RED) succeeded the Radio and Telecommunication Terminal Equipment Directive (1999/5/EC) and aligned the safety legislation for radio equipment with the new legislative framework for the marketing of products. The RED entered into force on 11 June 2014 and is applicable as of 13 June 2016.

5.1.3.1 Notion of ‘radio equipment’ and the role of software Article 2.1(1) of the RED defines ‘radio equipment’ as an electrical or electronic product which intentionally emits and/or receives radio waves for the purpose of radio communication and/or radiodetermination, or an electrical or electronic product which must be completed with an accessory, such as antenna, so as to intentionally emit and/or receive radio waves for the purpose of radio communication and/or radiodetermination. Article 2.1(2) RED defines ’radio communication’ as communication by means of radio waves, which in turn are defined in Article 2.1(4) as electromagnetic waves of frequencies lower than 3 000 GHz, propagated in space without artificial guide. According to Article 2.1(3) RED, ‘radiodetermination’ means the determination of the position, velocity and/or other characteristics of an object, or the obtaining of information relating to those parameters, by means of the propagation properties of radio waves. Examples of radio equipment are laptops, mobile phones, radars, broadcasting devices, routers or conventional goods with Wi-Fi, Bluetooth, GPS and/or other radio transceivers, such as smart watches.152 Excluded from the scope is, according to Article 1.3, the equipment listed in Annex I. This includes marine

151 Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of radio equipment and repealing Directive 1999/5/EC, OJ L 2014/153, 62. 152 Commission, ‘Clarifications on the initiative of upload of software into radio equipment’ (2020), available at accessed 3 October 2020, 2.


equipment, airborne products, radio equipment used by radio amateurs and custom-built evaluation kits used solely for research purposes. Not covered by the RED are products that use electromagnetic waves exclusively for purposes other than radio communication and/or radiodetermination, such as inductive heating appliances or high frequency surgical equipment. Software is a component and thus falls under the definition of ‘radio equipment’ if installed or embedded in tangible equipment at the time it is placed on the market.153 Article 10(8) of the RED specifically mentions that the information on the intended use of the radio equipment shall include ‘a description of accessories and components, including software, which allow the radio equipment to operate as intended.’

5.1.3.2 Essential requirements and delegated Commission acts The RED is particularly strong in listing explicitly what kind of risks it is to cover. Article 3(1) lists essential requirements for (a) the protection of health and safety of persons and of domestic animals and the protection of property; (b) an adequate level of electromagnetic compatibility. Article 3(2) stresses that radio equipment must be so constructed that it both effectively uses and supports the efficient use of radio spectrum in order to avoid harmful interference. Article 3 (3) lists essential requirements for certain categories or classes, including that (a) radio equipment interworks with accessories, in particular with common chargers; (b) radio equipment interworks via networks with other radio equipment; (c) radio equipment can be connected to interfaces of the appropriate type throughout the Union; (d) radio equipment does not harm the network or its functioning nor misuse network resources, thereby causing an unacceptable degradation of service; (e) radio equipment incorporates safeguards to ensure that the personal data and privacy of the user and of the subscriber are protected; (f) radio equipment supports certain features ensuring protection from fraud; (g) radio equipment supports certain features ensuring access to emergency services; (h) radio equipment supports certain features in order to facilitate its use by users with a disability; (i) radio equipment supports certain features in order to ensure that software can only be loaded into the radio equipment where the compliance of the combination of the radio equipment and software has been demonstrated. It is in particular privacy concerns and concerns about protection from

153 See Article 4(1), Article 10(8) Directive 2014/53/EU; Commission, ‘Clarifications on the initiative of upload of software into radio equipment’ (2020), available at accessed 3 October 2020, 3.


fraud that are noteworthy in a safety context as only few instruments in the field of safety legislation provide for rules against economic and social risks. According to Article 3(3), the Commission is empowered to adopt delegated acts in accordance with Article 44 specifying which categories or classes of radio equipment are concerned by each of the essential requirements listed. With regard to software, the Commission might, in particular, adopt a delegated act concerning the essential requirement under (i) that radio equipment shall be constructed in a way that it ‘supports certain features in order to ensure that software can only be loaded into the radio equipment where the compliance of the combination of the radio equipment and software has been demonstrated.’ Such a delegated act could oblige manufacturers of radio equipment to install certain features that ensure the user cannot upload software which can affect the demonstrated compliance of that equipment with the requirements of the RED. Article 4(1) provides that manufacturers of radio equipment and of software allowing radio equipment to be used as intended shall provide the Member States and the Commission with information on the compliance of intended combinations of radio equipment and software with the essential requirements set out in Article 3. On the basis of Article 4 RED, the Commission can adopt a delegated act that obliges the manufacturer to provide the Member States and the Commission with information on the compliance of intended combinations of radio equipment and software, before the software can be uploaded into radio equipment. The upload of software to radio equipment can have a severe impact on the compliance initially demonstrated and lead to safety issues. For example, software can enable or disable certain components of radio equipment, allow products to operate at non-permitted frequencies or change the tested protocol of radio equipment that is intended for access to emergency services.154 The Commission is therefore currently evaluating whether to adopt a delegated act based on Article 3(3)(i) and/or Article 4 RED. The public consultation was open until 14 September 2020, and a possible adoption of the delegated acts was planned for the third quarter of 2020, but delays are to be expected due to the COVID-19 crisis. The Commission is exploring five different policy options: Option 0 is the baseline scenario; Option 1: self-regulation of the industry, which shall ensure that software uploaded into radio equipment does not compromise the initial compliance; Option 2: adoption of a delegated act on the basis of Article 4(1); Option 3: adoption of a delegated act on the basis of Article 3(3)(i); Option 4: adoption of two de-

154 Commission, ‘Clarifications on the initiative of upload of software into radio equipment’ (2020), available at accessed 3 October 2020, 4.


legated acts based on both articles. The reasons for the evaluation are mainly that, in the past, the modifications of radio equipment required expertise and specific tools. Today, the functioning of many radio equipment devices can be changed by uploading or changing the installed software, i.e. software that can basically be uploaded by anyone is functionally equivalent to electronic components that were typically hardware and could be installed only by experts (e.g. diodes, switches, mixers, filters, demodulators).155 Another reason is the development of flexible hardware that can fulfil different tasks depending on the software installed.156 With the rise of internet-connected devices and wearables, which are usually embedded with software, radio equipment is becoming an increased target of cybersecurity attacks. This leads to social risks in the form of data losses and privacy intrusions, as well as economic risks, such as illegitimate access to information, which is then used for fraudulent purposes. To address these issues, the Commission is evaluating whether to adopt two additional delegated acts under Article 3 that would ensure that radio equipment incorporates safeguards that protect the user’s privacy and personal data (e) and that the radio equipment supports certain features that ensure protection from fraud (f). The study, which was launched to support the EC’s impact assessment, found that while the protection of data is covered by the GDPR, privacy risks are not addressed at a market entry stage due to a missing link between product safety and cybersecurity.157 Moreover, since GDPR-compliance is not a condition for market access, market surveillance authorities might not be able to remove radio equipment that violates the individual’s privacy from the market. The study also points out that national criminal law tackles fraud only retrospectively. Prevention of fraudulent activities enabled by hacking or malware attacks could be achieved by adopting security requirements in product legislation. Based on these considerations, the study supports the adoption of both delegated acts;158 essential requirements with regard to cybersecurity could address the identified safety risk by introducing requirements such as adherence to the principles of (cyber)security of hard-

155 Inception impact assessment, Commission delegated regulation on Reconfigurable Radio System, Ares(2019)476957, 1. 156 Commission, ‘Clarifications on the initiative of upload of software into radio equipment’ (2020), available at accessed 3 October 2020, 4. 157 See Center for Strategy Evaluation and Services, ‘Impact Assessment on Increased Protection of Internet-Connected Radio Equipment and Wearable Radio Equipment, Executive Summary’ (Commission 2020), 3. 158 Ibid 8.


and software by design and default.159 According to the findings of the study, the policy option of introducing mandatory cybersecurity requirements on a horizontal level is only feasible in the medium term. The risk of market fragmentation caused by a sectoral cybersecurity approach is mitigated by the fact that 70 % of internet-connected products are wireless and are thus covered by the RED.160

5.1.3.3 Post-market surveillance and other duties Pursuant to Article 39, the post-market surveillance duties are regulated by the current Market Surveillance Regulation (EC) No 765/2008, which has been amended with effect as of 16 July 2021 by the new Market Surveillance Regulation (EU) 2019/1020 (see 5.2). In principle, the RED does not require a new safety assessment if software is uploaded to radio equipment after it has been placed on the market. However, if a product is subject to significant changes that modify its original performance, purpose or type to the extent that it affects the compliance with the RED, it needs to be considered a new product. This also applies if these changes are not caused by physical modifications but by the upload of software to the radio equipment. The person who carries out the significant changes needs to assess whether the modifications affect the compliance with the safety requirements under the RED and whether the radio equipment thus has to be considered a new product. If the hazard of the radio equipment has changed due to a software update or the risk level was increased, compliance with applicable essential requirements has to be re-assessed by the person who carried out the changes and who is to be considered the new manufacturer.161

159 See Center for Strategy Evaluation and Services, ‘Impact Assessment on Increased Protection of Internet-Connected Radio Equipment and Wearable Radio Equipment, Final Report’ (Commission 2020), 24. 160 Center for Strategy Evaluation and Services, ‘Impact Assessment on Increased Protection of Internet-Connected Radio Equipment and Wearable Radio Equipment, Executive Summary’ (Commission 2020), 6. 161 Commission Notice, The ‘Blue Guide’ on the implementation of EU products rules 2016, OJ C 2016/272, Section 2.1 and European Commission (DG GROW), OJ 2016/272, 1; Commission, ‘Clarifications on the initiative of upload of software into radio equipment’ (2020), available at accessed 3 October 2020, 2.


5.1.3.4 Addressees of safety-related obligations Primary addressees of the obligations under the RED are similar to those of the General Product Safety Regulation. The definition of ‘manufacturer’ not only includes the person who manufactures the radio equipment, but also the person who has the radio equipment designed or manufactured by someone else, as well as persons marketing the equipment under their name or trade mark. As pointed out above, the person who significantly changes the radio equipment may qualify as ‘manufacturer’ insofar as the modification affects the compliance with the safety requirements of the radio equipment. This is why a party providing software updates or cloud services required for the functioning of the product could be included in the notion of ‘manufacturer’, as well as any economic operator that changes the product later, before supplying it to consumers in the course of a commercial activity.

5.1.4 Machinery Directive The third version of the Machinery Directive (MD)162 was adopted in 2006 and has been applicable since December 2009. According to its Article 1(1), the MD not only applies to machinery but also to interchangeable equipment, safety components, lifting accessories, chains, ropes and webbing, removable mechanical transmission devices, as well as partly completed machinery.

5.1.4.1 Notion of ‘machinery’ and scope of application with regard to software ‘Machinery’ is defined in Article 2(a) as ‘an assembly … of linked parts or components, at least one of which moves…’. Under the definition of machinery falls a wide range of products, such as lawnmowers, 3D printers, powered hand-tools, construction machinery, personal care robots or complete automated industrial production lines.163 Excluded from the scope inter alia are motor vehicles, seagoing vessels and electric products that do not exceed a certain voltage limit.164 Only physical devices are considered machinery, interchangeable equipment or safety components.165 Hence, standalone software does not fall within

162 Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (recast) OJ L 2006/157, 24. 163 Commission, ‘Evaluation of the Machinery Directive’ SWD(2018) 161 final. 164 Article 1(2) Directive 2006/42/EC. 165 Commission, ‘Guide to application of the Machinery Directive 2006/42/EC, Edition 2.2’ (2019) available at accessed 3 October 2020, 44.


the scope of the MD. However, physical products that incorporate software may qualify as a ‘safety component’. A device is considered a safety component if it ‘serves to fulfil a safety function, which is independently placed on the market, the failure and/or malfunction of which endangers the safety of persons, and which is not necessary in order for the machinery to function, or for which normal components may be substituted in order for the machinery to function.’166 The explanation offered by the Guidance Document for the exclusion of standalone software is that safety-related software will always be dependent on a physical product to perform its function.167 Article 2(i) defines the ‘manufacturer’ as ‘any natural or legal person who designs and/or manufactures machinery or partly completed machinery covered by this Directive and is responsible for the conformity of the machinery or the partly completed machinery with this Directive with a view to its being placed on the market, under his own name or trade mark or for his own use.’ In the absence of a manufacturer, the person putting the machinery on the market or into service is considered the manufacturer. As the other analysed safety legislation, the MD follows the New Approach to technical harmonisation, which means that legislative harmonisation is limited to mandatory essential health and safety requirements. The technical production specifications are developed by industrial standardisation organisations and remain voluntary. Products that are produced in compliance with the voluntary standards are presumed to fulfil the essential requirements established by the Directive (see 4.1.2).168

5.1.4.2 Safety requirements The manufacturer of machinery in the broad sense needs to ensure that it is designed and constructed in a way that it is fit for its purpose and can be operated and maintained without putting persons at risk. The measures have to take into account the operating conditions, but also foreseeable misuse of the machinery. Furthermore, machinery needs to be supplied with all the essential equipment that is necessary to operate and maintain it safely.169 The MD aims at preventing

166 Article 2(c) Directive 2006/42/EC. 167 Commission, ‘Guide to application of the Machinery Directive 2006/42/EC, Edition 2.2’ (2019) available at accessed 3 October 2020, 447. 168 Article 7 Directive 2006/42/EC. 169 Ibid Annex 1, No 1.1.2.


accidents caused to the users of machinery and ensuring their health and safety170 and thus primarily addresses physical risks. Requirements that take into account the safety of integrated software components in hardware can only be found with regard to the control systems. According to Annex I, they have to be designed and constructed in a way that neither a fault in the hardware nor in software may give rise to hazardous situations.171 As the Guidance Document explains, the requirements for control systems take into account that faults causing hazardous situations may not only be caused by failures of the physical components but also by errors in the software. This is surprising, as control systems may hardly be the only parts of machinery where software bugs may affect the safety of the whole machinery. However, it needs to be pointed out that the lack of software-specific safety requirements in the MD is partly mitigated by the fact that RED also applies if radio equipment is incorporated in the machinery.172 A possible gap might arise from the fact that standalone software cannot be considered a ‘safety component’.173 Standalone software that is independently put on the market can fulfil a safety function which is not necessary for the functioning of the machinery and the failure of which endangers persons. For example, to prevent turbines from exceeding their maximum speed, which could cause system failures and endanger the life of employees, mechanical trip bolts are used. Today, mechanical overspeed protection is usually complemented by software solutions.174 The MD would cover mechanical overspeed protection devices if put independently on the market, but not standalone software that fulfils the same function. The European Commission’s evaluation of the MD identifies additional safety gaps that might arise due to new technological developments such as AI equipped robots. It is rightly pointed out that software/AI powered machinery is not necessarily less safe than traditional machinery, but the changed characteristics need to be addressed by safety legislation. As examples of safety challenges

170 Ibid Recital 2. 171 Ibid Annex 1, No. 1.2.1. 172 Commission, ‘Guide to application of the Machinery Directive 2006/42/EC, Edition 2.2’ (2019) available at accessed 3 October 2020, 83; Commission, ‘Evaluation of the Machinery Directive 2006/42/EC’ SWD(2018) 161 final, 7. 173 Ibid 44. 174 Darran Herschberger and Jim Blanchard, ‘Pushing machines to the limit: Transitioning to digital overspeed protection’ (Control Engineering) accessed 4 October 2020. Wendehorst/Duller, Safety- and Liability-Related Aspects of Software


caused by new technology, the increased cybersecurity risks and the potential harm to the mental well-being of people are mentioned.175 Some cybersecurity threats to machinery that is wirelessly connected to the internet could be addressed by the envisaged delegated acts under the Radio Equipment Directive (see 5.1.3).

5.1.5 Other safety-relevant legislation The three instruments above have received particular attention because the Medical Devices (and In-vitro Diagnostics) Regulation is the one most adapted to the characteristics of software; the Radio Equipment Directive is of particular importance since almost all of the connected devices will have some sort of software component; and because the MD hardly addresses the safety of embedded software despite the fact it could cause significant harm to the health of workers and other people exposed to the machinery. However, software is – at least to some extent – also addressed in other EU safety instruments. Directive 2014/32/EU176, which establishes safety requirements for measuring instruments, acknowledges that the measuring function of a device or system might also be executed by software. Hence, the essential requirements provide that software which is critical for the measuring shall not be inadmissibly influenced by other associated software that does not provide a measuring function.177 Furthermore, it is laid down that the metrological characteristics of a measuring instrument shall not be influenced by connecting it to another device or by communication with a remote device. Software that is critical for the measuring function shall also be adequately protected against corruption.178 The essential requirements of the ATEX Directive,179 which regulates equipment and protective systems for explosive atmospheres with regard to embedded

175 Commission, ‘Evaluation of the Machinery Directive 2006/42/EC’ SWD(2018) 161 final, 28– 30. 176 Directive 2014/32/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of measuring instruments (recast), OJ L 2014/96, 149. 177 Ibid Annex I, No. 7.6. 178 Ibid No. 8.4. 179 Directive 2014/34/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to equipment and protective systems intended for use in potentially explosive atmospheres (recast), OJ L 2014/96, 309. Wendehorst/Duller, Safety- and Liability-Related Aspects of Software


software, are somewhat less elaborate. However, it is stressed that ‘equipment’, i. e. devices that are capable of causing an explosion through their own potential sources of ignition, and ‘protective systems’, i.e. devices that can halt explosions or limit their effect and which are controlled by software should be designed in a way that takes into account risks arising from faults in the programme.180 Even more subtle is the Low Voltage Directive, which requires that measures are taken to ensure that the electrical equipment is ‘resistant to non-mechanical influences’ that could endanger persons, animals or property.181 For products that are intended to be used in play by children under the age of 14, the Safety of Toys Directive182 applies. The Directive explicitly excludes software, intended for leisure and entertainment, as well as electronic equipment that is used to access interactive software (e.g. computers or gaming consoles) from its scope. As clarified in the Commission’s Guidance, the Safety of Toys Directive applies to electronic equipment that is specifically designed for children to play with, such as toy computers.183 However, the Directive is not equipped to address any potential risks posed by the software embedded in these toy computers as the essential requirements only concern the physical construction, flammability, use of chemicals, electrical properties and hygiene of toys. In line with the Safety of Toys Directive, there are a handful of other sector specific safety instruments that do not address the possibility that embedded software might have an impact on the safety of the covered products at all (e.g. Cableway Regulation,184 Personal Protective Regulation185).  

180 Ibid Annex II, No. 1.5.8. 181 Annex I, No. 2(a), Directive 2014/35/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of electrical equipment designed for use within certain voltage limits, OJ L 2014/96, 357. 182 Directive 2009/48/EC of the European Parliament and of the Council of 18 June 2009 on the safety of toys, OJ L 2009/170, 1. 183 Commission, ‘Toy Safety Directive 2009/48/EC – An explanatory guidance document’ (rev.1.9, 2016) available at accessed 4 October 2020. 184 Regulation (EU) 2016/424 of the European Parliament and of the Council of 9 March 2016 on cableway installations and repealing Directive 2000/9/EC, OJ l 2016/81, 1. 185 Regulation (EU) 2016/425 of the European Parliament and of the Council of 9 March 2016 on personal protective equipment and repealing Council Directive 89/686/EEC, OJ L 2016/81, 51. Wendehorst/Duller, Safety- and Liability-Related Aspects of Software


5.2 Market surveillance 5.2.1 Purpose, scope and content The new Market Surveillance Regulation186 (MSR), which amends the 2008 Market Surveillance Regulation, aims at further strengthening the existing framework to improve the effectiveness of market surveillance in the EU. By introducing rules for the online sale of goods and obligations for ‘fulfilment service providers’ and ‘information society services’ (see below 5.4.4), the MSR also aims at addressing challenges posed by the digital age. All provisions (except for those concerning the new European Product Compliance Network, which will be applicable from 1 January 2021 on), will apply from 16 July 2021. According to Article 3, the MSR applies to all products that are subject to one of the 70 EU safety instruments listed in Annex I. This includes, inter alia, the RED, the MD or the Safety of Toys Directive. Excluded from the scope of the MSR are food, feed, medicinal products for human and veterinary use, living plants and animals, products of human origin and products of plants and animals relating directly to their future reproduction.187 In accordance with the principle of lex specialis, the MSR only applies in the absence of more specific rules on market surveillance.188 Article 7(1) stipulates that economic operators have to cooperate with market surveillance authorities to eliminate or mitigate risks that have been made available on the market by them. In Article 7(2), the Regulation lays down an obligation for information society service providers within the meaning of the E-Commerce Directive to eliminate or mitigate risks caused by products which were offered through their services, at the request of market surveillance authorities. However, the Regulation does not establish a general obligation to monitor the information they transmit, as this would run against Article 15 E-Commerce Directive.189 For products that are sold online or through other means of distance, Article 6 provides that they shall be deemed as ‘made available on the market’ if an offer targets end users in the EU. This is the case if the ‘economic operator directs, by any means, its activities to a Member State’.

186 Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011, OJ L 2019/169, 1. 187 Ibid Recital 6. 188 Ibid Recital 4. 189 Ibid Recital 16.


One of the key provisions of the MSR is Article 4, which lays down that a product that is subject to one of the 18 safety instruments mentioned in Article 4(5), which include the RED and MD, can only be placed on the market if an ‘economic operator’, who is responsible for performing the duties under this Regulation, is established in the EU.190 ‘Economic operator’ within the meaning of Article 4 is the manufacturer, an importer, an authorised representative, or – if no economic operator as just mentioned is established in the EU – a ‘fulfilment service provider’. The latter is defined as ‘any natural or legal person offering, in the course of commercial activity, at least two of the following services: warehousing, packaging, addressing and dispatching, without having ownership of the products involved’. Excluded are postal services as defined in Directive 97/67/EC191 and parcel delivery services as defined in Regulation 2018/644.192 Pursuant to Article 4(3), the economic operator has to keep the declaration of conformity or performance, for the period required by the legislation applicable to the respective product, at the disposal of market surveillance authorities and ensure that the technical documentation can be made available to these authorities upon request. They must also provide all information and documentation necessary to demonstrate the conformity of the product upon request of the market surveillance authorities, and inform the market surveillance authorities if the economic operator has reason to believe that a product presents a risk. Furthermore, the economic operator has to immediately take necessary corrective action if the product does not comply with product safety legislation, or if the economic operator considers or has reason to believe that the product in question presents a risk.

5.2.2 Overall assessment with regard to software The obligations laid down in Article 4(3)(c) and (d) of the MSR seem to provide an efficient basis for addressing the problem that a product may become unsafe due to a faulty update of its embedded software or because the embedded software is outdated and has become an easy target for cyberattacks. In particular, the economic operator not only has to inform the market surveillance authorities if there

190 Ibid Article 4(1). 191 Article 2(1), Directive 97/67/EC of the European Parliament and of the Council of 15 December 1997 on common rules for the development of the internal market of Community postal services and the improvement of quality of service OJ L 1998/15, 14. 192 Article 2(2), Regulation (EU) 2018/644 of the European Parliament and of the Council of 18 April 2018 on cross-border parcel delivery services OJ L 2018/112, 19.


are reasons to believe that the product presents a risk, but they also have to ensure that the necessary corrective action is taken. Such a corrective action might be the update of the embedded software or the recall of the product if the risk cannot be addressed otherwise. Since the application of Article 4 is limited to products that are subject to sector specific safety legislation, risks posed by standalone software and insufficient maintenance of such software are only covered to the extent that standalone software falls within the scope of the sector specific safety instruments. Including standalone software in sector specific legislation, such as the MD, would automatically also expand the scope of the MSR, which then could provide a basis for obliging developers to monitor their software and provide updates if necessary. The obligations of information society services, which are laid down in Article 7(2), could address situations where standalone software is provided through an online platform, such as an app store. Upon a request by the market surveillance authority, information society service providers would need to eliminate or mitigate the risk caused by software that has been distributed through their platforms. However, this would not include an obligation of the platform operator to update the software, but only to facilitate the update, such as by alerting the customer of the risk and the availability of the update, and the platform operator would have to immediately stop marketing the unsafe software.

5.3 Product liability The Product Liability Directive (PLD)193 is currently the only existing EU instrument dealing with liability. Surprisingly, the broad range of specific safety regimes that follow at least three different overall approaches is not mirrored by a comparable range of different liability regimes. This holds true for products, but even more so for services. Attempts made in the early 1990s to introduce a Directive on Service Liability,194 which would have merely reversed the burden of proof for negligence liability, were not successful. Under the PLD, producers shall be liable for harm caused by a defect in their product. It is in particular the emergence of AI and other new technologies that has led to a lively debate as to whether, and

193 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ L 1985/210, 29. 194 Proposal for a Council Directive on the Liability of Suppliers of Services OJ C 1991/12, 8.


how, product liability and liability regimes in general need to be adapted and supplemented.195

5.3.1 Notion of ‘product’ and the role of software The PLD applies to all ‘movable’ products, irrespective of whether the product is distributed by way of sales, leasing or in the course of providing a service.196 Immovables are excluded from its scope, but the PLD applies if a movable product is incorporated in an immovable. It is also clarified that agricultural products and game are excluded, and that the notion of product includes electricity. The definition reads as follows: Article 2 For the purposes of this Directive ‘product’ means all movables, with the exception of primary agricultural products and game, even though incorporated into another movable or into an immovable. ‘Primary agricultural products’ means the products of the soil, of stock-farming and of fisheries, excluding products which have undergone initial processing. ‘Product’ includes electricity.

In Recital 2, the PLD seems to indicate that its scope is limited to industrially produced goods. However, based on the CJEU’s Veedfald decision, the prevailing view today is that the PLD applies irrespective of whether products are handcrafted, artisanal or industrially manufactured.197 Unlike under the General Product Safety Directive, the notion of ‘product’ is not limited to consumer goods. However, the PLD is limited insofar as only damage to property that is intended for private use or consumption is compensated.198

195 See e.g. EY, Technopolis and VVA, ‘Evaluation of Council Directive 85/374/EEC on the approximation of laws, regulations and administrative provisions of the Member States concerning liability for defective products, Final Report’ (Commission 2018); Expert Group on Liability and New Technologies – New Technologies Formation, ‘Liability for AI and other digital technologies’ (Commission 2019); BEUC, ‘Product Liability 2.0 How to make EU rules fit for consumers in the digital age’ (2020). 196 C-203/99 Henning Veedfald v Århus Amtskommune [2001] CJEU,EU:C:2001:258, para 12. 197 Just to name a few, Daily Wuyts, ‘The Product Liability Directive – More than Two Decades of Defective Products in Europe’ (2014) 5 Journal of European Tort Law 1, 7; Gert Straetmans and Dimitri Verhoeven in Machnikowski (ed) European Product Liability (Intersentia 2016), 42; Staudinger/Oechsler (2018) § 2 ProdHaftG, para 8. 198 Article 9(b)(i) Council Directive 85/374/EEC. Wendehorst/Duller, Safety- and Liability-Related Aspects of Software


The crux of the matter from the perspective of software liability is whether the PLD also covers intangible products. The Directive does not explicitly state whether ‘movables’ only covers tangible goods. However, based on an e contrario reasoning, the majority of legal scholars argue that the explicit inclusion of electricity was only necessary because, in general, intangible items do not fall under the definition of product.199 While this line of reasoning is certainly convincing, the exact opposite argument, i.e. that electricity was the only example of an intangible product at the beginning of the 1980s, and the mention of it merely clarifies that the notion of movable products is not limited to tangible items, is at least equally persuasive.200 While there is some room for debate at EU level, national laws are more concrete on this matter. Austrian law, for example, explicitly limits the transposed national law to tangible products201 and the German Product Liability Law uses the term ‘thing’ (Sache), which – under national understanding – presupposes a tangible nature.202 Even if one were to follow the majority view that the PLD only applies to tangible products, the application to software embedded in hardware devices is unproblematic – as long as the victim is able to prove defect and causation – because it is not decisive whether the damage was caused by a tangible or intangible component of a physical product.203 However, when it comes to standalone software, the matter becomes disputed. The traditional argument for including software in the scope of the PLD, despite its intangible nature, is that software has a physical manifestation in the medium on which it is supplied.204 However, since the advent of the internet, this reasoning leads to severe inconsistencies, as the same software would be considered a product if supplied on a USB stick

199 E.g. Andrew Tettenborn in Jones/Dugdale (eds), Clerk and Lindsell On Torts (20th edn, Sweet and Maxwell 2010), 11 – 49; Daily Wuyts, ‘The Product Liability Directive – More than Two Decades of Defective Products in Europe’ (2014) 5 Journal of European Tort Law 1, 4; Gert Straetmans and Dimitri Verhoeven in Machnikowski (ed), European Product Liability (Intersentia 2016), 41. 200 Pointed out by Gerhard Wagner ‘Robot Liability’ in Lohsse/Schulze/Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Nomos 2019), 44. 201 § 4 Produkthaftungsgesetz, BGBl 1988/99. 202 § 90 Bürgerliches Gesetzbuch (BGBl. I S. 42, 2909; 2003 I S. 738). 203 Jean-Sébastien Borghetti, ‘How can Artificial Intelligence be Defective?’ in Lohsse/Schulze/ Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Nomos 2019), 104. 204 Gert Straetmans, Dimitri Verhoeven in Machnikowski (ed), European Product Liability (Intersentia 2016), 47; Daily Wuyts, ‘The Product Liability Directive – More than Two Decades of Defective Products in Europe’ (2014) 5 Journal of European Tort Law 1, 5. Wendehorst/Duller, Safety- and Liability-Related Aspects of Software


but not when downloaded from the internet. As rightly pointed out by various legal scholars, liability should not depend on the form of distribution.205 In conclusion, it can be said that the current product definition leads to serious uncertainties with regard to the liability for software. Although there are good reasons why software should be considered a product under the current PLD and convincing attempts in legal literature have been made to bridge the existing inconsistencies, the many diverging views – not only between Member States but also within jurisdictions – on how to treat software under the PLD render a revision by the European legislator necessary. This conclusion stays true despite the fact that at least some clarification is expected from the CJEU based on a reference for a preliminary ruling from the OGH (Austrian Supreme Court) regarding information provided in a magazine.206

5.3.2 Addressees of product liability The party liable for damage caused by a defective product is the producer, within the broad understanding of the PLD. According to Article 3, producer means the manufacturer of the finished product, any raw material or a component. The producer of the final product is responsible for ensuring the overall safety of the product and is thus the primary addressee of product liability. The other manufacturers are jointly and severally liable if they provided a defective component or raw material.207 In addition, persons who present themselves as a producer by putting their trade mark on the product shall also be considered a producer within the meaning of the PLD. To address the problem that the producer of the product might not necessarily be established in the European Union, which could render it difficult for victims to take legal action, the person who imports the product into the European Union is also considered responsible under the PLD. This is without prejudice to the liability of the other producers.208 On a subsidiary basis, the supplier of the product is liable if a producer cannot be identified. However, suppliers

205 Gerhard Wagner, ‘Robot Liability’ in Lohsse/Schulze/Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Nomos 2019), 44; Bernhard Koch ‘Product Liability 2.0 – Mere Update or New Version?’ in Lohsse/Schulze/Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Nomos 2019), 102. 206 OGH 21.1.2020, 1 Ob 163/19f. 207 Gert Straetmans, Dimitri Verhoeven in Machnikowski (ed), European Product Liability: An Analysis of the State of the Art in the Era of New Technologies (Intersentia 2019), 72. 208 Article 3(2) Council Directive 85/374/EEC. Wendehorst/Duller, Safety- and Liability-Related Aspects of Software


are exempted from liability if they inform the victim of the identity of the producer or the previous person in the supply chain.209 In the context of software, the problem arises that, even if it were clarified that PLD also applies to software downloaded from the internet, it might not be possible to identify a liable party within the EU. This is because the producer, i. e. the developer of the software, will regularly be established in a third country. The PLD’s current ‘safety net’ of assigning liability to the person importing the product in the EU is ineffective because, for downloaded software, no importer exists. Admittedly, the problem is not unique to software, as the importer of physical products may also be established outside the territory of the EU.210 However, since the need for any physical representation in the EU is much lower if the product is distributed over the air, the problem intensifies when it comes to the sale of software. The only instrument that currently addresses this issue is the MDR, which not only obliges manufacturers not established in the EU to designate an authorised representative,211 but also stipulates that this representative shall be legally liable for defective devices on the same basis as, and jointly and severally with, the manufacturer.212 This seems consistent, as the MDR is the only instrument that fully acknowledges the functional equivalence of software and hardware and provides rules that address the specific characteristic of software (see 5.1.2.5). On the safety level, the new MSR will close the gap of not having an addressee in the EU, by requiring that an economic operator responsible for market surveillance obligations is established in the EU before a product can be placed on the market.213 Any revision of the Product Liability Directive should consider the possibility of assigning liability for defective products to the economic operator established in the EU in order to achieve a similar unison between safety and liability as in the MDR, and also consider information society service providers (in particular platform operators).  

209 Ibid, Article 3(3). 210 Rightly pointed out by Geraint Howells, Christian Twigg-Flesner and Thomas Wilhelmsson, Rethinking EU Consumer Law (Routledge 2018), 271. 211 Article 11(1) Regulation (EU) 2017/745. 212 Ibid. 213 Article 4(1) Regulation (EU) 2019/1020.


5.3.3 Defectiveness at the time of market entry The producer is not strictly liable for any damage caused by their product but only if it is established that the product was ‘defective’. Hence, it is rather unsurprising that the notion of ‘defect’ is of pivotal importance in practice and has received specific attention from legal scholars. Unlike U. S. law under the Third Restatement of Torts, the PLD does not distinguish between different categories of defect, but relies on the ‘consumer expectation test’ to establish whether a product is defective or not:

Article 6
1. A product is defective when it does not provide the safety which a person is entitled to expect, taking all circumstances into account, including:
(a) the presentation of the product;
(b) the use to which it could reasonably be expected that the product would be put;
(c) the time when the product was put into circulation.
2. A product shall not be considered defective for the sole reason that a better product is subsequently put into circulation.

The defectiveness of a product is not determined by the subjective safety expectations of the concrete victim but by the expectations of the general public, i.e. by an objective safety standard.214 If a product is targeted at people with a reduced risk control potential, such as children or elderly people, or the producer can reasonably foresee that the product will be used by a group of vulnerable people, this needs to be taken into account when assessing the safety expectations.215 The expectations of a specific target group may differ, but the standard remains an objective one.216 These considerations not only apply to physical products but also to software. Thus, when determining the defectiveness of software, the digital literacy and skills of the target group should be taken into account (Principle 6). Because the ‘safety expectation of the general public’ is a very vague notion, the practice often follows the U. S. approach of distinguishing between construction, design, and information defect217 and, for the latter two categories, resorts to  

214 Gert Brüggemeier, Tort Law in the European Union (2nd edn, Wolters Kluwer 2018), para 400. 215 Gert Straetmans, Dimitri Verhoeven in Machnikowski (ed), European Product Liability (Intersentia 2016), 52. 216 Daily Wuyts, ‘The Product Liability Directive – More than Two Decades of Defective Products in Europe’ (2014) 5 Journal of European Tort Law 1, 9. 217 See Restatement (Third) of Torts: Product Liability § 2.


additional tests such as the risk-utility and reasonableness test.218 It is rightly pointed out that establishing defect solely with a risk-utility test would not be compatible with the PLD, as the liability regime is based on safety expectations and not utility. However, risk-utility considerations can constitute an element in determining the defectiveness of a product,219 and in practice, design defects are commonly established by comparing the product that caused the damage with either comparable existing or hypothetical products.220 However, applying this reasonable alternative design test to AI algorithms causes difficulties. Merely comparing how another algorithm would have behaved in the same situation in which the algorithm under investigation has caused the damage does not necessarily say anything about the latter’s overall safety and defectiveness if the safe outcome in this particular situation was only achieved by security losses elsewhere. For example, if damage is caused because an autonomous vehicle, after identifying an unusual potential obstacle, does not immediately stop but slows down and further evaluates the situation, an algorithm that would simply initiate immediate emergency braking would have certainly prevented this accident. However, this algorithm would probably be responsible for numerous rearend collisions. This is not to say that the first vehicle in this example should not be considered defective, but should only demonstrate the difficulty of simply comparing the outcome of two different algorithms. Another possibility is to compare the overall outcomes of two algorithms. However, this approach raises the question of what the standard to which the algorithms must conform should be. Simply taking the safest AI on the market is not feasible, since all other algorithms would then be considered defective. Defining a margin in which the algorithms may deviate from the highest standard is a possible option, but how wide exactly should this margin be, 5 %, 10 % or even 50 %?221  





218 Piotr Machnikowski in Machnikowski (ed), European Product Liability: An Analysis of the State of the Art in the Era of New Technologies (Intersentia 2019), para 82; Francesco Mezzanotte ‘Liability for Digital Products’ in De Franceschi/Schulze (eds), Digital Revolution – New Challenges for Law (Nomos, Beck 2019), 179. 219 Daily Wuyts, ‘The Product Liability Directive – More than Two Decades of Defective Products in Europe’ (2014) 5 Journal of European Tort Law 1, 7. 220 Jean-Sébastien Borghetti ‘How can Artificial Intelligence be Defective?’ in Lohsse/Schulze/ Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Nomos 2019), 66. 221 Jean-Sébastien Borghetti ‘How can Artificial Intelligence be Defective?’ in Lohsse/Schulze/ Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Nomos 2019), 66 et seq; Gerhard Wagner ‘Robot Liability’ in Lohsse/Schulze/Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Nomos 2019), 44; Gerhard Wagner, ‘Produkthaftung für autonome Systeme’ (2017) 217 AcP 707, 726 et seq. Wendehorst/Duller, Safety- and Liability-Related Aspects of Software


Another problem that arises in the context of software is that the producer only has to ensure that the product meets the safety expectations at the time the product is put in circulation.222 Hence, producers are not liable if they are able to prove that the defect that caused the damage did not exist when the product was placed on the market.223 The liability is also excluded if the product was already defective when it was put into circulation, but the defect could not be discovered based on the scientific and technical knowledge at the time the product was put into circulation (so-called development-risk defence).224 The focus on the moment when the product is put into circulation as the ‘magic moment’225 for product liability and the development-risk defence raises several issues for the application of the PLD to software. First, producers of software or software embedded products are only liable for cybersecurity risks that were already present at the time the software was placed on the market and that could already be detected based on the state of the art at that time.226 Thus, liability law does not provide an incentive for software developers to provide safety updates against new security threats that have emerged with digital advancements and may leave victims of cyberattacks uncompensated.227 Secondly, software is usually supplied with updates and upgrades during its lifecycle, which not only maintain the initial function but may also introduce new features that could change the software’s risk profile (see 2.2.3). It is at least questionable whether damage caused by defects that have emerged due to a subsequent update of the software is covered by the PLD. Where a single major version introduces a range of new features, it could very well be argued that, with an upgrade, a new product is put onto the market, which would cover all defects that have been introduced with this version. However, with the common trend of providing small updates on a weekly or monthly basis, significant functionality changes may not be introduced

222 Article 7(a) Council Directive 85/374/EEC; According to CJEU in Case C-127/04 Declan O’Byrne gegen Sanofi Pasteur MS. Ltd und Sanofi Pasteur SA [2006], EU:C:2006:93, a product is ‘put into circulation’ when it is ‘taken out of the manufacturing process operated by the producer and enters a marketing process in the form in which it is offered to the public in order to be used or consumed.’ 223 Piotr Machnikowski in Machnikowski (ed), European Product Liability: An Analysis of the State of the Art in the Era of New Technologies (Intersentia 2019), para 143. 224 Article 7(e) Council Directive 85/374/EEC. 225 Bernhard A. Koch, ‘Product Liability 2.0 – Mere Update or New Version?’ in Lohsse/Schulze/ Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Nomos 2019), 102. 226 Thomas Riehm, ‘Updates, Patches etc. – Schutz nachwirkender Qualitätserwartungen‘ in Schmidt-Kessel/Malte Kramme (eds), Geschäftsmodelle in der digitalen Welt (JWV 2017), 219. 227 Francesco Mezzanotte, ‘Liability for Digital Products’ in De Franceschi/Schulze (eds), Digital Revolution – New Challenges for Law (Nomos, Beck 2019), 180. Wendehorst/Duller, Safety- and Liability-Related Aspects of Software


at once but will only occur over time, so that the argument that each of these versions is a new product is more difficult to sustain. Even if one were to follow the line of reasoning that, with each update, a new product is put into circulation, the question of the appropriate addressee arises. Is the manufacturer of an IoT device liable as producer of the final product if damage was caused by a software update that was provided by a third party, or does the sole responsibility lie with the person providing the update? It has been pointed out under 2.2 that software maintenance activities are often outsourced. Where the developer of the initial software delegates the responsibility for updates to an entity outside the EU, it might be difficult for victims to receive compensation for damage caused by defective updates.

5.3.4 Burden of proof with regard to damage, defect and causation Under the Product Liability Directive, the victim does not have to prove any fault or wrongdoing of the producer, but according to Article 4, the injured person needs to prove damage, defect and the causal relationship between the two. The standard of proof, the evidence that suffices to meet this standard, as well as other alleviations such as disclosure obligations or allocation of expert costs by the court228 fall under the procedural autonomy of Member States and are thus matters of national law.229 Furthermore, the PLD does not provide a definition of the ‘causal relationship’ and thus again leaves space for the very different national rules on causation and the respective limitations (see 3.1.5).230 The burden of proving a defect and the causal relationship weighs heavily on victims. The Evaluation Study of PLD found that the lack of sufficient proof for defect and/or causality together account for 53 % of dismissals of product liability claims.231 The difficulties consumers face vary from sector to sector and depend to a large extent on the need for technical expertise to identify a defect in complex products, as well as on the access to relevant information to prove a defect and  

228 See Report from the Commission on the Application of Directive 85/374 on Liability for Defective Products, COM(2000) 893 final. 229 Gert Brüggemeier, Tort Law in the European Union (2nd edn, Wolters Kluwer 2018), para 419. 230 Francesco Mezzanotte, ‘Liability for Digital Products’ in De Franceschi/Schulze (eds), Digital Revolution – New Challenges for Law (Nomos, Beck 2019), 180. 231 EY, Technopolis and VVA ‘Evaluation of Council Directive 85/374/EEC on the approximation of laws, regulations and administrative provisions of the Member States concerning liability for defective products, Final Report’ (Commission 2018), 23.


causation in court.232 Whether to alleviate the burden of proof was already discussed on a policy level two decades ago.233 The debate has gained new thrust with the increasing importance of IoT products and AI. Concerns have been raised that the difficulties of proving defect and causation are elevated because of the complexity of these technologies and the involvement of several actors whose contribution cannot easily be determined by a plaintiff.234 Identifying a defect in software with several million lines of source code may be challenging even with the aid of technical expertise. Furthermore, many connected devices also allow users to download and install third party software, which may change the hardware’s initial function or influence the interaction with other devices in a connected ecosystem. It is this interplay between hardware, software and connectivity that can render it onerous for victims to locate a defect235 in connected devices, and to prove that this defect caused the damage.236 However, voices have also been raised that the decision to reverse the burden of proof for new technologies should not be rushed. Technological developments may not only pose new threats but may also provide new possibilities to gather, record and store information that could help victims establish their claims. So-called logging obligations for information that might be necessary to determine whether the risks of the product have materialised and corresponding access rights could mitigate the difficulties for victims to establish their claims without reversing the burden of proof.237 Usually, the burden of proving damage will be less onerous for the plaintiff. However, only two heads of damage can be recovered under the PLD. Article 9 stipulates that the producer is liable for death and personal injuries as well as for da-

232 Daily Wuyts, ‘The Product Liability Directive – More than Two Decades of Defective Products in Europe’ (2014) 5 Journal of European Tort Law 1, 24. 233 See Commission, ‘Report from the Commission on the Application of Directive 85/374 on Liability for Defective Products’ COM(2000) 893 final. 234 Francesco Mezzanotte, ‘Liability for Digital Products’ in De Franceschi/Schulze (eds), Digital Revolution – New Challenges for Law (Nomos, Beck 2019), 186. 235 Bernhard Koch, ‘Product Liability 2.0 – Mere Update or New Version?’ in Lohsse/Schulze/ Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Nomos 2019), 102. 236 Extensively on the issue of causation Miquel Martín-Casals ‘Causation and Scope of Liability in the Internet of Things (IoT)’ Mere Update or New Version?’ in Lohsse/Schulze/Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Nomos 2019), 201. 237 Gerhard Wagner, ‘Robot Liability’ in Lohsse/Schulze/Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Nomos 2019), 46; also raising words of caution Bernhard Koch, ‘Product Liability 2.0 – Mere Update or New Version?’ in Lohsse/Schulze/Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Nomos 2019), 120. Wendehorst/Duller, Safety- and Liability-Related Aspects of Software


mage to items of property that were intended and mainly used for private purposes. Non-material harm, such as pain and suffering, is not compensated under the PLD. It is questionable whether, and to what extent, national courts are free to decide if psychological harm is to be considered a personal injury under the PLD or is non-material harm and is thus not compensated.238 While there are good reasons to argue that the destruction of data should be included in the scope of PLD239, it is highly uncertain that this is currently the case240 – even where the damage to data has very concrete physical manifestations (see 3.1.2).

5.4 Other relevant frameworks and initiatives on new technologies 5.4.1 New Regulatory Framework for AI Already in 2018, the Commission presented its AI Strategy and outlined the importance of a robust safety and liability framework that increases consumer trust and thus benefits the uptake of this new technology. In the AI White Paper,241 which was published in early 2020, the Commission sets out its considerations regarding a new legal framework for AI, which should protect fundamental rights and provide for an adequate safety and liability regime. The White Paper is accompanied by the Report on the Safety and Liability Implications of AI, IoT and Robotics.242 Proposed measures include the adaptation of product liability law, the introduction of a new strict liability regime for AI, as well as AI-specific principles, safety and transparency requirements and procedures (such as certification). In the (now adopted) Commission Work Programme 2020, the proposal for a legislative initiative is planned for the first quarter of 2021.

238 Simon Whittaker, ‘A Closer Look at the Product Liability Directive’ in Fairgrieve (ed), Product Liability in Comparative Perspective (Cambridge University Press 2005). 239 See Christiane Wendehorst, ‘Liability for Pure Data Loss’ in Karner/Magnus/Spier/Widmer (eds), Essays in Honor of Helmut Koziol (Jan Sramek 2020), 62. 240 Piotr Machnikowski in Machnikowski (ed), European Product Liability: An Analysis of the State of the Art in the Era of New Technologies (Intersentia 2019), para 82. 241 Commission, ‘White Paper on Artificial Intelligence – A European approach to excellence and trust’ COM(2020) 65 final. 242 Commission, ‘Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics’ COM(2020) 64 final.


5.4.2 Cybersecurity Horizontal cybersecurity legislation in the EU is currently based on two pillars. The first one is the Directive on Security of Network and Information Systems (NIS Directive),243 which had to be transposed by the Members States by 9 May 2018; the second one is the Cybersecurity Act,244 which entered into force on 27 June 2019. A review of the NIS Directive has been announced in the Commission’s Work Program for the fourth quarter of 2020. The NIS Directive was the first horizontal instrument that aimed at improving the resilience against cybersecurity risks in the EU. To achieve this goal, Article 1(2) NIS lays down the following measures: Member States need to designate a Computer Security Incident Response Team (CSIRT) and a competent national NIS authority to increase their preparedness against cybersecurity attacks. In addition, a cooperation group is set up to facilitate the exchange of information among Member States. Furthermore, businesses in essential sectors that rely on ICT (like transport, energy, banking and health care), so-called operators of essential services (OES), and key digital service providers (search engines, cloud computing services and online marketplaces) have to take appropriate and state of the art security measures, and notify the relevant national authority of serious incidents.245 Cybersecurity certifications under the Cybersecurity Act, which still need to be put in place (see below), could help OES to demonstrate that they have adopted state of the art measures to address cybersecurity risks.246 The essential sectors and the digital services providers are prescribed by Annex II and III of the NIS respectively. However, it is left to the Member States to identify operators of essential services in their territories.247 The Commission’s NIS Toolkit Communication offers guidance in order to ensure that similar players with similar importance are consistently identified as OES in the Member States. Furthermore, Mem-

243 Directive (EU) 2016/1148 of the European Parliament and of the Council of 6 July 2016 concerning measures for a high common level of security of network and information systems across the Union, OJ L 2016/194, 1. 244 Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act) OJ L 2019/151, 15. 245 Article 16(1) Directive (EU) 2016/1148. 246 Annegret Bendiek and Martin Schallbruch, ‘Europe’s third way in cyberspace: what part does the new EU Cybersecurity Act play?’ (SWP Comment 52/2019). 247 Ibid Article 5.


ber States are advised to go beyond the sectors and services laid down in the Annexes in order to improve overall cybersecurity.248 For the review of the NIS Directive, the Commission aims to achieve consistency with sectoral cybersecurity initiatives that are currently planned, such as the initiative on a digital operational resilience act in the financial sector (DORA)249 and the initiative on a network code on cybersecurity with sector-specific rules for cross-border electricity flows.250 The main priority will be to ensure a level playing field by addressing the fragmentation that was caused by the minimum harmonisation and the discretion of the Member States in defining and identifying OES. This fragmentation leads to an additional burden for operators that provide essential services in several Member States, as they might have to comply with different security and incident reporting regimes.251 The Cybersecurity Act has significantly strengthened the role of ENISA, the European Network and Information Security Agency (also referred to as the European Union Agency for Cybersecurity). Furthermore, an EU cybersecurity certification framework has been established, which enables the adoption of tailored certification schemes for specific categories of ICT products, processes and services. The Cybersecurity Act defines ICT products, processes and services as follows:

Article 2
(12) ‘ICT product’ means an element or a group of elements of a network or information system;
(13) ‘ICT service’ means a service consisting fully or mainly in the transmission, storing, retrieving or processing of information by means of network and information systems;
(14) ‘ICT process’ means a set of activities performed to design, develop, deliver or maintain an ICT product or ICT service;

At the request of the European Commission, ENISA will be entrusted to draw up candidate cybersecurity certification schemes based on the framework set out by the Cybersecurity Act, which can then be adopted by the Commission through

248 Commission, 'Making the most of NIS – towards the effective implementation of Directive (EU) 2016/1148 concerning measures for a high common level of security of network and information systems across the Union' COM(2017) 476 final/2.
249 For more information see accessed 6 October 2020.
250 For more information see accessed 6 October 2020.
251 Inception Impact Assessment, Revision of the NIS Directive, Ares(2020)3320999.


an implementing act.252 The certificates issued under these schemes are valid across the EU.253 So far, ENISA has published one draft for a cybersecurity certification scheme, named the Common Criteria based European cybersecurity certification scheme (EUCC),254 which was open for public consultation until 31 July 2020. A second one, which deals with the certification of cloud services, is currently in preparation. According to the Cybersecurity Act, the certification schemes need to clarify which types or categories of ICT products, services and processes are covered. Furthermore, the purpose, the security standards that shall be met, the evaluation methods, and the period of validity of the scheme shall be defined.255 Based on the level of risk of the covered ICT product, service or process, schemes shall specify one or more level(s) of assurance (basic, substantial or high). Article 51 lays down a list of security objectives a cybersecurity certification scheme shall achieve. From this list, the following objectives are of particular importance for this study:
(a), (b) Data shall be protected against unauthorised processing and destruction, loss or alteration throughout the entire lifecycle of ICT products, services and processes.
(d) It shall be verified that ICT products, services and processes do not contain any known vulnerabilities.
(i) ICT products, services and processes shall be secure by default and by design.
(j) ICT products, services and processes shall be provided with up-to-date software and hardware that do not contain publicly known vulnerabilities, and with mechanisms for secure updates.
All schemes will remain voluntary for now. However, the Cybersecurity Act provides that the Commission will evaluate whether and to what extent adopted cybersecurity certification schemes shall become mandatory. At a sector-specific level, it is currently being evaluated whether to introduce mandatory cybersecurity requirements for radio equipment by way of delegated acts based on the Radio Equipment Directive (see 5.1.3).

252 Article 48, 49(4) Regulation (EU) 2019/881.
253 Ibid Recital 73.
254 Available at accessed 6 October 2020.
255 Article 54 Regulation (EU) 2019/881.


5.4.3 Data protection law
The centrepiece of the EU's data protection framework, the General Data Protection Regulation (GDPR),256 strengthens the control of individuals over their personal data and ensures that data is only processed for legitimate purposes. To protect the personal data of individuals (called 'data subjects' in the words of the GDPR), it lays down a set of obligations for controllers, i.e. persons who determine the purposes and means of processing, and processors, i.e. persons who process data on behalf of the controller.257 Where non-compliance with these duties causes damage, the GDPR also provides a basis for liability:
Article 82
(1) Any person who has suffered material or non-material damage as a result of an infringement of this Regulation shall have the right to receive compensation from the controller or processor for the damage suffered.
(2) Any controller involved in processing shall be liable for the damage caused by processing which infringes this Regulation. A processor shall be liable for the damage caused by processing only where it has not complied with obligations of this Regulation specifically directed to processors or where it has acted outside or contrary to lawful instructions of the controller.

The GDPR's notion of damage is based on a broad understanding, which encompasses both material and non-material damage.258 Its predecessor, Directive 95/46/EC, lacked such explicit wording.259 Liable persons under the GDPR are the controller and the processor. According to Article 82(4), several processors and/or controllers are jointly and severally liable if they are involved in the same illegitimate processing that has caused the damage. This is another difference in comparison to Directive 95/46/EC, which channelled liability exclusively towards the controller.260 Similar to the Product Liability Directive (see 5.3), liability under the GDPR does not presuppose any fault of the controller or processor; it suffices that

256 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC, OJ L 2016/119, 1.
257 Ibid Article 4(7), (8).
258 Ibid Recital 146, which states that 'any damage' shall be compensated.
259 Article 23 Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, OJ L 1995/281, 31.
260 Ibid Article 23.


they have violated their obligations under the GDPR.261 Since controllers are, according to Article 5(2) GDPR, obliged to be able to demonstrate their compliance with the basic principles of data protection, it is argued that the burden of proving compliance with the GDPR lies with the controller and not with the data subject.262 Even if one were to admit that Article 5(2) does not lead to a shift of the legal burden of proof, it could still be argued that the evidential burden of proof is reversed, as the controller is best placed to provide evidence of compliance with the respective duty.263 Where developers of software, producers of connected devices, or other players in an interconnected ecosystem process personal data of end-users, they may qualify either as controller or as processor. Hence, they are liable for damage that arises due to a breach of obligations under the GDPR. Due to the expansion of liable persons compared to Directive 95/46/EC and their joint liability, it should be less onerous for victims to receive compensation for damage caused by connected ecosystems, where multiple actors are involved, because it does not need to be established who determined the purposes and means of processing.264 While the disclosure or transfer of personal data to unauthorised parties certainly constitutes a breach of obligation under the GDPR, it is questionable whether this is also the case when personal data is deleted or destroyed, for example where the data in a cloud-based photo editing application is lost because of a faulty update of the provider's servers. A basis for arguing that controllers and processors are liable for accidental data loss can be found in Article 5(1)(f), which states that data shall be 'processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures ("integrity and confidentiality")'. However, the GDPR is also based on the principle of data minimisation265 and even explicitly de-

261 Brendan Van Alsenoy, 'Liability under EU Data Protection Law: From Directive 95/46 to the General Data Protection Regulation' (2017) 7 JIPITEC 271, 283.
262 Paul de Hert and others, 'The Proposed Regulation and the Construction of a Principles-Driven System for Individual Data Protection' (2013) 26 Innovation: The European Journal of Social Science Research 133, 141.
263 Brendan Van Alsenoy, 'Liability under EU Data Protection Law: From Directive 95/46 to the General Data Protection Regulation' (2017) 7 JIPITEC 271, 283.
264 Francesco Mezzanotte, 'Liability for Digital Products' in De Franceschi/Schulze (eds), Digital Revolution – New Challenges for Law (Nomos, Beck 2019), 183.
265 Article 5(1)(c) Regulation (EU) 2016/679.


mands that data is deleted if no longer required for a recognised purpose.266 Hence, the loss of personal data may not lead to the liability of the controller or processor in all cases.267

5.4.4 Digital Services Act
With the adoption of the E-Commerce Directive268 in 2000, the EU established a legal framework that aims at ensuring the free movement of internet intermediaries that provide digital services, which – in the words of the Directive – are called 'information society services'. Liability was among the issues identified as impeding the proper functioning of the internal market. However, the ECD does not regulate liability as such but harmonises the conditions under which information society services that host or transmit illegal content are exempted from liability. Until the ECD was adopted, this aspect was regulated very differently in the national legal systems.269 The rationale of this so-called safe harbour regime is to create a legal framework that allows online services to develop and that protects users' freedom of expression.270 With the emergence of platforms, the number of internet intermediaries and their influence on society and the economy have drastically increased. Their success has been made possible by digital technologies like the internet, connected devices, algorithms and cloud computing.271 However, with the uptake of platforms, new and unforeseen risks, such as the use of large internet platforms to spread illegal content or hate speech, or the systematic abuse of algorithms to distribute disinformation, have also arisen. To address some of the very pressing issues that have emerged, the European Union adopted sector-specific legislation that increases the obligations of internet intermediaries. These instruments aim at fighting sexual abuse of chil-

266 Ibid Article 17.
267 Christiane Wendehorst, 'Liability for Pure Data Loss' in Karner/Magnus/Spier/Widmer (eds), Essays in Honor of Helmut Koziol (Jan Sramek 2020), 57.
268 Directive 2000/31/EC of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market, OJ L 2000/178, 1.
269 DLA Piper UK LLP, 'EU Study on the Legal Analysis of a Single Market for the Information Society' (Commission 2014), 4.
270 Tambiama Madiega, 'Reform of the EU Liability Regime for Online Intermediaries: Background on the Forthcoming Digital Services Act: In Depth Analysis' (European Parliamentary Research Service 2019), 2.
271 Mark Fenwick and Erik Vermeulen, 'A Sustainable Platform Economy and the Future of Corporate Governance' available at accessed 6 October 2020, 4.


dren online,272 hate speech and violence on video sharing platforms,273 and copyright infringements.274 The proposed Regulation on preventing the dissemination of terrorist content online is currently being discussed in the Council.275 In addition to these sector-specific instruments, the EC issued a non-binding Recommendation on measures to effectively tackle illegal content online,276 and it also supported the development of the EU Code of Practice on Disinformation, a set of self-regulatory standards,277 which has been agreed upon by leading industry players. As part of the EU Commission's proposed goal to make Europe fit for the digital age, a new legislative framework for platforms, under the name of the Digital Services Act, was put on the list of top priorities.278 The Digital Services Act package is composed of two initiatives. Under the first one, which aims at 'deepening the internal market and clarifying responsibilities for digital services', a possible revision and update of the ECD was announced by the Commission in 2019.279 The increased societal importance of platforms will render it necessary to fine-tune the broad liability exemption of the ECD.280 A revision of the ECD will also need to take into account new software technology. For example, cloud services do not

272 Directive 2011/93/EU of the European Parliament and of the Council of 13 December 2011 on combating the sexual abuse and sexual exploitation of children and child pornography, and replacing Council Framework Decision 2004/68/JHA, OJ L 2011/335, 1.
273 Directive (EU) 2018/1808 of the European Parliament and of the Council of 14 November 2018 amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive) in view of changing market realities, OJ L 2018/303, 69.
274 Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC, OJ L 2019/130, 92.
275 Commission, 'Proposal for a Regulation of the European Parliament and of the Council on preventing the dissemination of terrorist content online' COM(2018) 640 final.
276 Commission Recommendation of 1.3.2018 on measures to effectively tackle illegal content online, C(2018) 1177 final.
277 Available at accessed 6 October 2020.
278 Political Guidelines for the next European Commission 2019-2024, available at accessed 6 October 2020, 14.
279 See accessed 6 October 2020, 14.
280 Hans Schulte-Nölke and others, 'The legal framework for e-commerce in the Internal Market' (IMCO 2020), 37.


clearly fall under one of the three categories of providers (mere conduit, caching and hosting) to which the ECD attaches the exemption from liability.281 Another example is that the nodes of a blockchain (see 5.5.3) could – based on the wording of Article 14 ECD – qualify as hosting providers. This would mean that they become liable for illegal content stored on the DLT if they do not take down the content after they have become aware of it. However, simple nodes on a distributed ledger are technically unable to delete content on their own.282 While this result can be avoided by means of interpretation, a revision of the ECD will have to address the changing technological environment.

5.4.5 Developments with regard to cloud computing
With cloud computing, activities that are usually performed by a local computer or data centre, such as the storing, processing and use of data, are outsourced to an external server and accessed via the internet. This, for example, allows users to store data remotely, or to access and use software from different hardware, without having to download and install the software on each device. The benefits of cloud services are even more evident for businesses and other entities, as cloud computing allows them to successively outsource their internally run data centres and ICT departments. Without the need to build up the physical infrastructure that is necessary for providing certain online services, companies can test innovative ideas with less economic risk and scale up successful business models much more quickly.283 The fact that cloud computing leads to the outsourcing of activities and is usually combined with a subscription payment model gave rise to terms like Infrastructure-as-a-Service, Platform-as-a-Service and Software-as-a-Service. As early as 2012, the European Commission committed itself to providing a robust and sound framework that would allow cloud computing to unfold its full potential. In its Data Strategy Communication, the Commission stressed that cloud infrastructures and services need to provide essential features of security, sustainability and interoperability. Of particular importance from a safety perspective is the planned cybersecurity certification scheme for cloud services that is cur-

281 See DLA Piper UK LLP, 'EU Study on the Legal Analysis of a Single Market for the Information Society' (Commission 2014), 16.
282 See Maurice Schellekens, 'Does regulation of illegal content need reconsideration in light of blockchains?' (2019) 27 International Journal of Law and Information Technology 292–305.
283 See Commission, 'Unleashing the Potential of Cloud Computing in Europe' COM(2012) 529 final.


rently being drawn up by ENISA.284 Furthermore, the Commission announced that it would bring together a comprehensive rule book of the different (self-)regulatory regimes that apply to cloud technology, and that it would monitor the compliance of cloud service providers with existing horizontal regimes like the General Data Protection Regulation.285
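The functional equivalence between locally installed software and software accessed as a service, which is a recurring theme in this study, can be illustrated with a minimal sketch (purely illustrative; the image-resizing task, the local library and the service endpoint are arbitrary assumptions made for the purpose of the example):

import io
import requests          # HTTP client library
from PIL import Image    # local image processing library

def resize_locally(image_bytes, width, height):
    # Locally installed software: the processing runs on the user's own device.
    img = Image.open(io.BytesIO(image_bytes)).resize((width, height))
    out = io.BytesIO()
    img.save(out, format="PNG")
    return out.getvalue()

def resize_as_a_service(image_bytes, width, height):
    # Software-as-a-Service: the same processing is outsourced to a remote server
    # operated by the provider; the endpoint below is a made-up example.
    response = requests.post(
        "https://cloud-provider.example/v1/resize",
        params={"width": width, "height": height},
        data=image_bytes,
        timeout=30,
    )
    response.raise_for_status()
    return response.content

From the user's perspective, both calls deliver the same result; only the place where the code is executed – and hence who controls updates, security and availability – differs.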

5.5 Summary of findings and need for action
5.5.1 Best practices in the acquis and need for adaptation
A survey of the existing and imminent legal frameworks on safety and liability with regard to software and software-enabled products and services has revealed that different legal instruments within the acquis are adapted to software to a very different extent, ranging from very modern and innovative approaches that reflect cutting-edge global thinking to wholly outdated approaches hardly able to cope with the specific challenges of digital technologies. Broadly speaking, the 2019 DCSD and SGD and the 2017 MDR score best, and the 1985 PLD lowest, when it comes to fitness for dealing with software-related aspects, which is a plausible result in the light of more than three decades of time difference. With regard to more or less each of the pivotal points that need to be addressed in the context of safety and liability, a best practice model within the acquis seems to exist, and it would be advisable to collate these best practices and implement them in a coherent and harmonised way across the whole acquis.

5.5.1.1 A technologically neutral notion of software
When it comes to striving for a notion of software that is as technologically neutral as possible and that complies with the principle of functional equivalence while being emancipated from the specific context of other areas of the law (notably copyright law), the DCSD may serve as a best practice example. Even the MDR (which otherwise seems to be the best practice example in many respects) still adheres to a notion of software that focuses on a set of instructions, arguably excluding 'mere data'. However, instructions and 'mere data' may, to a certain extent, be functionally equivalent, and there is no reason to exclude from safety

284 See accessed 6 October 2020.
285 Commission, 'A European strategy for data' COM/2020/66 final.


and liability requirements, e.g. the supplier of navigation data to be fed into a navigation system. While the DCSD does not refer to the term ‘software’, future safety and liability legislation is well advised to choose the much wider scope of the DCSD as a starting point. This automatically implies that it is immaterial whether software is downloaded onto the user’s device or used online within an SaaS scheme.

5.5.1.2 Accessory software elements
While all existing legal instruments on safety and liability would include tangible items with embedded software, and risks created by that embedded software (to the extent risks are covered by the relevant instrument), it is already unclear to what extent, e.g., the RED, the MD and the STD cover accessory software that is not embedded, such as a control app to be installed separately on the user's mobile, or accessory analytics software to be installed on the user's PC (e.g. for processing image data collected by a drone, or analysing fitness data collected by a fitness tracker). Only two legal instruments have a fairly developed concept of such accessory software: the DCSD (combined with the SGD) and the MDR. These two definitions are not contradictory and could be merged, but the concept of 'digital element' in the DCSD and SGD is better suited to drawing a clear line between accessory software and standalone software, so it should be preferred (but of course the contract-specific elements of the definition must be modified).

5.5.1.3 Standalone software as a product
As to the role of standalone software, it is both the DCSD and the MDR that fully recognise the functional equivalence of hardware and software, whereas instruments such as the RED, the MD or the STD would definitely not apply, and the application of the GPSD, and even more so the PLD, is not sufficiently clear and therefore contested. It is highly advisable to recognise this functional equivalence and to acknowledge that standalone software can be a product in its own right. As the DCSD and the MDR rightly stress, it should be immaterial whether software is downloaded or accessed within an SaaS scheme, again because of the principle of functional equivalence. Likewise, in line with the GPSD, other safety-relevant legislation, and the PLD, it should be immaterial whether the software is off-the-shelf or customised. As we do not differentiate, for safety and liability purposes, whether radio equipment, machinery or toys (or other tangible items) are mass products or tailored to the user's needs, we should not differentiate with regard to software either. It is only where the user is really the producer or co-producer, i.e. where the user, e.g.,


uses the services of a self-employed software developer for developing the software according to the user's needs and directions, that product safety and liability law should not be applied (but this would be the same for tangible items).

5.5.1.4 Add-on software
Whether or not standalone software qualifies as accessory software or a software element of another product (which is important for the safety obligations and the liability of the producer of that product), the question arises of who is responsible where software, when uploaded later onto hardware, causes that hardware to become unsafe (the same situation arises where software is combined with existing software). While the GPSD and more or less all the sectoral instruments, explicitly or implicitly, state that the producer who is responsible for the hardware must ensure that the hardware remains safe when combined with other elements it will foreseeably be used together with (which should include software), these instruments are not very clear as to whether the producer of the hardware must actually prevent the upload of other than safe software by technical means or may restrict itself to recommendations and warnings. In the RED context, there is currently a lively debate as to whether the Commission should adopt a delegated act. While this would clearly be in the interest of the safety of users and the public at large, the question has a strong competition law aspect and potentially infringes users' fundamental liberties, which is why a strictly risk-based approach should be taken and the measure limited to particular high-risk devices with a potential for causing harm to third parties (and not just to the user who installed the software despite warnings), e.g. drones. What is still more difficult is the question of who should have which responsibilities with regard to the resulting combination of hardware and software, in particular if the software changes the performance and functionalities of the hardware in any significant way. The view taken on the RED is that the provider of the software should in this case become the producer of the hardware with the changed features, i.e. has to make a conformity assessment, assume liability, etc. Arguably, this can only be true if that use of the software was specifically intended by the provider of the software. In generalising this notion, which has great persuasive force, a risk-based approach is again of the essence, as otherwise the development of independent software components, in particular by small developers, might become too burdensome, which would impede competition, innovation and growth.


5.5.1.5 Risks and types of harm covered
There are very surprising differences between the various legal instruments as regards the types of risk or harm covered. Most of the safety instruments, i.e. the GPSD and most of the sectoral instruments, are reduced to harm caused to the 'safety and health' of persons, which arguably comprises death and personal injury, i.e. only some physical risks. In the case of the GPSD, even this is only the case where the product at stake would foreseeably be used by consumers. Compared with this, the RED is much broader, mentioning, alongside the health and safety of persons, also the health of domestic animals and the protection of property. In addition, essential safety requirements comprise issues such as electromagnetic compatibility, protection from fraud, data protection, protection of persons with disabilities, cross-border mobility, integrity of networks, etc., many of which can be subsumed under economic risks or social risks, and some of which are collective risks. The PLD provides liability for death, personal injury, and damage to property, unless the latter is predominantly used in a business, which is not the same concept as that of consumer products under the GPSD. The approach under the RED seems to be the preferable one. The health and safety of persons, the health of domestic animals and the protection of property should be the baseline of protected interests. Other protected interests, in particular protection against economic and social risks, should be phrased in a more general way as principles or 'essential requirements', with the possibility to adopt delegated acts or to develop harmonised standards.

5.5.1.6 Post-market surveillance and product monitoring obligations
The safety instruments more or less unanimously provide for extensive post-market surveillance obligations, i.e. a producer must monitor the safety of the product throughout its lifecycle, also after it has been put into circulation. Where a risk becomes apparent, the producer needs to take appropriate action, which – in the case of software – should primarily include over-the-air software updates. This clear and important policy decision should also be reflected in the PLD: it is not clear why, where there is a defect of the product, a producer violates a statutory duty whose purpose is to prevent harm to users of the product, and the user (or an innocent bystander) suffers harm due to this violation, the producer should not be held liable.

5.5.1.7 Safety and security updates
Updates are very important for software and software-enabled products and services, but they may also negatively affect safety-relevant features. This is why


rules indicating under which circumstances an update triggers a duty to conduct a fresh conformity assessment are needed. The MDR provides some indications as to a division between major software revisions, which trigger an entirely new assessment procedure, and minor software revisions, which are generally associated with bug fixes, usability enhancements or security patches, and which trigger only a very limited need for action on the part of the producer. In principle, this seems to be a helpful rule, which could be applied on a more general scale. However, one must also take into account that the MDR applies to devices that are subject to an AFAP regime and that, with updates becoming a continuous exercise, repeated safety assessment may not be possible or may pose a disproportionate burden. In these cases, assessment (in particular where certification is required) of the system of update generation and provision may be more appropriate.

5.5.1.8 Economic operator within the Union and platform operators
Finally, the MDR and MSR contain the most modern and up-to-date provisions concerning the relationship between different economic operators. In particular, they provide, in line with some other instruments, that there must be a responsible economic operator established within the Union, such as an authorised representative. The authorised representative is then jointly and severally liable with the producer for all obligations with regard to safety and liability and must be equipped with the necessary mandates and financial means. The rules on authorised representatives are crucial in the context of software, as software is often supplied over-the-air from outside the Union, which is why there is no importer that could be held liable. The approach taken by the MDR and MSR in this respect should be extended, in particular to the GPSD and the PLD. The MSR also provides for some extremely useful provisions clarifying the responsibilities of distributors, fulfilment service providers, and information society service providers, who must support efforts to ensure safety and to enforce liability claims. This approach, too, constitutes a best practice within the acquis that is, in principle, suitable for generalisation. However, we must again not forget that the duty to nominate an authorised representative within the Union can be a huge obstacle, in particular for small software developers from around the globe, and that it is next to impossible to enforce this requirement other than by blocking online access for parties from within the Union to the relevant websites, which would be incompatible with fundamental rights and freedoms and would be a totally disproportionate measure. Thus, while the duty to nominate an authorised representative within the Union is essential for effective enforcement of safety and liability rules, this duty must be imposed in a strictly risk-based manner.


5.5.2 The special case of AI
5.5.2.1 Specific risks posed by AI
Not all systems that are labelled AI act with a similar degree of autonomy. The risk profile of AI systems depends on the extent to which they can make decisions without direct human control. To better visualise these differences, the automotive sector, for example, developed a standard that distinguishes between six levels of driving automation, from Level 0 (no automation) to Level 5 (full vehicle autonomy).286 Harmonised standards that categorise AI according to the level of autonomy allow safety requirements to be targeted according to the degree of opacity and automation. However, not only the degree of autonomy but also whether machine learning is used only at the development stage, or whether the AI continues to learn in the field, is of particular importance from a safety perspective. That an AI may change its behaviour and function over time is often pointed out as one of its unique features. However, AI systems today are usually trained during the development phase and do not further adapt their behaviour once they are put on the market. While the AI is deployed, developers are working on changes in the background, which are then supplied in the form of updates.287 For example, the AI of a fruit picking robot is trained to detect ripe apples by showing it different pictures of ripe fruit. When the deployed AI does not recognise a certain type of apple, the result will not change over time unless the AI is updated or allows the user to change its behaviour by giving the robot some additional input. Where AI is not learning in the field, much of what can be said about software in general also holds true for the more specific case of AI. Since the AI does not change its function while deployed, pre-market testing allows the manufacturer to assess the AI's risk before its rollout, taking into account the intended use as well as potential misuse.288 As with traditional software, if the AI is provided with updates that change the product's original performance, purpose or type, the person who carries out the significant changes has to ensure compliance with the

286 See SAE Standard J3016_201806 available at accessed 10 October 2020.
287 Rob Ashmore, Radu Calinescu and Colin Paterson, 'Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges' [2019] arXiv:1905.04223 available at accessed 10 October 2020.
288 Commission, 'Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics' COM(2020) 64 final, 6.


applicable essential requirements.289 In the Commission's Safety and Liability Report, it is suggested that, in addition to the risk assessment that is performed before placing a product on the market, a new risk assessment procedure could be put in place where the product is subject to important changes during its lifetime that were not foreseen by the manufacturer during the initial risk assessment.290 While this suggestion is to be welcomed, it also has to be pointed out that the need for such an additional risk assessment is not mandated by the AI's autonomy but by the fact that updates and upgrades may significantly change the function of software after it has been put on the market. Therefore, if such an additional risk assessment is introduced, it should not be limited to AI systems but should include software in general. The situation is much more complex where the AI adapts its function and behaviour after deployment on the basis of machine learning (a minimal sketch illustrating this distinction is given at the end of this subsection). First of all, it becomes difficult to assess the risk of an AI system before its deployment because its risk profile may significantly change over time. Since changes are gradual and are not initiated by a human decision, it also seems challenging to determine the point at which the AI's original performance has changed to such an extent that a reassessment of compliance with the essential requirements is necessary. On a more general level, evolving AI intensifies the issue of human oversight. While the possibility of human intervention at every level of decision making is neither desirable nor achievable, humans should at least be able to oversee the activities of the AI and decide when and how the AI system is used in a particular situation.291 Laying down essential requirements for certain sectors/activities that refer to a minimum standard of human oversight could help to detect and rectify incorrect decisions taken by AI systems that pose significant risks to individuals292 and may limit the use of evolving AI to low-risk activities (such as writing tools). As is rightly pointed out in the Commission's Report, repeated, prolonged and intensive interactions between humans and AI robots/systems may lead to an increase in mental health risks.293 For example, working a full-time job with

289 Commission Notice, The 'Blue Guide' on the implementation of EU products rules 2016, OJ C272/1, 2.1 (p 18).
290 Commission, 'Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics' COM(2020) 64 final, 7.
291 See High-Level Expert Group on Artificial Intelligence, 'Ethics Guidelines for Trustworthy AI' (Commission 2019), 18.
292 High-Level Expert Group on Artificial Intelligence, 'Ethics Guidelines for Trustworthy AI' (Commission 2019), 42.
293 Commission, 'Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics' COM(2020) 64 final, 7.


no human but only machine interaction, or receiving geriatric care solely from an AI-equipped robot, may take a serious toll on the mental well-being of humans. It is widely accepted that adverse psychological effects that amount to a diagnosed illness according to WHO criteria constitute a physical risk and are thus covered by existing safety legislation. However, due to this increase in psychological health risks, legislative frameworks should explicitly cover psychological harm that amounts to a recognised state of illness.
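To make the distinction between AI that is trained only during development and AI that continues to learn in the field more tangible, the following sketch may help (a simplified illustration using the scikit-learn library; the features, labels and model choice are arbitrary assumptions made for the purpose of the example):

import numpy as np
from sklearn.linear_model import SGDClassifier

# Development phase: the model is trained on curated data before being placed on the market.
X_train = np.random.rand(1000, 4)            # e.g. colour/shape features of fruit images (made up)
y_train = (X_train[:, 0] > 0.5).astype(int)  # e.g. 'ripe' (1) vs 'not ripe' (0) labels (made up)
model = SGDClassifier()
model.fit(X_train, y_train)

def predict_frozen(features):
    # Pattern 1: frozen model. Behaviour in the field only changes when the producer
    # ships an update (a retrained model), which can be risk-assessed beforehand.
    return model.predict(features)

def predict_and_learn(features, observed_label):
    # Pattern 2: in-field learning. The deployed system keeps adjusting its parameters
    # on data encountered after being placed on the market, so its behaviour (and risk
    # profile) drifts gradually, without a discrete update event that could be assessed.
    prediction = model.predict(features)
    model.partial_fit(features, [observed_label])  # incremental update in the field
    return prediction

The legal significance of the difference lies in the second pattern: there is no single point in time at which a 'new' product is placed on the market, which is precisely what makes a conventional, one-off conformity assessment difficult.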

5.5.2.2 AI liability
Where the operation of an AI system causes damage, the question arises whether and how the victim can obtain compensation under existing liability regimes. The autonomy and opacity of AI systems may render it difficult to apply existing regimes that are common to the Member States' legal systems. While there are significant differences between the various jurisdictions, national tort laws generally provide for fault-based liability with a broad scope of application, which is accompanied by more targeted regimes that alleviate the victim's burden of proving fault, or that do not presuppose any fault at all. Where damage is caused by a defective product, the harmonised rules of the Product Liability Directive (see 5.3) apply. The application of strict liability to damage caused by an AI may not in itself create specific problems, but liability may be placed on an actor who is not (solely) in control of the risk, or who is not the cheapest cost avoider. For example, with the emergence of self-driving cars, most accidents will no longer be caused by human error but by malfunctioning technology. Hence, it may not be appropriate to hold the driver or holder of the car accountable (as currently foreseen under most national legal systems) if another person (the so-called backend operator) controls the features of the AI and monitors its functioning.294 Under fault-based liability regimes, it may be difficult to identify any human wrongdoing if the damage was caused by an AI system that makes autonomous decisions within its set limits. The user or operator does not influence the decisions made by the AI system and can thus not be considered to be at fault, as long as the right AI system was deployed for the intended purpose.295 It may also not be possible to establish fault of the AI's developer because, despite diligent training and testing, the AI will encounter unique situations in which it will make decisions autonomously; it will be difficult to argue that developers, who have set

294 See Expert Group on Liability and New Technologies – New Technologies Formation, 'Liability for AI and other digital technologies' (Commission 2019), 41.
295 Ibid 44.


reasonable boundaries, and ensured sufficient training and testing with appropriate datasets, have acted negligently. Where an AI acts entirely unforeseeably, national tort law systems may exclude liability under doctrines of adequacy or foreseeability (see 3.1.5). And even where some wrongdoing in the development process can be established, the victim will still need to prove the causal link between the damage and the wrongdoing. Due to the difficulty of explaining the decision making of complex algorithmic systems,296 the task of proving fault and the causal link may – if at all – only be accomplished with the aid of technical experts and with the cooperation of the defendant in order to access the algorithm and the training data. The unsatisfying result that damage caused by a malfunctioning product may not be compensated due to the difficulties of establishing fault in a complex production process is, for traditional products, mitigated by the rules on product liability. However, the provisions of the current Product Liability Directive are not well adapted to deal with software in general and raise particular difficulties when it comes to AI (see in detail 5.3). Another challenge that AI poses to existing liability regimes is that human auxiliaries might be replaced with AI systems. For example, a hospital uses an AI-driven surgery robot instead of a doctor to perform specific procedures. Under certain requirements, depending very much on the applicable legal system,297 principals can be held liable for damage caused by their auxiliary under the concept of vicarious liability. The underlying idea is that the principal benefits from outsourcing their duties and should thus be held accountable for the damage caused by the auxiliary.298 Where the human auxiliary is replaced by an AI system, legal systems should ensure that the principal is liable for damage caused by the AI to the same extent as for the acts or omissions of a human auxiliary. In several jurisdictions, vicarious liability requires some kind of misconduct of the auxiliary. While the concept of fault per se is not suitable for AI systems, the benchmark accepted for human auxiliaries could be used to determine the performance expected of an AI system. However, this approach reaches its limits once the AI significantly outperforms human auxiliaries. Reference should

296 So-called 'opacity' or 'black box effect'.
297 See e.g. Helmut Koziol, Basic Questions of Tort Law from a Germanic Perspective (Jan Sramek Verlag 2012).
298 Suzanne Galand-Carval, 'Comparative Report on Liability for Damage Caused by Others', in Jaap Spier (ed), Unification of Tort Law: Liability for Damage Caused by Others (Kluwer Law International 2003), 290.


then be made to the principal's duty to choose a system appropriate for the intended task and its inherent risks.299 The European Commission is not the only institution that has been exploring ways to regulate AI liability; just recently, the European Parliament passed a resolution that includes a full 'Proposal for a Regulation of the European Parliament and of the Council on liability for the operation of Artificial Intelligence-systems'.300 The Parliament's initiative should, in principle, be applauded, especially because some of the proposed provisions demonstrate an innovative approach to regulating AI and are also in line with the recommendations of this Study. For example, the Proposal uses a rather broad and technologically neutral definition of AI, which is not limited to machine learning. 'AI system' is defined as 'a system that is either software-based or embedded in hardware devices, and that displays behaviour simulating intelligence by, inter alia, collecting and processing data, analysing and interpreting its environment, and by taking action, with some degree of autonomy, to achieve specific goals.'301 The Proposal also distinguishes between the 'frontend' and the 'backend operator' of AI systems, a concept that was developed by the author of this Study during the work on the Expert Group Report of the New Technologies Formation. However, several other provisions of the Proposal raise red flags. The cornerstone of the EP Proposal for a Regulation on AI liability is a strict liability regime for 'high-risk' applications coupled with an insurance obligation.302 What counts as 'high-risk' follows from an enumerative list in an Annex, which may be updated at regular intervals by delegated acts of the Commission.303 However, contrary to what is recommended in this Study, the strict liability not only covers physical risks but also encompasses pure economic and social risks if they lead to verifiable economic loss. Subjecting the latter two to a strict liability regime is hardly appropriate unless further conditions are added, such as non-compliance with mandatory legal standards or some defect or malperformance, which would, however, mean that the liability regime is no longer strict in the proper sense. If immaterial harm (or the economic consequences resulting from it, such as loss of earnings due to stress and anxiety that do not qualify as a recognised illness) is compensated through a strict liability regime whose only threshold is causation, the situa-

299 Expert Group on Liability and New Technologies – New Technologies Formation, 'Liability for AI and other digital technologies' (Commission 2019), 46.
300 European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)).
301 Ibid Article 3(a).
302 Ibid Article 4.
303 Ibid Recommendation to the Commission no 19.


tions where compensation is due are potentially endless and difficult to cover by way of insurance. This is so because there is no general duty not to cause significant immaterial harm of any kind to others, unless it is caused by way of non-compliant conduct (such as by infringing the law or by intentionally acting in a way that is incompatible with public policy). Where, for example, AI used for human resources management (assuming such AI were qualified as a 'high-risk' application) leads to a recommendation not to employ a particular candidate, and that candidate therefore suffers economic loss by not receiving the job offer, full compensation under the Proposal for a Regulation would be due even if the recommendation was absolutely well-founded and there was no discrimination or other objectionable element involved. For all AI applications that are not considered 'high-risk', a fault-based liability regime with a reversed burden of proof is introduced. To rebut the presumption of fault, the operator can only rely on the following two grounds: '(a) the AI-system was activated without his or her knowledge while all reasonable and necessary measures to avoid such activation outside of the operator's control were taken, or (b) due diligence was observed by performing all the following actions: selecting a suitable AI-system for the right task and skills, putting the AI-system duly into operation, monitoring the activities and maintaining the operational reliability by regularly installing all available updates'.304 This is problematic for a number of reasons. First of all, liability under Article 8 is unreasonably strict, as it seems that the operator must, in order to escape liability, demonstrate due diligence in all aspects mentioned, even if it is clear that the lack of an update cannot have caused the damage. Possibly, this result can be avoided by way of common-sense interpretation. However, it is still the case that, in the absence of any restriction to professional operators, even consumers would become liable for any kind of AI device, from a smart lawnmower to a smart kitchen stove. This would mean burdening consumers with obligations to ensure that updates are properly installed, irrespective of their concrete digital skills, and possibly confronting them with liability risks they would hardly ever have had to bear under national legal systems.

5.5.3 The special case of distributed ledger technologies (DLT)
The key characteristic of distributed ledger technology (DLT) is that information is not stored on a single server but distributed among a large number of com-

304 Ibid Article 8.


puters (so-called nodes). This distribution of identical datasets achieves protection against subsequent manipulation or loss, and independence from central authorities. Blockchain, which gained much prominence due to cryptocurrencies like Bitcoin or Ethereum, is one example of the application of DLT. On a blockchain, information (such as the date, time and value of a transaction) is compiled in a 'block' and assigned a cryptographic code (a so-called hash) and a timestamp. Each new block is then added to the existing blocks in the network and contains the hash of the previous one. If one of the stored blocks were to be tampered with, this would change the hash value of all the following blocks, and the manipulation would immediately be discovered (a minimal sketch of such a hash chain is given below). The fact that blocks can neither be deleted nor changed in their chronological order ensures that the digital representation of value cannot be spent more than once in the network (the so-called double spending problem), without the need for a central trusted party that verifies the transactions. DLTs require a consensus mechanism under which datasets of new events are authorised and adopted by all computers in the network. In permissioned systems, usually either a single node or a consortium of pre-defined servers is tasked with the authorisation of new blocks and also decides who is allowed to participate. In permissionless blockchains, where in principle everyone can participate, consensus is determined by mechanisms like proof of stake or proof of work.305 The latter is used by Bitcoin: nodes (so-called miners) that solve complex tasks with considerable computing power (and thus energy) verify new blocks, and the miners are rewarded with Bitcoins if the block is added to the correct version of the chain. That a competing chain with a different history is built by malicious nodes is rendered unlikely because the next block will probably be verified by a different miner, who would not receive a reward for investing the required energy if the block is not added to the correct and 'longest' version of the chain.306 However, despite their ascribed characteristics of tamper-resilience and immutability,307 blockchains may give rise to unfair distribution of data, data losses or manipulation. Especially the consensus mechanisms of blockchains

305 Dirk Zetzsche, Ross Buckley and Douglas Arner, 'The Distributed Liability of Distributed Ledgers: Legal Risks of Blockchain' [2017] SSRN Electronic Journal accessed 30 September 2020.
306 Konstantinos Christidis and Michael Devetsikiotis, 'Blockchains and Smart Contracts for the Internet of Things' (2016) 4 IEEE Access 2292, 2294.
307 ITU-T Focus Group on Application of Distributed Ledger Technology, 'Technical Report FG DLT D4.1 Distributed ledger technology regulatory framework' (2019), 5.


constitute a major point of attack (e.g. a 51 % attack on the Bitcoin blockchain308). Furthermore, while DLT provides for secure data storage, it does not solve the problem of inaccurate or false data being stored on the network. Thus, cyberattacks might not focus on the DLT itself but rather on the point of input.309 Since blockchains have gained particular traction in the financial sector, the risks associated with blockchains are often economic risks. One of the best-known examples is probably the hack of the DAO on the Ethereum blockchain, where $53 million of crowd-funded money was stolen. Most of the stolen crypto assets could eventually be recovered because the Ethereum community decided to overwrite the transaction history, proving that the immutability of blockchains is not absolute.310 However, as pointed out under 3.1.2, pure economic losses, i.e. losses that are not the realisation of physical damage, are hardly compensated under national tort law and are not covered by the PLD. In permissioned systems, a contract might exist between the operator of the distributed ledger and the participants, which would mitigate the risk that victims remain uncompensated for financial loss suffered due to flawed code. However, where no contractual relationship can be established, it is likely that the developers/operators of the DLT will not be liable for the pure economic damage caused by an insufficient consensus mechanism or a bug in the code of a blockchain-based smart contract.311 In conclusion, it can be said that, while some of the possible cyberattacks on blockchains are unique in their nature, many other vulnerabilities are no different from those found in traditional software.312 Thus, general considerations, like cybersecurity certifications for software, can also be applied to DLT applications.

308 Sarwar Sayeed and Hector Marco-Gisbert, 'Assessing Blockchain Consensus and Security Mechanisms against the 51 % Attack' (2019) 9 Applied Sciences 1788.
309 See, for an extensive elaboration on the vulnerabilities of DLT, Dirk Zetzsche, Ross Buckley and Douglas Arner, 'The Distributed Liability of Distributed Ledgers: Legal Risks of Blockchain' [2017] SSRN Electronic Journal accessed 30 September 2020, 13 et seq.
310 Samuel Falkon, 'The Story of the DAO — Its History and Consequences' (Medium, 24 December 2017) accessed 30 September 2020; Osman Gazi Güçlütürk, 'The DAO Hack Explained: Unfortunate Take-off of Smart Contracts' (Medium, 1 August 2018) accessed 29 September 2020.
311 Dirk Zetzsche, Ross Buckley and Douglas Arner, 'The Distributed Liability of Distributed Ledgers: Legal Risks of Blockchain' [2017] SSRN Electronic Journal accessed 30 September 2020, 33 et seq.
312 European Union Blockchain Observatory and Forum, 'Blockchain And Cybersecurity' (Commission 2020), 13.
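The tamper-evidence mechanism described above can be illustrated with a minimal sketch (a deliberately simplified model of a hash-linked chain with a toy proof-of-work; real DLT implementations are considerably more involved):

import hashlib, json, time

DIFFICULTY = 3  # toy proof-of-work: the block hash must start with this many hexadecimal zeros

def block_hash(block):
    # Hash the block contents, including the previous block's hash.
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def mine_block(data, previous_hash):
    # Try nonces until the block hash satisfies the (toy) difficulty target.
    block = {"timestamp": time.time(), "data": data,
             "previous_hash": previous_hash, "nonce": 0}
    while not block_hash(block).startswith("0" * DIFFICULTY):
        block["nonce"] += 1
    h = block_hash(block)  # hash computed over the block before the 'hash' field is added
    block["hash"] = h
    return block

def chain_is_valid(chain):
    # Recompute every hash; any manipulation of an earlier block breaks all later links.
    for i, block in enumerate(chain):
        recomputed = block_hash({k: v for k, v in block.items() if k != "hash"})
        if block["hash"] != recomputed:
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [mine_block("genesis", previous_hash="0" * 64)]
chain.append(mine_block({"from": "A", "to": "B", "value": 5}, chain[-1]["hash"]))
chain.append(mine_block({"from": "B", "to": "C", "value": 2}, chain[-1]["hash"]))

print(chain_is_valid(chain))        # True
chain[1]["data"]["value"] = 500     # attempt to tamper with a stored transaction
print(chain_is_valid(chain))        # False – the manipulation is immediately detectable

Any change to a stored block alters its hash and thereby breaks the link to all subsequent blocks, which is why manipulation is immediately detectable – although, as noted above, this does not guarantee that the data fed into the ledger was accurate in the first place.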


Based on the Cybersecurity Act,313 ENISA could be tasked with drawing up a cybersecurity certification scheme that addresses the specific cybersecurity risks of DLT. A precedent for DLT-targeted measures in the acquis is the revised Anti-Money Laundering Directive,314 which defines 'virtual currencies' and expands anti-money laundering legislation to wallet providers and cryptocurrency exchange providers.315 From a liability perspective, the distinctive feature of DLT compared to other software is that it aggregates the risk of pure economic loss due to the important role it plays in financial transactions. While there are several highly relevant issues related to data protection on blockchains, from a liability perspective the distributed nature of blockchains raises the question of who is the addressee of data protection claims under the GDPR. It may also be impossible or unreasonably difficult to identify the producer of software running on a distributed ledger, or any other economic operator who might be held accountable for any kind of unlawful conduct.

6. Key options for action at EU level
The analysis of the various existing legal instruments that are relevant for safety and liability with regard to software leads to a number of different options for the European legislator. Some of these options relate to software in general and will be explained and evaluated under 6.1. Other options are specifically related to software that qualifies as AI and will be explained and evaluated under 6.2 with regard to trustworthy AI, and under 6.3 with regard to AI liability.

313 Article 48 Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act), OJ L 2019/151, 15.
314 Directive (EU) 2018/843 of the European Parliament and of the Council of 30 May 2018 amending Directive (EU) 2015/849 on the prevention of the use of the financial system for the purposes of money laundering or terrorist financing, and amending Directives 2009/138/EC and 2013/36/EU, OJ L 2018/156, 43.
315 See Philipp Hacker and others, 'Regulating Blockchain: Techno-Social and Legal Challenges – An Introduction' in Hacker and others, Regulating Blockchain (Oxford University Press 2019).


6.1 Options for action with regard to software in general

The authors mainly see four different options for the EU legislator to move forward with regard to software in general.

Option 0: Baseline scenario and no further action

Naturally, the first option would be to not take any kind of action at all but to trust that national courts and the Court of Justice will resolve remaining uncertainties in the interpretation of the acquis and that, should any major adverse events triggered by unsafe software occur, and not be captured by existing EU legislation, national law (e.g. tort law) will fill the gaps. This Option 0 would, at least in the short term, save resources at the level of the EU institutions and national lawmakers and avoid difficult discussions with powerful parts of the industry.

It would, however, mean that important parts of the acquis are outdated and no longer fully functional because they fail to recognise even the fact that modern tangible products come with embedded software, other accessory software, and network connectivity, and that software updates regularly alter safety features of products, or that products that had initially been safe become unsafe for want of an update. This would result in uncertainty of the law and in national laws having to fill the gaps, potentially leading to a patchwork of different national solutions that would further aggravate the existing patchwork of, e.g., harmonised product liability and parallel fault liability under national tort laws. This, in turn, is likely to be an obstacle to cross-border trade within the Union, increase the cost of compliance and litigation, and reduce consumer trust in the safety of products within the Union. With Option 0, the EU might also lose its leading role as a global pacesetter in terms of adapting regulatory concepts to the digital age. Option 0 is therefore definitely not recommended.

Option 1: Adapt the existing instruments to the characteristics of products with software elements

Option 1 would mean adapting the existing horizontal and sectoral instruments, within their original scope, to the fact that modern tangible products come with embedded software, other accessory software, and network connectivity, and that software updates regularly alter safety features of products, or that products that had initially been safe become unsafe for want of an update. Due to its high level of generality, hardly any revisions would have to be made to the GPSD. However, legislative clarifications, or at least a guidance document, with regard to both the GPSD and MSR would be helpful as to the notion of ‘product with software element’ (trying to achieve consistency with the DCSD and SGD), the extent of post-market surveillance duties, cases where ‘appropriate action’ may involve mandatory updates, and the conditions under which software updates trigger a fresh conformity assessment. There would have to be a minor change in the MD, ensuring that safety components may also be software components. The most far-reaching and most controversial changes would have to be made to the PLD, which would have to be fully aligned with safety regulations insofar as failure to comply with post-market surveillance duties and duties to take ‘appropriate action’ would have to lead to liability.

Option 1 would be clearly preferable to Option 0. However, all that could be achieved by way of this ‘minimally invasive update’ of the acquis is that tangible items that come with embedded software or that are initially intended to work together with accessory software (such as a control app of a fitness tracker to be installed on the user’s mobile), i.e. that are the counterpart of ‘goods with digital elements’ within the meaning of the DCSD and SGD, are dealt with in an appropriate way (to the extent tangible items are covered at all, cf. e.g. the limitation of the scope of the GPSD to consumer products).

By way of contrast, Option 1 would not even capture the increasing number of cases where accessory software is produced independently of a tangible product but in order to be used as an add-on together with that product (such as add-on software designed for analysing data collected by fitness trackers from various producers), creating a combination with new safety-relevant features, or potentially even altering safety-relevant features of the original tangible product. While it would theoretically be possible to say, by way of a generous interpretation of existing safety legislation, that the producer of the tangible product must monitor the market and take action against third parties offering unsafe add-ons, or that the supplier of add-ons takes over all responsibilities for the combined product, this would hardly work well in practice, not least for competition law reasons and because the producer of the add-on software might not be established in the Union. Option 1 would definitely not capture standalone software that is not an accessory to tangible products, which is highly unsatisfactory, considering that standalone software may cause considerable damage, e.g. to the user’s digital environment.


Option 2: Option 1 plus recognise standalone software as a product in its own right

The remaining shortcomings of Option 1 might be overcome with Option 2, which goes one step further and officially recognises, in line with the approach taken by the MDR, that standalone software can be a product in its own right. It would be important in this context to choose a technologically neutral definition of ‘software’ that applies, in particular, independently of whether software is downloaded or operated within a SaaS scheme. Ideally, it would be extended to all digital content and digital services within the meaning of the DCSD. This would affect all horizontal instruments, such as the GPSD and PLD, and all sectoral instruments to the extent that standalone software can be functionally equivalent to the items currently covered. Concerning the latter, each sectoral instrument would have to be dealt with individually. For instance, software can be functionally equivalent to tangible toys (e.g. video games), but may be functionally equivalent to machinery only in exceptional cases (e.g. where a robot is marketed as ‘empty hardware’ whose functionalities are to be fully defined by the software).

Concerning the GPSD and PLD, a mere clarification of the scope of application, stating that ‘product’ includes ‘software’ (in addition to the changes already described under Option 1, and ideally abolishing the limitation of the scope of the GPSD to consumer products), would not be sufficient. Neither of the two instruments requires the existence of an authorised representative within the Union but relies on the importer as a fall-back option, which would have to be reconsidered because software can be supplied over-the-air from outside the Union. While it would be disproportionate to require an economic operator within the Union for all software, this should at least be a requirement for ‘high-risk’ software (to be defined, e.g., by delegated act and based on an assessment of technological developments and market trends) and/or big producers. The respective responsibilities of the various economic operators involved would also require clarification, in particular as concerns cases where software interoperates with hardware or other software. The GPSD and the MSR should be aligned (while still differentiating between specifically regulated products and other products), ensuring consistency across the acquis and ensuring also that the general regime profits from more modern terminology, concepts and regulatory elements introduced by the MSR.

In order to be fully effective and to ensure that the new regime covers software-specific risks, Option 2 should include a clarification that, for all safety and liability regimes, ‘health’ includes psychological harm amounting to a recognised state of illness (e.g. depression) and that damage to ‘property’ includes damage to data and other software. There should equally be a clarification that intermediated risks are included where they are rooted in the functionality of the software (e.g. anti-virus software fails to warn the user, which is why the user fails to act and damage occurs), and also that risks indirectly created by security gaps (e.g. a security gap in the smart heating system leads to a burglary and property damage) are covered.

While leading to significant changes for the software industry worldwide and probably triggering fierce opposition by the industries affected, Option 2 is much better suited than Option 1 to make the acquis fit for the digital age, avoiding the shortcomings of Option 1. It would be necessary, however, to ensure that the measures implemented are proportionate, that, in particular, unnecessary red tape is avoided, and that appropriate exceptions are formulated, notably for the development of community-based software for the public good.

Option 2 would already be a huge step forward. However, it is clear that – even if the suggested clarifications concerning the risks covered were adopted – risks other than individual ‘physical’ risks would not be addressed. What would remain excluded are, for instance, risks such as damage to networks, facilitation of fraud or privacy infringements. While some of these risks are, via the ‘essential requirements’ and the possibility to adopt delegated Commission acts, covered by the RED, this would not solve the problem because software as such is not functionally equivalent to radio equipment (and will thus not be considered radio equipment in its own right). While it would theoretically be possible by way of a delegated Commission act to require producers of radio equipment to enable only safe software to be loaded onto their equipment, it would be plainly unrealistic (and arguably incompatible with basic requirements of free competition as well as with fundamental rights and freedoms) to do so across-the-board for ‘general purpose radio equipment’ such as personal computers, notebooks, tablets or mobile phones. It is, however, mainly software loaded onto such general purpose radio equipment that will be responsible for the bulk of risks created. We also have to consider that not all connected devices fall under the RED. Another drawback of Option 2 would be that it would hardly be possible to deal with many cross-cutting issues that are relevant only in a software context within the framework of the GPSD, and to achieve the risk-based approach that is required for software in order not to jeopardise open software development and many important communities, without totally changing the nature of the GPSD and other legislation.


Option 3: Introducing a new semi-horizontal instrument on software safety (with special parts for connectivity and other specific types of software) instead of merely extending the scope of the GPSD

With Option 3, the European legislator would introduce a semi-horizontal instrument (more or less as a counterpart to the RED) on the safety of software, e.g. as a Software Safety Directive (SSD). There could be a specific part for software that comes with connectivity or is intended to be run on connected hardware, and further specific parts for special types of software (such as software running on DLT; theoretically also for AI, but see specific options discussed under 6.2 and 6.3). The approach would have to be risk-based, extending only very general safety requirements to all software and avoiding any disproportionate red tape that could jeopardise software development, including open software development. This instrument would also deal with cross-cutting risks, such as cybersecurity (until addressed by other acts developed under the Cybersecurity Act), privacy by design, or fraud prevention. It would, in addition to defining essential requirements and other central provisions, have to include a basis for delegated acts and harmonised standards. Its relationship with existing and future sectoral legislation would be one of complementarity, the semi-horizontal instrument dealing with cross-cutting issues such as the delineation between products with software elements, add-on software, software that is intended to or will foreseeably interoperate with other products, the division of responsibilities between the different producers in either case, cybersecurity, privacy by design and by default, the extent of post-market surveillance duties, issues arising in the context of updates, and similar issues. The sectoral instruments would continue to deal with specific aspects that are characteristic of the relevant type of product, such as machinery or toys.

Option 3 would automatically include the steps suggested under Option 2 as far as safety legislation is concerned. With regard to liability, the steps suggested under Options 1 and 2 would remain necessary, i.e. Option 3 would also mean that the scope of the PLD would have to be explicitly extended to standalone software, that there would have to be, in appropriate cases, an authorised representative who is jointly and severally liable with the producer, that liability arises for failure to comply with post-market surveillance duties and duties to take ‘appropriate action’, and that the respective responsibilities of the various economic operators involved are clarified, in particular as concerns cases where software interoperates with hardware or other software.


Evaluation of the options with regard to software in general

As concerns the relationship between the different Options and the authors’ order of preference, much has already been stated in the description of the respective Options. Option 0 is obviously incompatible with the other Options. Option 2 fully includes Option 1 and is the more far-reaching one of the two. Option 3 would partly include and partly replace Option 2 and is clearly the most far-reaching option.

Criterion | Option 0 | Option 1 | Option 2 | Option 3
Is able to address in an adequate manner embedded and accessory software | – | + | + | +
Is able to address in an adequate manner add-on software | – | – | + | +
Is able to address in an adequate manner standalone software | – | – | + | +
Is able to address cross-cutting risks beyond physical risks (e.g. privacy by design) | – | 0 | 0 | +
Is able to address cross-cutting concepts (such as relationship between producers) | – | – | 0 | +
Is able to address software-specific issues of scope (e.g. open software development) | N/A | N/A | 0 | +
Is able to address specific challenges posed by specific types of software (e.g. DLT) | – | – | – | +
Is likely to enhance public trust in technologies | – | 0 | + | +
Is likely to enhance legal certainty and avoid market fragmentation | – | 0 | + | +
Is likely not to increase the cost of compliance (at least for some players) | + | + | 0 | –
Is likely to receive broad support from stakeholders addressed by regulation | + | 0 | – | –
Is likely to be implemented without much delay and/or costs | N/A | + | 0 | –
Is likely to enhance Europe’s role as a global pacesetter in regulation | – | 0 | + | +

In the view of the authors, Option 3 is the preferable one, even though already Option 2 would be a huge step forward. The main reason is that Option 3 is the only option that would both cover standalone software, which is becoming increasingly important for the safety of citizens in Europe, and allow the many cross-cutting software-specific issues (ranging from issues of scope to cross-cutting conceptual issues to cross-cutting risks) to be addressed with the appropriate level of detail, while also allowing the addition of central provisions for specific types of software. The downside of Option 3 as compared with Option 2 obviously is that getting all these cross-cutting and specific issues right, and figuring out the relationship with other instruments, will require more time and resources and might ultimately prove too difficult.

6.2 Options specifically with regard to trustworthy AI

While the options discussed so far have been options with regard to software in general, a number of current debates are more specifically focussed on AI. An important part of these debates concerns how to ensure trustworthy AI.

Option 0: Baseline scenario and no further action

Again, the first Option would be not to take any kind of action at all at EU level but to trust that national legislators and courts manage to grapple with the emerging problems. If this Option were taken, it is to be expected that the national solutions developed with regard to AI would vastly differ, resulting in a patchwork of different solutions and significant market fragmentation. Until solutions are developed, there would be much legal uncertainty. This uncertainty, together with the fragmentation, would probably pose major obstacles to the further development and roll-out of the technology, significantly reducing the attractiveness of Europe as a place for AI research and development. Europe would definitely lose its role as global pacesetter in terms of regulation of emerging technologies. What is possibly worse is that citizens in Europe might fail to build up trust in the new technology and any kind of AI might meet with strong resistance, potentially preventing Europe from being among the frontrunners in technological development.

Option 1: Considering AI in existing horizontal or sectoral legislation

Option 1 would mean that risks posed by AI would, increasingly, be considered when revising existing horizontal or sectoral legislation in particular areas, such as the law on unfair commercial practices (e.g. economically harmful recommendations to consumers), anti-discrimination law (e.g. discrimination by algorithms), competition law (cf. the issue of collusion by algorithms), or media law (cf. manipulation by social bots). With the mass roll-out of AI, this new technology would arguably lose much of the ‘exceptional’ touch it currently has in the public perception. In other words, AI may become the new normal, which is why Option 1 would suggest it should not be dealt with separately but within the relevant framework in which an issue would also be dealt with if it were not AI-related. For instance, personalised pricing based on inferences about a customer’s psychological disposition might be blacklisted in the Annex to the UCPD. Option 1 would therefore mean a ‘fit-for-AI-check’ of the whole legal system.

Option 1 is arguably inevitable at some point because all existing legal frameworks must be fit to cope with the challenges posed by AI. Therefore, having more horizontal frameworks for AI, as suggested by Options 2 and 3, would never be able to replace the modernisation of other legal frameworks. However, the question arises as to whether we need a horizontal instrument addressing the social risks posed by AI as a fall-back option where – for whatever reason – the other existing legal frameworks fail to apply. This is arguably to be answered in the affirmative. For instance, the UCPD is restricted to activities ‘by a trader, directly connected with the promotion, sale or supply of a product to consumers’ and which ‘are likely to materially distort the economic behaviour of consumers’. This may potentially include certain forms of manipulative personalised pricing, but not the manipulation of voting behaviour, nor arguably many addictive designs whose purpose is to absorb the users’ attention for as long as possible. Similarly, existing anti-discrimination law may capture discrimination on grounds, e.g., of gender or ethnic origin, but not in all situations (e.g. not when seeking a loan for a family home), and it does not, and will never, cover discrimination on grounds other than those exhaustively listed in legislation.


Option 2: Including a separate part on AI within a general Software Safety Directive

The fall-back option just mentioned might be provided by a semi-horizontal instrument on software safety (e.g. in the form of a Software Safety Directive, SSD, see above under 6.1). Such a semi-horizontal instrument could feature a special part on software that is designed to run on connected devices, and possibly a part on DLT; it could also feature a specific part on AI safety that would deal with physical risks and a limited number of other risks of a cross-cutting nature (such as privacy by design). It could provide the basis for AI-specific safety standards, e.g. with regard to machine learning.

Without a doubt, it would be possible to integrate almost any kind of AI regulation in a specific part within a broader new instrument on software. This would have the benefit of creating optimal coherence with general provisions on software. However, it would be a very ‘technical’ and ‘low-key’ solution, which would fail to take the opportunity of having a significant, horizontal instrument, more or less as a counterpart to the GDPR, which would be of strong symbolic value and send a strong signal of Europe as providing an ecosystem of trust and taking the role of a global pacesetter of technology regulation. Perhaps even more importantly, the software safety framework would not be the optimal framework for addressing AI-specific social risks, such as discrimination, manipulation or exploitation. This is where AI-specific harmonised rules would be most urgently required.

Option 3: Creating a comprehensive horizontal new regulatory framework on trustworthy AI

Option 3 therefore combines the ‘fit-for-AI-check’ of Option 1 with a comprehensive horizontal regulatory framework on trustworthy AI. In the light of the far-reaching effects which sophisticated algorithmic systems are already having on individuals and society at large, and which are expected to become ever more apparent with the imminent mass roll-out of AI, it is highly advisable for the EU to take rigorous action in the direction of both an ‘ecosystem of excellence’ and an ‘ecosystem of trust’ as set out in the White Paper on AI. It seems to be largely undisputed that legislative action is required in that direction, while the details remain to be fixed (such as the definition of what counts as a ‘high-risk’ application or which mix of legislative approaches ultimately to take).

Such a new framework would certainly have to contain general principles or essential requirements that address goals rather than means (such as ‘explainability’ of AI). In addition, a decision would have to be taken whether to focus on concrete requirements (i.e. requirements that address means, such as particular information duties), rights (such as the right to request human intervention) and procedures (such as impact assessment), what has been called a ‘PRRP approach’, or whether to focus rather on the blacklisting of ‘unfair algorithmic practices’. Arguably, what would be preferable is a combination of a PRRP approach, which would be restricted to defined high-risk applications, and the blacklisting of objectionable practices for all applications covered by the instrument.

What is less clear, however, is whether this new framework should in fact also include liability rules, as the debate on liability for AI has, so far, largely been restricted to physical risks (i.e. to the traditional ‘safety’ debate, with a narrow understanding of ‘safety’). As far as a PRRP approach is taken, liability rules could look similar to Article 82 GDPR, i.e. liability would arise for damage caused by infringement of given PRRP standards. As far as a blacklisting approach is taken, liability would arise for damage caused by engaging in blacklisted activities. This sounds quite simple, but the practical problems and implications of attaching liability to the infringement of PRRP standards should not be underestimated. The range of ‘social risks’ posed by AI is rather broad, and so is therefore the range of situations where some kind of non-material or material damage might occur that has possibly been caused by the infringement (e.g. the impact assessment is not conducted properly, and later a person affected by the AI feels marginalised, suffers stress and anxiety, and ultimately loss of earnings). Liability is much more straightforward where it is attached to blacklisted activities, in particular if delegated acts provide concrete lists (e.g. assume that personalised pricing based on inferences about a customer’s psychological disposition is blacklisted – a customer would then probably have a right to rescind the relevant contract or be entitled to a refund where the price was above the market price).

Evaluation of the specific options with regard to trustworthy AI

Option 0 is incompatible with each of the other Options. However, Option 1 is not an alternative to Options 2 and 3, but rather a first (or second) step that can supplement and support the approaches under either Option 2 or Option 3. Options 2 and 3 are two alternative approaches. Theoretically, they could be combined to a certain extent, e.g. traditional physical risks with regard to AI might be covered by the software safety framework, while social risks could be covered by the new horizontal framework on AI as a follow-up to the White Paper; for example, the former could deal with issues such as privacy by design, prevention of fraud, etc., while the horizontal instrument on AI would deal with issues such as discrimination, exploitation, manipulation, or total surveillance. Nevertheless, despite the theoretical possibility of a division of tasks, it is to be expected that the European legislator would, if Option 3 were taken, rather deal with everything within the new horizontal framework than take a split approach.

Criterion | Option 0 | Option 1 | Option 2 | Option 3
Is able to address in an adequate manner physical risks associated with AI | – | + | + | +
Is able to address in an adequate manner social risks associated with AI | – | – | 0 | +
Is able to address in an adequate manner cross-cutting issues (e.g. explainability) | – | – | 0 | +
Is able to address specific challenges of AI in particular sectors | – | + | 0 | 0
Is likely to enhance public trust in technologies | – | 0 | 0 | +
Is likely to enhance legal certainty and avoid market fragmentation | – | 0 | + | +
Is likely to achieve coherence of the law and be technologically neutral | – | + | + | 0
Is likely not to increase the cost of compliance (at least for some players) | + | + | 0 | –
Is likely to receive broad support by stakeholders addressed by regulation | + | 0 | 0 | –
Is likely to be implemented without too much delay and costs | N/A | 0 | 0 | 0
Is likely to enhance Europe’s role as a global pacesetter in regulation | – | 0 | 0 | +

On balance, the authors would strongly recommend combining Option 1 with Option 3, i.e. with a comprehensive horizontal instrument on trustworthy AI, as has been set out in the White Paper. While there would be some benefit in splitting tasks between this instrument and a semi-horizontal instrument on software safety, also for reasons of creating coherence, the authors would rather accept some degree of incoherence than miss the opportunity of creating this new regulatory framework on AI as a strong signal to both citizens and businesses in Europe.

6.3 Options specifically with regard to AI liability (mainly for physical risks)

While AI liability might theoretically likewise be dealt with under the new horizontal instrument on trustworthy AI, the discussion, for a variety of reasons, goes more in the direction of dealing with AI liability for traditional physical risks (and possibly selected further risks) within a different context.

Option 0: Baseline scenario and no further action

Again, the baseline scenario would be to do nothing about AI liability. This would mean that problems (in particular of tracing damage back to human fault or a defect) would have to be resolved at national level by national courts and possibly legislators. It is to be expected that this would lead to lengthy discussions, a patchwork of different legal solutions, and market fragmentation in addition to the already existing patchwork and market fragmentation in the field of liability law.

Option 1: Add rules on vicarious liability with regard to risks created by AI

Independently of which of the following two Options is taken, it might be advisable to have rules on vicarious liability for AI. This is why Option 1 suggests harmonising national rules on vicarious liability with regard to their application to AI. It is advisable to do this by way of a Directive (or a Directive-style provision within a Regulation), as national rules differ to a great extent and any overly detailed provision might disrupt national regimes and create inconsistencies at national level. Vicarious liability is always liability of the frontend operator. A number of details would need to be specified, including what is an appropriate ‘standard of performance’ for machines, in particular from the moment when machines outperform humans. The added value as compared with any of the other Options would become apparent in particular where risks other than physical risks are at stake, such as pure economic risks.


Option 1 relies on the insight that a person should not, when deploying AI in lieu of employing a human auxiliary, be able to escape liability for any malperformance of this AI where the same person would be liable for comparable malperformance on the part of a human auxiliary. For instance, a bank should be liable for a malperformance of any AI calculating its customers’ credit scores to the same extent as the bank would be liable for a malperformance of an employee tasked with calculating credit scores. Given that Option 1 would implement important considerations of fairness and cause only minimal intervention into national laws, it is highly advisable to make use of this Option in any case.

Option 2: Include strict liability for AI in the PLD

While liability under the PLD is often referred to as ‘strict’ liability, it is not ‘strict liability’ in the narrower sense because product liability requires that the victim establishes the existence of a defect, and it is precisely this requirement that is inadequate to grapple with difficult situations posed by AI. However, given that the PLD has to be revised for other reasons anyway (see above under 6.1), it would not be too great a task to include a specific article or possibly even just a paragraph on AI. For AI-specific harm caused by AI-driven products, no defect of the AI should have to be established by the victim and it should be sufficient for the victim to prove that the harm was caused by an incident that might be connected in some way with the AI (e.g. the cleaning robot making a sudden move in the direction of the victim) as contrasted with other incidents (e.g. the victim stumbling over the powered-off cleaning robot).

Option 3: Introduce special strict liability rules for physical risks caused by AI

Option 3 would go a significant step further and introduce special strict liability for (physical) risks related to AI, ideally in a rather broad and technologically neutral sense of the term. Strict liability for AI that is separate from the PLD could mainly lie with two different parties: the frontend operator that takes the economic responsibility for the concrete deployment of the AI (e.g. a consumer using a lawn-mowing robot in the garden, a hospital using AI for triage, or the operator of a fleet of autonomous vehicles), or, at least in the case of connected AI, the backend operator as the provider of updates, cloud services, etc., who continuously defines and controls safety-relevant features and who may or may not be identical with the producer. For some years, there has been a fierce debate over who should be the proper addressee of new strict liability rules, in particular in the context of autonomous driving.

Introducing a separate instrument at EU level on AI liability would be of strong symbolic value, signalling that the EU is abreast of developments and enhancing public trust in the mass rollout of AI. It is also likely to put an end to, or at least to reduce significantly, the existing confusion and anxiety on the part of both citizens and legal experts about what AI means for liability. While further harmonisation of liability law within the EU would clearly be very desirable, it is – leaving aside the aspect of public trust in AI for a moment – not entirely clear, though, why we need harmonised strict liability for physical risks posed by AI more than harmonised strict liability for physical risks posed by motor vehicles in general or for many other dangerous facilities and activities, many of which are already subject to differing strict liability regimes under national law. Physical risks created by an AI-driven product are not very different from physical risks created by a product that is of the same kind but human-driven. In any case, it would be preferable for this instrument to be a Directive rather than a Regulation, as a Directive would leave Member States a chance to adapt their own tort law in a coherent manner instead of having a split regime, e.g. one for traditional and semi-autonomous vehicles and one for autonomous vehicles.

With regard to devices of a type for which the (frontend) operator is already now strictly liable under the majority of legal systems (e.g. motor vehicles, larger drones), the EU legislator should ideally determine the cornerstones of such strict liability, irrespective of whether or not a device is AI-driven. With regard to devices without a functional equivalent in the pre-AI age, or where the risks posed by a functional equivalent were hardly significant enough to make the device a candidate for strict liability, the dynamic regulatory model foreseen by current proposals (suggesting that a list of ‘high-risk’ applications is to be updated at regular intervals) is only practicable where backend operators are the addressees of liability. Under no circumstances should frontend operators who are consumers be addressees of such strict liability, nor of otherwise enhanced liability.

Evaluation of the specific options with regard to AI liability

While, again, Option 0 is incompatible with the other options, Option 1 is compatible with both Option 2 and Option 3 and is minimally invasive in nature. Options 2 and 3 are not strictly mutually exclusive, but it is clear that having strict producer liability (in the proper sense, i.e. without the victim having to establish a defect) for AI would significantly reduce the need for any further strict liability scheme.

Criterion | Option 0 | Option 1 | Option 2 | Option 3
Is able to address in an adequate manner physical risks associated with AI | – | 0 | + | +
Is able to address in an adequate manner social risks associated with AI | – | 0 | – | –
Will provide an appropriate level of compensation for victims | – | 0 | + | +
Will allow harm to be borne by the party with the highest degree of control | – | 0 | 0 | +
Is likely to enhance public trust in technologies | – | + | + | +
Is likely to enhance legal certainty and avoid market fragmentation | – | + | 0 | +
Is likely to achieve coherence of the law and be technologically neutral | N/A | + | + | –
Is likely not to increase the cost of compliance (at least for some players) | + | 0 | – | –
Is likely to receive broad support by stakeholders addressed by regulation | + | 0 | – | –
Is likely to be implemented without too much delay and cost | N/A | + | + | –
Is likely to enhance Europe’s role as a global pacesetter in regulation | – | 0 | 0 | +

At the end of the day, the authors would definitely recommend Option 1, which could, theoretically, be combined with either Option 2 or Option 3 if the goal is to ensure victims receive compensation. In particular, if a combination of Option 1 and Option 2 turned out not to be feasible, the recommendation would be to opt for a combination of Option 1 and Option 3 instead. Option 3 is also likely both to enhance public trust in the mass roll-out of AI and to put an end to discussions about operator liability at national level, which are detrimental to the technology and might result in market fragmentation. Also, we must bear in mind that, even if there are strong claims under the PLD, producers strictly liable under the PLD must be able to take recourse against (professional) operators in cases where the harm was really caused by a factor from beyond the sphere of the producer, so AI liability may be important also for producers at the redress stage.

7. Key recommendations for action at EU level

The evaluation of different options leads to the following key recommendations for action at EU level:

Key recommendation I: Introduce a new semi-horizontal and risk-based regime on software safety (accompanied by further steps to modernise the safety-related acquis)

The European legislator should introduce a new regime on software safety. This regime would be semi-horizontal in nature as it would apply to all software (whether embedded, accessory or standalone) in a very broad and technologically neutral sense, including, e.g., SaaS. It would overcome shortcomings in the existing safety legislation, which either does not cover software, in particular not standalone software, or is poorly equipped to deal with software. The European legislator might thus consider introducing a new Software Safety Directive (SSD), which would have to be accompanied by selected further steps towards modernisation of the safety-related acquis.

The relationship of the SSD with existing and future sectoral legislation would be one of complementarity. The SSD would deal with cross-cutting issues, such as the delineation between products with software elements, add-on software for other products, and standalone software that otherwise interacts with other products, and the division of responsibilities between the different producers in either case. It would deal with privacy by design, cybersecurity (until/unless addressed by other acts developed under the Cybersecurity Act), post-market surveillance duties, issues arising in the context of updates, and similar questions. The sectoral instruments would continue to deal with specific aspects that are characteristic of the relevant type of product, such as machinery or toys.

While only a few provisions would apply to software in general, the SSD would have a specific part for software to be run on connected hardware (addressing issues such as protection of networks, fraud prevention, or privacy by design and by default). The SSD would also contain specific parts for other special types of software (such as software run on DLT, addressing governance issues required, e.g., for the prevention of illegal activities and for ensuring enforcement of legal rights and obligations). In designing the SSD, it would be necessary to ensure that the measures taken are proportionate and that, in particular, there is no unnecessary red tape for software development and innovation. Specific requirements or procedures should be restricted to software beyond a certain risk level and/or producers beyond a particular size, and appropriate exceptions must be formulated, notably for open- and community-based software development for the public good.

Key recommendation II: Revise the Product Liability Directive (PLD)

The European legislator should continue its efforts to revise the PLD. A whole series of changes would be required for the PLD to be fit for the challenges posed by software. First of all, the scope of the PLD should explicitly be extended to software within the broad, technologically neutral meaning proposed in this Study. For software marketed to customers within the Union, there would have to be an authorised representative that would be jointly and severally liable together with the producer, but appropriate restrictions would have to be made concerning this requirement in order not to jeopardise access to software developed outside the Union. There would have to be liability for defects persisting because of a failure to comply with post-market surveillance duties and duties to take ‘appropriate action’, in particular by way of software updates, as well as liability for defects that have their origin in an update. Clarification of the respective responsibilities of the various actors involved where software interacts with hardware or other software would be helpful. The development risk defence, which is problematic as such, is particularly ill-suited for software and should not apply. The types of damage eligible for compensation under the PLD should mirror the broader range of risks covered by the proposed SSD. Other links between safety requirements and liability should also be intensified.

In addition, the PLD should be adapted to the challenges posed by AI. For AI-specific harm caused by AI-driven products, no defect of the AI should have to be established by the victim and it should be sufficient for the victim to prove that the harm was caused by an incident that might have something to do with the AI (e.g. the cleaning robot making a sudden move in the direction of the victim) as contrasted with other incidents (e.g. the victim stumbling over the powered-off cleaning robot).


Key recommendation III: Introduce a new regulatory framework for AI

Following the approach indicated in the White Paper on AI, the European legislator should introduce a new regulatory framework for AI in the form of a Regulation. This Regulation should also address the social risks of AI. It would have to contain general principles or essential requirements that address goals rather than means (such as ‘explainability’ of AI and minimum degrees of human control). Apart from this, it is recommended that, for a limited number of ‘high-risk applications’, a regulatory approach be taken that focuses on concrete requirements (such as particular information duties), rights (such as the right to request human intervention) and procedures (such as impact assessment). In addition, a range of ‘unfair algorithmic practices’, such as discrimination, exploitation of vulnerabilities, manipulation, total surveillance, or oppression, should be prohibited and ‘blacklisted’ for all AI applications, whether high-risk or not. Engagement in blacklisted practices or violation of requirements, procedures, etc. should result in liability for any harm thereby caused. There could also be a range of whitelisted data practices in the context of AI development and deployment, i.e. practices for which a specific statutory justification within the meaning of the GDPR is created.

Key recommendation IV: Introduce a new instrument on AI liability

In addition to the measures mentioned so far, and definitely if the PLD is not fully adapted to the challenges posed by AI, the European legislator should consider the introduction of a new Directive on AI Liability. This would also serve to foster public trust in the roll-out of AI and pre-empt fragmentation of national solutions, which could be an obstacle to innovation. The Directive could provide for strict liability for physical risks, such as death, personal injury or property damage caused by certain high-risk AI applications (such as cleaning robots in public spaces), and also clarify liability for other applications. Strict liability should preferably lie with the backend operator. A frontend operator who is a consumer should definitely not be subject to strict liability, nor to any other regime of enhanced liability. For activities for which there already exists strict liability and mandatory liability insurance under many legal systems (such as motor vehicles), a harmonised regime for both AI-driven and conventional devices would be ideal. The Directive would equally provide for vicarious liability for risks of any kind and any AI to the extent that AI has been deployed in lieu of a human auxiliary and the deployer would have been liable for the malperformance of that human auxiliary under national law.


Key recommendation V: Continue the digital fitness check of the whole acquis

Apart from the recommendations submitted so far, it is likewise recommended to continue efforts to make the acquis as a whole fit to meet the challenges posed by software, connected devices, and AI. This could concern, for instance, the law on unfair commercial practices (e.g. economically harmful recommendations to consumers), anti-discrimination law (e.g. discrimination by algorithms that is not based on a recognised discrimination criterion), competition law (cf. the issue of collusion by algorithms), or media law (cf. manipulation by social bots), as well as new provisions to be expected under the Digital Services Act.


Annex

Expert Group on Liability and New Technologies
New Technologies Formation

Liability for Artificial Intelligence and Other Emerging Digital Technologies

Table of Contents
Executive summary 323
Key Findings 324
A. Introduction 329
I. Context 329
II. Background 330
B. Liability for emerging digital technologies under existing laws in Europe 332
I. Overview of existing liability regimes 332
II. Some examples of the application of existing liability regimes to emerging digital technologies 334
III. Specific challenges to existing tort law regimes posed by emerging digital technologies 337
1. Damage 337
2. Causation 338
3. Wrongfulness and fault 342
4. Vicarious liability 344
5. Strict liability 346
6. Product liability 348
7. Contributory conduct 350
8. Prescription 351
9. Procedural challenges 351
10. Insurance 352
C. Perspectives on liability for emerging digital technologies 353
1. Challenges of emerging digital technologies for liability law ([1]–[2]) 354
2. Impact of these challenges and need for action ([3]–[4]) 356
3. Bases of liability ([5]–[7]) 359
4. Legal personality ([8]) 360
5. Operator’s strict liability ([9]–[12]) 362
6. Producer’s strict liability ([13]–[15]) 366
7. Fault liability and duties of care ([16]–[17]) 368
8. Vicarious liability for autonomous systems ([18]–[19]) 370
9. Logging by design ([20]–[23]) 371
10. Safety rules ([24]) 373
11. Burden of proving causation ([25]–[26]) 374
12. Burden of proving fault ([27]) 378
13. Causes within the victim’s own sphere ([28]) 381
14. Commercial and technological units ([29]–[30]) 382
15. Redress between multiple tortfeasors ([31]) 384
16. Damage to data ([32]) 386
17. Insurance ([33]) 388
18. Compensation funds ([34]) 390
Annex: The New Technologies Formation of the Expert Group on Liability for New Technologies 391



Executive summary

Artificial intelligence and other emerging digital technologies, such as the Internet of Things or distributed ledger technologies, have the potential to transform our societies and economies for the better. However, their rollout must come with sufficient safeguards to minimise the risk of harm these technologies may cause, such as bodily injury or other harm. In the EU, product safety regulations ensure this is the case. However, such regulations cannot completely exclude the possibility of damage resulting from the operation of these technologies. If this happens, victims will seek compensation. They typically do so on the basis of liability regimes under private law, in particular tort law, possibly in combination with insurance. Only the strict liability of producers for defective products, which constitutes a small part of this kind of liability regimes, is harmonised at EU level by the Product Liability Directive, while all other regimes – apart from some exceptions in specific sectors or under special legislation – are regulated by the Member States themselves.

In its assessment of existing liability regimes in the wake of emerging digital technologies, the New Technologies Formation of the Expert Group has concluded that the liability regimes in force in the Member States ensure at least basic protection of victims whose damage is caused by the operation of such new technologies. However, the specific characteristics of these technologies and their applications – including complexity, modification through updates or self-learning during operation, limited predictability, and vulnerability to cybersecurity threats – may make it more difficult to offer these victims a claim for compensation in all cases where this seems justified. It may also be the case that the allocation of liability is unfair or inefficient. To rectify this, certain adjustments need to be made to EU and national liability regimes.

Below are listed the most important findings of this report on how liability regimes should be designed – and, where necessary, changed – in order to rise to the challenges emerging digital technologies bring with them.
– A person operating a permissible technology that nevertheless carries an increased risk of harm to others, for example AI-driven robots in public spaces, should be subject to strict liability for damage resulting from its operation.
– In situations where a service provider ensuring the necessary technical framework has a higher degree of control than the owner or user of an actual product or service equipped with AI, this should be taken into account in determining who primarily operates the technology.
– A person using a technology that does not pose an increased risk of harm to others should still be required to abide by duties to properly select, operate, monitor and maintain the technology in use and – failing that – should be liable for breach of such duties if at fault.
– A person using a technology which has a certain degree of autonomy should not be less accountable for ensuing harm than if said harm had been caused by a human auxiliary.
– Manufacturers of products or digital content incorporating emerging digital technology should be liable for damage caused by defects in their products, even if the defect was caused by changes made to the product under the producer’s control after it had been placed on the market.
– For situations exposing third parties to an increased risk of harm, compulsory liability insurance could give victims better access to compensation and protect potential tortfeasors against the risk of liability.
– Where a particular technology increases the difficulties of proving the existence of an element of liability beyond what can be reasonably expected, victims should be entitled to facilitation of proof.
– Emerging digital technologies should come with logging features, where appropriate in the circumstances, and failure to log, or to provide reasonable access to logged data, should result in a reversal of the burden of proof in order not to be to the detriment of the victim.
– The destruction of the victim’s data should be regarded as damage, compensable under specific conditions.
– It is not necessary to give devices or autonomous systems a legal personality, as the harm these may cause can and should be attributable to existing persons or bodies.

Key Findings

[1] Digitalisation brings fundamental changes to our environments, some of which have an impact on liability law. This affects, in particular, the (a) complexity, (b) opacity, (c) openness, (d) autonomy, (e) predictability, (f) data-drivenness, and (g) vulnerability of emerging digital technologies.
[2] Each of these changes may be gradual in nature, but the dimension of gradual change, the range and frequency of situations affected, and the combined effect, results in disruption.


[3] While existing rules on liability offer solutions with regard to the risks created by emerging digital technologies, the outcomes may not always seem appropriate, given the failure to achieve:
(a) a fair and efficient allocation of loss, in particular because it could not be attributed to those:
– whose objectionable behaviour caused the damage; or
– who benefitted from the activity that caused the damage; or
– who were in control of the risk that materialised; or
– who were cheapest cost avoiders or cheapest takers of insurance.
(b) a coherent and appropriate response of the legal system to threats to the interests of individuals, in particular because victims of harm caused by the operation of emerging digital technologies receive less or no compensation compared to victims in a functionally equivalent situation involving human conduct and conventional technology;
(c) effective access to justice, in particular because litigation for victims becomes unduly burdensome or expensive.
[4] It is therefore necessary to consider adaptations and amendments to existing liability regimes, bearing in mind that, given the diversity of emerging digital technologies and the correspondingly diverse range of risks these may pose, it is impossible to come up with a single solution suitable for the entire spectrum of risks.
[5] Comparable risks should be addressed by similar liability regimes; existing differences among these should ideally be eliminated. This should also determine which losses are recoverable to what extent.
[6] Fault liability (whether or not fault is presumed), as well as strict liability for risks and for defective products, should continue to coexist. To the extent these overlap, thereby offering the victim more than one basis to seek compensation against more than one person, the rules on multiple tortfeasors ([31]) govern.
[7] In some digital ecosystems, contractual liability or other compensation regimes will apply alongside or instead of tortious liability. This must be taken into account when determining to what extent the latter needs to be amended.
[8] For the purposes of liability, it is not necessary to give autonomous systems a legal personality.
[9] Strict liability is an appropriate response to the risks posed by emerging digital technologies, if, for example, they are operated in non-private environments and may typically cause significant harm.


[10] Strict liability should lie with the person who is in control of the risk connected with the operation of emerging digital technologies and who benefits from their operation (operator).
[11] If there are two or more operators, in particular (a) the person primarily deciding on and benefitting from the use of the relevant technology (frontend operator) and (b) the person continuously defining the features of the relevant technology and providing essential and ongoing backend support (backend operator), strict liability should lie with the one who has more control over the risks of the operation.
[12] Existing defences and statutory exceptions from strict liability may have to be reconsidered in the light of emerging digital technologies, in particular if these defences and exceptions are tailored primarily to traditional notions of control by humans.
[13] Strict liability of the producer should play a key role in indemnifying damage caused by defective products and their components, irrespective of whether they take a tangible or a digital form.
[14] The producer should be strictly liable for defects in emerging digital technologies even if said defects appear after the product was put into circulation, as long as the producer was still in control of updates to, or upgrades on, the technology. A development risk defence should not apply.
[15] If it is proven that an emerging digital technology has caused harm, the burden of proving defect should be reversed if there are disproportionate difficulties or costs pertaining to establishing the relevant level of safety or proving that this level of safety has not been met. This is without prejudice to the reversal of the burden of proof referred to in [22] and [24].
[16] Operators of emerging digital technologies should have to comply with an adapted range of duties of care, including with regard to (a) choosing the right system for the right task and skills; (b) monitoring the system; and (c) maintaining the system.
[17] Producers, whether or not they incidentally also act as operators within the meaning of [10], should have to: (a) design, describe and market products in a way effectively enabling operators to comply with the duties under [16]; and (b) adequately monitor the product after putting it into circulation.
[18] If harm is caused by autonomous technology used in a way functionally equivalent to the employment of human auxiliaries, the operator’s liability for making use of the technology should correspond to the otherwise existing vicarious liability regime of a principal for such auxiliaries.
[19] The benchmark for assessing performance by autonomous technology in the context of vicarious liability is primarily the one accepted for human auxiliaries. However, once autonomous technology outperforms human auxiliaries, this will be determined by the performance of comparable available technology which the operator could be expected to use, taking into account the operator’s duties of care ([16]).
[20] There should be a duty on producers to equip technology with means of recording information about the operation of the technology (logging by design), if such information is typically essential for establishing whether a risk of the technology materialised, and if logging is appropriate and proportionate, taking into account, in particular, the technical feasibility and the costs of logging, the availability of alternative means of gathering such information, the type and magnitude of the risks posed by the technology, and any adverse implications logging may have on the rights of others.
[21] Logging must be done in accordance with otherwise applicable law, in particular data protection law and the rules concerning the protection of trade secrets.
[22] The absence of logged information or failure to give the victim reasonable access to the information should trigger a rebuttable presumption that the condition of liability to be proven by the missing information is fulfilled.
[23] If and to the extent that, as a result of the presumption under [22], the operator were obliged to compensate the damage, the operator should have a recourse claim against the producer who failed to equip the technology with logging facilities.
[24] Where the damage is of a kind that safety rules were meant to avoid, failure to comply with such safety rules, including rules on cybersecurity, should lead to a reversal of the burden of proving (a) causation, and/or (b) fault, and/or (c) the existence of a defect.
[25] As a general rule, the victim should continue to be required to prove what caused her harm.
[26] Without prejudice to the reversal of the burden of proof proposed in [22] and [24](a), the burden of proving causation may be alleviated in light of the challenges of emerging digital technologies if a balancing of the following factors warrants doing so:
(a) the likelihood that the technology at least contributed to the harm;

Expert Group on Liability and New Technologies

328

Key Findings

(b) the likelihood that the harm was caused either by the technology or by some other cause within the same sphere; (c) the risk of a known defect within the technology, even though its actual causal impact is not self-evident; (d) the degree of ex-post traceability and intelligibility of processes within the technology that may have contributed to the cause (informational asymmetry); (e) the degree of ex-post accessibility and comprehensibility of data collected and generated by the technology; (f) the kind and degree of harm potentially and actually caused. [27] If it is proven that an emerging digital technology caused harm, and liability therefor is conditional upon a person’s intent or negligence, the burden of proving fault should be reversed if disproportionate difficulties and costs of establishing the relevant standard of care and of proving their violation justify it. This is without prejudice to the reversal of the burden of proof proposed in [22] and [24](b). [28] If a cause of harm is attributable to the victim, the reasons for holding another person liable should apply correspondingly when determining if and to what extent the victim’s claim for compensation may be reduced. [29] Where two or more persons cooperate on a contractual or similar basis in the provision of different elements of a commercial and technological unit, and where the victim can demonstrate that at least one element has caused the damage in a way triggering liability but not which element, all potential tortfeasors should be jointly and severally liable vis-à-vis the victim. [30] In determining what counts as a commercial and technological unit within the meaning of [29] regard is to be had to (a) any joint or coordinated marketing of the different elements; (b) the degree of their technical interdependency and interoperation; and (c) the degree of specificity or exclusivity of their combination. [31] Where more than one person is liable for the same damage, liability to the victim is usually solidary (joint). Redress claims between tortfeasors should only be for identified shares (several), unless some of them form a commercial and/or technological unit ([29]-[30]), in which case the members of this unit should be jointly and severally liable for their cumulative share also to the tortfeasor seeking redress. [32] Damage caused to data may lead to liability where (a) liability arises from contract; or (b) liability arises from interference with a property right in the medium on which the data was stored or with another interest protected as a property right under the applicable law; or Final Report of the New Technologies Formation

I. Context

329

(c) the damage was caused by conduct infringing criminal law or other legally binding rules whose purpose is to avoid such damage; or (d) there was an intention to cause harm. [33] The more frequent or severe potential harm resulting from emerging digital technology, and the less likely the operator is able to indemnify victims individually, the more suitable mandatory liability insurance for such risks may be. [34] Compensation funds may be used to protect tort victims who are entitled to compensation according to the applicable liability rules, but whose claims cannot be satisfied.

A. Introduction

I. Context

Artificial intelligence (AI) and other emerging digital technologies,1 such as the Internet of Things and of Services (IoT/IoS), or distributed ledger technologies (DLT), have extraordinary potential to transform products, services and activities, procedures and practices, in a multitude of economic sectors and in relation to many aspects of society. Although some of these technologies2 are not new, their increasing application to a growing variety of purposes, and new combinations of a range of different emerging digital technologies open up unprecedented possibilities. All this comes with the promise of making the world a safer, fairer, more productive, more convenient place, of helping to fight illness, poverty, crime, discrimination and other forms of injustice, and of connecting people worldwide. Although many of these promises are expected to come true, new or enhanced potential brings new risks with it, or increases existing ones.3

1 The term ‘emerging digital technologies’ is used with the same meaning as in the Commission Staff Working Document ‘Liability for emerging digital technologies’ (SWD(2018) 137 final). 2 Strictly speaking, it is not so much the technology itself, but a particular product or service making use of the technology that poses a risk. However, for brevity and simplicity, this report will use the term ‘technology’. 3 This is also acknowledged by key players in this area of technology; for example Microsoft in its 2018 US Securities and Exchange Commission filing stated that: ‘As with many disruptive innovations, AI presents risks and challenges that could affect its adoption, and therefore our business. AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability,

Throughout history, legal rules, concepts and principles have risen to the challenges posed by scientific, technical and, more recently, technological progress. In the last few decades, the adaptable principles of technological neutrality and functional equivalence have catered for the impact of digital technologies. These principles have served as the basis for the international response to the advent and first stages of development of the digital economy, and have largely guided the legislative and regulatory initiatives on electronic commerce (and information society services) adopted to date. The adequacy and completeness of liability regimes in the face of technological challenges are crucially important for society. If the system is inadequate or flawed or has shortcomings in dealing with damage caused by emerging digital technologies, victims may end up totally or partially uncompensated, even though an overall equitable analysis may make the case for indemnifying them. The social impact of a potential inadequacy of existing legal regimes in addressing new risks created by emerging digital technologies might compromise the expected benefits. Certain factors, such as the ever-increasing presence of emerging digital technologies in all aspects of social life, and the multiplying effect of automation, can also exacerbate the damage these technologies cause. Damage can easily become viral and rapidly propagate in a densely interconnected society.

II. Background

On 16 February 2017 the European Parliament adopted a Resolution on Civil Law Rules on Robotics with recommendations to the Commission.4 It proposed a whole range of legislative and non-legislative initiatives in the field of robotics and AI. In particular, it asked the Commission to submit a proposal for a legislative instrument providing civil law rules on the liability of robots and AI. In February 2018, the European Parliamentary Research Service (EPRS) published a study on ‘A common EU approach to liability rules and insurance for connected and autonomous vehicles’5 as a European added value assessment accompanying the Resolution on Civil Law Rules.

and brand or reputational harm. Some AI scenarios present ethical issues. If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.’ . 4 P8_TA(2017)0051. 5 .

The 2018 Commission Work Programme announced that the Commission would be seeking to make the most of AI, since it will increasingly play a role in our economies and societies.6 On 14 December 2017, in a Joint Declaration,7 the Presidents of the Commission, Parliament and Council agreed to ensure ‘a high level of data protection, digital rights and ethical standards while capturing the benefits and avoiding the risks of developments in artificial intelligence and robotics’. On 25 April 2018, the Commission published a Staff Working Document on ‘Liability for emerging digital technologies’8 accompanying a Communication from the Commission to the other institutions on the same day, on ‘Artificial Intelligence for Europe’.9 This Communication and the Sibiu Communication of May 201910 stress that ‘a robust regulatory framework should proactively address the ethical and legal questions surrounding AI’. In its 2018 AI Communication the Commission also announced the adoption of a report assessing the implications of emerging digital technologies on existing safety and liability frameworks by mid-2019. In its 2019 Work Programme, it confirmed it would ‘continue work on the emerging challenge of Artificial Intelligence by enabling coordinated action across the European Union’.11 In March 2018, the Commission set up an Expert Group on Liability and New Technologies,12 operating in two different formations: the Product Liability Directive formation and the New Technologies formation. In the call for applications,13 the New Technologies formation (NTF) was asked to assess ‘whether and to what extent existing liability schemes are adapted to the emerging market realities following the development of the new technologies such as Artificial Intelligence, advanced robotics, the IoT and cybersecurity issues’. The experts were asked to examine whether the current liability regimes are still ‘adequate to facilitate the uptake of … new technologies by fostering investment stability and users’ trust’. If there are shortcomings, the NTF should make recommendations for amendments, without being limited to existing national and EU legal instruments. However, recommendations should be limited to mat-

6 . 7 . 8 SWD(2018) 137 final. 9 COM(2018) 237 final. 10 . 11 . 12 . 13 .

ters of extracontractual liability, leaving aside in particular corresponding (and complementary) rules on safety and other technical standards.14 The NTF15 first convened in June 2018 and held nine further meetings up to May 2019. After analysing the relevant national laws and looking at specific use cases,16 it compared various aspects of existing liability regimes. This report presents the NTF’s findings.

B. Liability for emerging digital technologies under existing laws in Europe

I. Overview of existing liability regimes

The law of tort of EU Member States is largely non-harmonised, with the exception of product liability law under Directive 85/374/EEC,17 some aspects of liability for infringing data protection law (Article 82 of the General Data Protection Regulation (GDPR)18), and liability for infringing competition law (Directive 2014/104/EU19). There is also a well-established regime governing liability insurance with regard to damage caused by the use of motor vehicles (Directive 2009/103/EC20), although

14 See the overview in the Commission Staff Working Document (fn 8) 4 ff. 15 See the list of members in the Annex. 16 The following use cases were examined, with the further participation of technical experts in the field in question: autonomous cars, smart home, blockchain and other distributed ledger technologies, autonomous healthcare applications, algorithmic decision making in the financial and other sectors, drones. 17 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (OJ L 210, 7.8.1985, p. 29), as amended by Directive 1999/34/EC of the European Parliament and of the Council of 10 May 1999, OJ L 141, 4.6.1999, p. 20. See also infra B.III.6. 18 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ L 119, 4.5.2016, p. 1. 19 Directive 2014/104/EU of the European Parliament and of the Council of 26 November 2014 on certain rules governing actions for damages under national law for infringements of the competition law provisions of the Member States and of the European Union, OJ L 349, 5.12.2014, p. 1. 20 Directive 2009/103/EC of the European Parliament and of the Council of 16 September 2009 relating to insurance against civil liability in respect of the use of motor vehicles, and the enforcement of the obligation to insure against such liability, OJ L 263, 7.10.2009, p. 11–31. The directive is currently under review, see Proposal COM(2018) 336 final.

without touching upon liability for accidents itself. EU law also provides for a conflict of tort laws framework, in the form of the Rome II Regulation.21 On a national level, it can generally be observed that the laws of the Member States do not (yet) contain liability rules specifically applicable to damage resulting from the use of emerging digital technologies such as AI. By way of exception, those jurisdictions that already allow the experimental or regular use of highly or fully automated vehicles usually also provide for coverage of any damage caused, be it only by way of insurance22 or by reference to the general rules.23 Apart from this legislation, the harmful effects of the operation of emerging digital technologies can be compensated under existing (‘traditional’) laws on damages in contract and in tort in each Member State. This applies to all fields of application of AI and other emerging digital technologies the NTF of the Expert Group has analysed. In general, these domestic tort laws include a rule (or rules) introducing fault-based liability with a relatively broad scope of application, accompanied by several more specific rules which either modify the premises of fault-based liability (especially the distribution of the burden of proving fault) or establish liability that is independent of fault (usually called strict liability or risk-based liability), which also takes many forms that vary with regard to the scope of the rule, the conditions of liability and the burden of proof.24 Most liability regimes contain the notion of liability for others (often called vicarious liability).25

21 Regulation (EC) No 864/2007 of the European Parliament and of the Council of 11 July 2007 on the law applicable to non-contractual obligations (Rome II), OJ L 199, 31.7.2007, p. 40. 22 For example, Article 19 of the Italian Decree of 28 February 2018 on the testing of connected and automated vehicles on public roads (Modalità attuative e strumenti operativi della sperimentazione su strada delle soluzioni di Smart Road e di guida connessa e automatica, 18A02619, GU n° 90 of 18 April 2018) provides that the person seeking approval for testing automated vehicles on public roads must give proof of sufficient liability insurance cover. Spain’s Directorate-General for Traffic (Dirección General de Tráfico) circular of 13 November 2015 (Instrucción 15/V-113) also authorises the testing of automated cars and requires liability insurance to cover compulsory insurance limits for motor vehicles. 23 For example, § 7 of the German Road Traffic Act (Straßenverkehrsgesetz) provides for strict liability of the keeper of the vehicle. This rule was deliberately left unchanged when the Road Traffic Act was adapted to the emergence of automated vehicles. Similarly, French Decree n° 2018-211 of 28 March 2018 on experimentation with automated vehicles on public roads relies on the Loi Badinter of 5 July 1985 (n°85-677). The most conspicuous example is the recent UK Automated and Electric Vehicles Act 2018 (c 18), Section 2 of which provides that ‘the insurer is liable’ for damage incurred by the insured or any other person in an accident caused by an automated vehicle. If the vehicle is uninsured, it is the owner of the vehicle who is liable instead. 24 See also infra B.III.5. 25 See also infra B.III.4.

However, these regimes may not always lead to satisfactory and adequate results.26 Furthermore, given the significant differences between the tort laws of all Member States, the outcome of cases will often be different depending on which jurisdiction applies. As experience with the Product Liability Directive has shown, efforts to overcome such differences by harmonising only certain aspects of liability law may not always lead to the desired degree of uniformity of outcomes.

II. Some examples of the application of existing liability regimes to emerging digital technologies

In most jurisdictions, damage caused by motor vehicles is subject to a special liability regime. As mentioned above, there is an EU-wide insurance scheme in place, in the form of the (recodified) Motor Insurance Directive (MID),27 but the MID only harmonises liability insurance cover, not civil liability itself. Member States therefore continue to regulate tortious liability for accidents involving motor vehicles themselves, limited in their discretion only by the principle of effectiveness of the MID.28 These rules usually impose liability on the owner/keeper of a vehicle and/or on the driver, although there are systems which introduce direct claims against the insurer regardless of any other person’s liability. The appropriateness of existing traffic liability regimes for autonomous vehicles (AV) may be disputed, especially with regard to systems which rely on fault-based liability in general (Malta for example) or in limited circumstances, such as in the case of a collision (Poland for example), or for certain types of damage (Spain for example29), or which make the application of the traffic liability regime conditional on the involvement of a driver (Italy). Liability gaps may emerge in the case of a single-vehicle accident in as much as, under existing traffic liability rules, the injured owner/keeper is excluded from compensation. Some legal systems even exclude passengers from protection under strict traffic liability, either in general (Greece30 or the Netherlands for example31) or only in specific circumstances (Po-

26 See infra B.III. 27 Supra fn 20. 28 CJEU Delgado Mendes, C‑503/16, EU:C:2017:681, paragraph 48; CJEU Marques Almeida, C‑300/10, EU:C:2012:656, paragraphs 31, 32, and the case law cited. 29 Under Spanish law, liability for damage to property caused by motor vehicles remains subject to a fault-based regime. 30 Article 12 of Law 3950/1911 on Liability for Automobiles. 31 Article 185, paragraph 1 of the Road Traffic Act (Wegenverkeerswet 1994) in the Netherlands. The problem there has recently been solved in a pragmatic way as, per 1 April 2017, the Dutch Association

land32 or Austria for example33). This would be hard to accept for accidents involving AVs. Given the complex character of the autonomous driving environment, exclusion of strict liability in the case of a third-party intervention may also prove problematic, particularly in the context of cybersecurity risks, such as where a connected AV has been hacked, or where an accident has been caused because the ICT infrastructure sent the wrong signals. Where damage was caused by a defective vehicle, product liability or producer’s liability in tort may apply, but usually becomes relevant only at the redress stage.34 For most technological ecosystems (by which we mean systems with interacting devices or programmes), however, no specific liability regimes exist. This means that product liability, general tort law rules (fault-based liability, tort of negligence, breach of statutory duty), and possibly contractual liability, occupy centre stage. The more complex these ecosystems become with emerging digital technologies, the more difficult it becomes to apply liability frameworks. An example would be the use case of smart home systems and networks. Where smart home devices were already defective at the point at which they were put into circulation, product liability law applies. In most jurisdictions the producer may also be liable under general tort law, which could go beyond product liability by making the producer liable for, for example, defective ancillary digital services, and for updates as well as for failures in product surveillance or monitoring. In the case of damage caused by the seller of a product, an installing/configuring service provider, internet service provider, energy supplier, cloud operator and others involved in the smart home scenario, both general tort law, and possibly contractual liability, may come into play. Some countries (such as Spain or Greece) can use their special regimes of liability for flawed services, based on a presumed fault on the service provider’s part. Other legal systems operate solely or mainly on the basis of their general provisions on fault liability (general clauses) or relatively open tort law concepts (tort of negligence, breach of statu-

of Insurers declared that motor third-party liability (MTPL) insurers shall compensate passengers of the insured motor vehicle regardless of liability (). 32 According to Article 436 § 2 of the Polish Civil Code, strict liability does not apply to passengers transported without any remuneration or other benefit (‘out of politeness’). 33 According to § 3 of the Austrian Railway and Motor Vehicle Liability Act (EKHG), people transported by the vehicle without the keeper’s consent are not covered. 34 Typically, at least in systems with strict liability for motor vehicles, the fact that these were defective does not preclude liability of the vehicles’ keeper. It will therefore often be the motor vehicle liability insurer who will pursue the product liability claim.

tory duty). These provisions or legal concepts usually require proof of the defendant’s failure to observe the required standard of care.35 When the user in the smart home scenario is contractually tied to the actor (seller, installing service providers, internet service provider, energy supplier, cloud operator), the latter may be liable in contract to the user for damage caused by non-performance. Some legal systems (Germany, Austria, or Greece, and to some extent Denmark, for example) extend contractual liability under certain conditions, allowing a third party to invoke a contract they were not a party to themselves. This applies to situations where the contract is deemed to establish duties to also protect such third parties, allowing the latter to sue for compensation in cases of breach.36 The protected third party must be foreseeably close to the contracting partner, though, confronted in a similar way with the danger stemming from non-performance (such as family members or guests). Any kind of contractual liability is, however, usually subject to contractual (and sometimes also statutory) limitations. Similarly complex situations may result in cases where damage was caused by autonomous healthcare applications. Such damage would usually be subject to fault-based liability, either in contract or in tort. Many jurisdictions allow the victim to bring concurrent claims based on contract and on tort alternatively. In some jurisdictions, however, this is not possible, in which case it becomes necessary to choose the one or the other. When damage is triggered by a defect present before putting these applications into circulation, product liability may apply, if the application or the device is considered a product for the purpose of product liability law. Further complexities arise from the interplay between these regimes and social insurance and/or healthcare systems. Damage in connection with the use of algorithms or AI in the financial market is currently subject to reparation under traditional fault-based regimes. Some jurisdictions, however, allow the claimant to invoke administrative law (financial regulations) to establish the benchmark against which the perpetrator’s conduct is to be assessed. On the contractual level, information asymmetry resulting from the use of AI may justify the application of a (statutory or case law) pre-contractual liability regime (culpa in contrahendo and similar concepts). It seems more likely, however, that the reaction of the legal system to potential irregularities in

35 The standard of care referred to in this document is the model of careful and prudent conduct required from the perpetrator of the damage. It should not be confused with standards of safety or quality of products or services established by law or by certain bodies. 36 This is typically used as a workaround for deficiencies of the tort law regime, whereas other legal systems come to similar solutions via their (at least in some respects such as vicarious liability) more generous law of torts.

contracting with the use of algorithms will rely on contract law tools for assessing and challenging the validity of contracts (vitiated consent, lack of fairness, etc.).37 The use of blockchain, in particular cryptocurrencies, is not subject to any particular liability rules, and new legislation already enacted or under discussion in some Member States, related among other things to initial coin offerings, certifications of platforms and cybersecurity, does not extend to compensation for damage. In as much as this legislation provides for the duties and responsibilities of the participants in a blockchain or of public authorities, it may be relevant for establishing the standard of care for the purpose of applying fault-based liability rules.

III. Specific challenges to existing tort law regimes posed by emerging digital technologies

It is possible to apply existing liability regimes to emerging digital technologies, but in light of a number of challenges and due to the limitations of existing regimes, doing so may leave victims under- or entirely uncompensated. The adequacy of existing liability rules may therefore be questionable, considering in particular that these rules were formulated decades or even centuries ago, based on even older concepts and incorporating a primarily anthropocentric and monocausal model of inflicting harm.

1. Damage

The main purpose of tort law is to indemnify victims for losses which, on the basis of an assessment of all the interests involved, they should not have to bear themselves entirely. However, only compensable harm will be indemnified, meaning damage to a limited range of interests that a legal system deems worthy of protection.38

37 Cf, e.g., Spanish case law related to swap agreements according to which the infringement of financial regulations and of the duty to inform in the pre-contractual stage were treated as grounds for vitiated consent. 38 See Article VI-2:101 Draft Common Frame of Reference (DCFR), in particular paragraph 1 lit c; Article 2:101 Principles of European Tort Law (PETL). This range is defined differently at present, with some systems (such as the Romanic systems) being more generous than others, and some of those others setting out only a limited list of protected interests by statute.

While there is unanimous accord that injuries to a person or to physical property can trigger tortious liability,39 this is not universally accepted for pure economic loss.40 Damage caused by self-learning algorithms on financial markets, for example, will therefore often remain uncompensated, because some legal systems do not provide tort law protection of such interests at all or only if additional requirements are fulfilled, such as a contractual relationship between the parties or the violation of some specific rule of conduct. Nor is it universally accepted throughout Europe that damage to or the destruction of data is a property loss, since in some legal systems the notion of property is limited to corporeal objects and excludes intangibles.41 Other differences exist when it comes to the recognition of personality rights, which may also be adversely affected by emerging digital technologies, if certain data is released which infringes on the right to privacy for example.42 However, generally speaking, AI and other emerging digital technologies do not call into question the existing range of compensable harm per se. Rather, some of the already recognised categories of losses may be more relevant in future cases than in traditional tort scenarios. Damage as a prerequisite for liability is also a flexible concept – the interest at stake may be more or less significant, and the extent of damage to such an interest may also vary. This may in turn have an impact on the overall assessment of whether or not a tort claim seems justified in an individual case.43

2. Causation

One of the most essential requirements for establishing liability is a causal link between the victim’s harm and the defendant’s sphere. As a rule, it is the victim

39 See Article 2:102 paragraphs 2 and 3 PETL. 40 See Article 2:102 paragraph 4 PETL: ‘Protection of pure economic interests … may be more limited in scope.’ See, e.g., W van Boom/H Koziol/Ch Witting (eds), Pure Economic Loss (2004); and M Bussani/V Palmer, ‘The liability regimes of Europe – their façades and interiors’, in M Bussani/V Palmer (eds), Pure Economic Loss in Europe (2003) 120 ff. 41 Compare § 90 German BGB (according to which a ‘thing’ by definition must be corporeal) with § 285 Austrian ABGB (which does not provide for such a limitation, so that ‘things’ in Austria may also be intangible). 42 But see Article 82 of the GDPR for a harmonised claim for compensation in cases of data breach. 43 See Article 2:102 paragraph 1 PETL: ‘The scope of protection of an interest depends on its nature; the higher its value, the precision of its definition and its obviousness, the more extensive is its protection’.  

who must prove that their damage originated from some conduct or risk attributable to the defendant. The victim then needs to produce evidence in support of this argument. However, the less evident the sequence of events was that led to the victim’s loss, the more complex the interplay of various factors that either jointly or separately contributed to the damage, and the more crucial links in the chain of events are within the defendant’s control, the more difficult it will be for the victim to succeed in establishing causation without alleviating their burden of proof. If the victim fails to persuade the court, to the required standard of proof,44 that something for which the defendant has to account triggered the harm they suffered, they will lose their case, regardless of how strong it would have been against the defendant otherwise (for example, because of evident negligence on the defendant’s part). Hard as it is to prove that some hardware defect was the reason someone was injured, for example, it becomes very difficult to establish that the cause of harm was some flawed algorithm.

Illustration 1. If a smoke detector in a smart home environment fails to trigger an alarm because of flawed wiring, this defect may be identifiable (and in this case is even visible). If, on the other hand, the smoke detector did not go off because of some firmware error, this may not be proven as easily (even though the absence of an alarm per se may be easily proven), if only because it requires a careful analysis of the firmware’s code and its suitability for the hardware components of the smoke detector.

It is even harder if the algorithm suspected of causing harm has been developed or modified by some AI system fuelled by machine learning and deep learning techniques, on the basis of multiple external data collected since the start of its operation. Even without changes to the original software design, the embedded criteria steering the collection and analysis of data and the decision-making process may not be readily explicable and often require costly analysis by experts. This may in itself be a primary practical obstacle to pursuing a claim for compensation, even if those costs should ultimately be recoverable, as long as the chances of succeeding are hard for the victim to predict upfront.

44 The standard of proof determines the degree to which a court must be persuaded of some assertion in order to hold it as true. This standard is quite different throughout Europe. Most civil law systems traditionally require that the judge be convinced to something equivalent to a certainty, or at least a high degree of probability, to find in favour of the party with the burden of proof. By contrast, common law countries require that there be a probability greater than 50 % (or a preponderance of the evidence) to satisfy the burden of proof.  

In cases of strict liability,45 proving causation may be easier for the victim, and not only in those jurisdictions where causation is presumed in such cases.46 Instead of establishing some misconduct in the sphere of the defendant, the victim only has to prove that the risk triggering strict liability materialised. Depending on how this risk was defined by the legislator, this may be easier, considering that, for example, current motor vehicle liability statutes merely require an ‘involvement’ of the car or its being ‘in operation’ when the accident happened. In addition to the initial complexity of AI systems upon release, they will most likely be subject to more or less frequent updates which are not necessarily supplied by the original producer. Identifying which part of a now flawed code was wrong from the beginning or adversely changed in the course of an update will at least require (again) significant expert input, but doing so is essential in order to determine whom to sue for compensation. The operation of AI systems often depends on data and other input collected by the system’s own sensors or added by external sources. Not only may such data be flawed in itself, but the processing of otherwise correct data may also be imperfect. The latter may be due to original defects in designing the handling of data, or the consequence of distortions of the system’s self-learning abilities due to the bulk of data collected, whose randomness may lead the AI system in question to misperceive and miscategorise subsequent input. Problems of uncertain causation are of course not new to European legal systems, even though they are posed differently depending on the applicable standard of proof.47 As long as the uncertainty exceeds that threshold, the victim will remain uncompensated, but as soon as the likelihood of the causation theory on which the victim’s case rests meets the standard of proof, they will be fully compensated (subject to the further requirements of liability). This all-or-nothing dilemma is already being addressed throughout Europe by some modifications that aid the victim in proving causation under certain circumstances. Courts may for instance be willing to accept prima facie evidence in complex scenarios, such as those emerging digital technologies give rise to, where the exact sequence of events may be difficult to prove. While the burden of

45 But see the differences between the tort laws of the Member States when it comes to introducing and applying strict liability infra B.III.5. 46 See for example Article 1063 of the Croatian Civil Obligations Act: ‘Damage caused in relation to a dangerous thing or dangerous activity shall be considered to result from that thing or activity, unless it has been proved that it did not cause the damage.’ (translated by M Baretić in E Karner/K Oliphant/BC Steininger (eds), European Tort Law: Basic Texts [2nd edition 2019] 48). 47 See fn 44.

proving causation is not shifted yet,48 it is clearly alleviated for the victim, who need not prove every single link in the chain of causation if courts accept that a given outcome is the typical effect of a certain development in that chain. Furthermore, as past medical malpractice cases have shown, courts tend to be willing to place the burden of producing evidence on the party who is or should be in control of the evidence, with failure to bring forward such evidence resulting in a presumption to the disadvantage of that party. If, for example, certain log files cannot be produced or properly read, courts may be prepared to hold this against the party that was in charge of these recordings (and/or of the technology for analysing them). In some cases, some European legislators have intervened and shifted the burden of proving causation altogether,49 thereby presuming that the victim’s harm was caused by the defendant, though leaving the defendant the possibility to rebut this.50 It remains to be seen to what extent any of these tools will be used in favour of the victim if their harm may have been caused by emerging digital technologies. It is already difficult to prove that some conduct or activity was the cause of harm, but it gets even more complex if other alternative causes come into play. This is nothing new, but it will become much more of an issue in the future, given the interconnectedness of emerging digital technologies and their increased dependency on external input and data, making it increasingly doubtful whether the damage at stake was triggered by a single original cause or by the interplay of multiple (actual or potential) causes. Current tort law regimes in Europe handle such uncertainties in the case of multiple potential sources of harm quite differently. Even if something is proven to have triggered the harm (for example, because an autonomous car collided with a tree), the real reason for it is not always equally evident. The car may have been poorly designed (be it its hardware, pre-installed software, or both), but it may also have either misread correct, or received incorrect, data, or a software update done by the original producer or by some third party may have been flawed, or the user may have failed to install an update which would have prevented the

48 Unlike in a full reversal of the burden of proof, prima facie evidence is meant to resolve uncertainties rather than bridge non liquet situations, and it can be rebutted already if the opponent can prove (again adhering to traditional standards) that there is a (mere) genuine possibility of a turn of events deviating from the one expected according to experience. 49 One such example is § 630h of the German BGB in the field of medical malpractice. 50 However, since the reason for shifting the burden of proof has often been the expectation that the victim will not succeed in establishing causation, the burden on the defendant will typically not be lighter.

collision, to give just a few examples, not to mention a combination of multiple such factors. The classic response by existing tort laws in Europe in such cases of alternative causation, if it remains unclear which one of several possible causes was the decisive influence to trigger the harm, is that either no-one is liable (since the victim’s evidence fails to reach the threshold to prove causation of one cause), or that all parties are jointly and severally liable, which is the majority view.51 The former outcome is undesirable for the victim, the latter for those merely possible tortfeasors who in fact did not cause harm, but may still be attractive targets for litigation because of their procedural availability and/or their more promising financial ability to actually pay compensation. The problem of who really caused the harm in question will therefore often not be solved in the first round of litigation initiated by the victim, but on a recourse level, if ever. More modern approaches provide for proportional liability at least in some cases, reducing the victim’s claim against each potential tortfeasor to a quota corresponding to the likelihood that each of them in fact caused the harm in question.52
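By way of illustration only (the figures used here are hypothetical and are not taken from the report): under such a proportional approach, the claim against each potential tortfeasor corresponds to the total damage multiplied by the likelihood that this tortfeasor in fact caused it. If, for example, damage of €100,000 may have been caused by a flawed software update (assessed at a 50 % likelihood), by a hardware defect (30 %) or by faulty external data (20 %), the victim could recover €50,000, €30,000 and €20,000 from the respective parties, rather than recovering everything from each of them jointly or recovering nothing at all.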

3. Wrongfulness and fault

As already mentioned in the overview above, tort laws in Europe are traditionally fault-based, providing compensation to the victim if the defendant is to blame for the former’s damage.53 Such blame is commonly linked to the deviation from some conduct expected of, but not shown by, the tortfeasor. Whether or not a legal system distinguishes between objective or subjective wrongdoing and/or divides the basis of liability for misconduct into wrongfulness and fault,54 two things remain crucial: to identify the duties of care the perpetrator should have discharged and to prove that the conduct of the perpetrator of the damage did not discharge those duties. The duties in question are determined by various factors. Sometimes they are defined beforehand by statutory language prescribing or prohibiting certain spe-

51 See, e.g., B Winiger et al (eds), Digest of European Tort Law I: Essential Cases on Natural Causation (2007) 387 ff. 52 See I Gilead/M Green/BA Koch (eds), Proportional Liability: Analytical and Comparative Perspectives (2013). 53 See also the Commission Staff Working Document (fn 8) 7. 54 On the range of existing approaches in this regard throughout Europe, see H Koziol, ‘Comparative Conclusions’, in H Koziol (ed), Basic Questions of Tort Law from a Comparative Perspective (2015) 685 (782 ff).  



cific conduct, but often they must be reconstructed after the fact by the court on the basis of social beliefs about the prudent and reasonable course of action in the circumstances.55 Emerging digital technologies make it difficult to apply fault-based liability rules, due to the lack of well established models of proper functioning of these technologies and the possibility of their developing as a result of learning without direct human control. The processes running in AI systems cannot all be measured according to duties of care designed for human conduct, or not without adjustments that would require further justification. As European legal systems tend to regulate product and safety requirements in advance more than other jurisdictions,56 it may well be the case that at least certain minimum rules will be introduced (if only, for example, logging requirements alleviating an analysis, after the fact, of what actually happened), to help define and apply the duties of care relevant for tort law should damage occur. A violation of such statutory or regulatory requirements may also trigger liability more easily for the victim, by shifting the burden of proving fault in many systems for example.57 Still, such requirements will not be present from the beginning, and it may take years for such rules to emerge, either in legislation or in the courts. Legal requirements have to be distinguished from industry standards (or practices) not yet recognised by the lawmaker. Their relevance in a tort action is necessarily weaker, even though the courts may look at such requirements as well when assessing in retrospect whether or not conduct complied with the duties of care that needed to be discharged under the circumstances. Taking a step back and shifting the focus onto a software developer who wrote the firmware for some smart gadget, for example, does not resolve the problem entirely, since – as already mentioned – the software may have been designed to adjust itself to unprecedented situations or at least to cope with novel input not matching any pre-installed data. If the operation of some technology that includes AI, for example, is legally permissible, presuming that the developer

55 On the notion of a ‘Schutzgesetz’ (protective norm) in a comparative overview, see B Winiger et al (eds), Digest of European Tort Law III: Essential Cases on Misconduct (2018) 696 ff. 56 See U Magnus, ‘Why is US Tort Law so Different?’, JETL 2010, 1 (20). 57 See for example § 2911 Czech Civil Code: ‘If a wrongdoer causes damage to the injured party by breaching a legal obligation, he shall be deemed to have caused the damage through negligence.’ (translated by J Hradek in European Tort Law: Basic Texts² [fn 46] 68). There are also tort law systems where fault is presumed in general (see Article 45(2) of the Bulgarian Law on Obligations and Contracts; § 1050 of the Estonian Law of Obligations Act; § 6:519 of the Hungarian Civil Code; § 420(3) of the Slovak Civil Code).

made use of state-of-the-art knowledge at the time the system was launched, any subsequent choices made by the AI technology independently may not necessarily be attributable to some flaw in its original design. The question therefore arises whether the choice to admit it to the market, or implement the AI system in an environment where harm was subsequently caused, in itself is a breach of the duties of care applicable to such choices. In addition to the difficulties of determining what constitutes fault in the case of damage caused by an emerging digital technology, there may also be problems with proving fault. Generally, the victim has to prove that the defendant (or someone whose conduct is attributable to them) was at fault. The victim therefore not only needs to identify which duties of care the defendant should have discharged, but also to prove to the court that these duties were breached. Proving the defendant is at fault entails providing the court with evidence that may lead it to believe what the applicable standard of care was and that it has not been met. The second part of this is to provide evidence of how the event giving rise to the damage occurred. The more complex the circumstances leading to the victim’s harm are, the harder it is to identify relevant evidence. For example, it can be difficult and costly to identify a bug in a long and complicated software code. In the case of AI, examining the process leading to a specific result (how the input data led to the output data) may be difficult, very time-consuming and expensive.

4. Vicarious liability

Existing tort laws in Europe differ substantially in their approach to holding someone (the principal) liable for the conduct of another (the auxiliary).58 Some attribute an auxiliary’s conduct to the principal without further requirements, other than that the auxiliary acted under the direction of the principal and for the benefit of the principal. Others hold the principal liable in tort law only under very exceptional circumstances, such as known dangerousness of the auxiliary or the auxiliary’s complete unsuitability for the assigned task,59 or if the defendant was

58 See the overview by H Koziol, ‘Comparative Conclusions’ (fn 54) 795 ff. 59 The latter is true in Austria for example. See § 1315 ABGB: ‘Whosoever, for the conduct of his affairs, avails himself either of an unfit person, or knowingly of a dangerous person, is liable for the harm such a person causes to another in that capacity.’ (translated by BC Steininger in European Tort Law: Basic Texts² [fn 46] 5).  

at fault in selecting or supervising the auxiliary.60 There are also jurisdictions which use both approaches.61 Jurisdictions with a neutral (and therefore broader) definition of strict liability (as liability without fault of the liable person in general) regard vicarious liability as a mere variant of this strict (or no-fault) liability. If the notion of strict liability is equated with liability for some specific risk, dangerous object or activity instead, vicarious liability is rather associated with fault liability, as liability of the principal without personal fault of their own, but for the (passed-on) ‘fault’ of their auxiliary instead, even though the auxiliary’s conduct is then not necessarily evaluated according to the benchmarks applicable to themselves, but to the benchmarks for the principal.62 Irrespective of such differences, the concept of vicarious liability is considered by some as a possible catalyst for arguing that operators of machines, computers, robots or similar technologies should also be strictly liable for their operations, based on an analogy to the basis of vicarious liability. If someone can be held liable for the wrongdoing of some human helper, why should the beneficiary of such support not be equally liable if they outsource their duties to a non-human helper instead, considering that they equally benefit from such delegation?63 The policy argument is quite convincing that using the assistance of a self-learning and autonomous machine should not be treated differently from employing a human auxiliary, if such assistance leads to harm of a third party (‘principle of

60 See, e.g., the German § 831 BGB, according to which the principal can excuse himself ‘where the principal has exercised due care in the selection of the agent and – in so far as he has to provide equipment or tools or has to supervise the performance of the duties – has acted with due care in such provision and supervision, or where the loss would have occurred even if such care had been exercised’ (translated by F Wagner von Papp/J Fedtke in European Tort Law: Basic Texts² [fn 46] 144). 61 See, e.g., Article 429 of the Polish Civil Code, according to which the principal is liable for the agent’s unlawful (but not necessarily culpable) conduct, unless the principal has chosen the agent carefully or has chosen a professional agent, and Article 430 of the Polish Civil Code, which makes the principal strictly liable for the culpable conduct of the agent if the agent is a subordinate of the principal. See also Article 3-19-2 of Act no 11000 of 15 April 1683, King Christian the Fifth’s law of Denmark. 62 On this divide, see S Galand-Carval, ‘Comparative Report on Liability for Damage Caused by Others’, in J Spier (ed), Unification of Tort Law: Liability for Damage Caused by Others (2003) 289 (290). 63 One might even draw support for such a solution from the analogy to a historic precedent – the Roman legal concept of noxal liability for slaves, whom the law at the time treated as property and not as persons, see, e.g., W Buckland/A McNair, Roman Law and Common Law (1952) 359 ff; AJB Sirks, ‘Delicts’, in D Johnston (ed), The Cambridge Companion to Roman Law (2015) 246 (265 ff).  



functional equivalence’). However, at least in those jurisdictions which consider vicarious liability a variant of fault liability, holding the principal liable for the wrongdoing of another, it may be challenging to identify the benchmark against which the operations of non-human helpers will be assessed in order to mirror the misconduct element of human auxiliaries. The potential benchmark should take into account that in many areas of application non-human auxiliaries are safer, that is, less likely to cause damage to others than human actors, and the law should at least not discourage their use.64

5. Strict liability

Particularly from the 19th century onwards, legislators often responded to risks brought about by new technologies by introducing strict liability, replacing the notion of responsibility for misconduct with liability irrespective of fault, attached to specific risks linked to some object or activity which was deemed permissible, though at the expense of a residual risk of harm linked to it.65 So far, these changes to the law have concerned, for example, means of transport (such as trains or motor vehicles), energy (such as nuclear power, power lines), or pipelines.66 Even before that, tort laws often responded to increased risks by shifting the burden of proving fault, making it easier for the victim to succeed if the defendant was in control of particular sources of harm such as animals67 or defective immovables.68 The landscape of strict liability in Europe is quite varied. Some legal systems are restrictive and have made very limited use of such alternative liability regimes (often expanding fault liability instead). Others are more or less generous, while

64 R Abbott, ‘The Reasonable Computer: Disrupting the Paradigm of Tort Liability’, 86 Geo. Wash. L. Rev. 1 (2018). 65 See the contributions to M Martín-Casals (ed), The Development of Liability in Relation to Technological Change (2010). See also the Commission Staff Working Document (fn 8) 8 f. 66 See the overview provided by BA Koch/H Koziol, ‘Comparative Conclusions’, in BA Koch/H Koziol (eds), Unification of Tort Law: Strict Liability (2002) 395 ff. 67 See the notes to Article VI-3:203 DCFR, describing the rather diverse landscape in Europe, which sometimes holds the keeper of the animal regardless of fault, or based on a presumption of the keeper’s fault (in particular of an omission). Some jurisdictions also distinguish between the types of animal (wild or farm animals). 68 See the notes to Article VI-3:202 DCFR, which sets out strict liability ‘for damage caused by the unsafe state of an immovable’, inspired by existing laws in Europe, which typically either provide for strict liability or liability based on a presumption of flawed maintenance (which may or may not be rebutted).






not allowing analogy to individually defined strict liabilities (with the sole exception of Austria69). Some Member States have also introduced a (more or less broad) general rule of strict liability, typically for some ‘dangerous activity’,70 which the courts in those jurisdictions interpret quite differently.71 In some jurisdictions, the keeping of a thing triggers strict liability,72 which is another way to provide for a rather far-reaching deviation from the classic fault requirement.

Existing rules on strict liability for motor vehicles (which can be found in many, but not all EU Member States) or aircraft may well also be applied to autonomous vehicles or drones, but there are many potential liability gaps.73 Strict liability for the operation of computers, software or the like is so far widely unknown in Europe, even though there are some limited examples where countries provide for the liability of the operator of some (typically narrowly defined) computer system, such as databases operated by the state.74

The advantage of strict liability for the victim is obvious, as it exempts them from having to prove any wrongdoing within the defendant’s sphere, let alone the causal link between such wrongdoing and the victim’s loss, allowing the victim to focus instead only on whether the risk brought about by the technology materialised by causing them harm. However, one has to bear in mind that strict liabilities are often coupled with liability caps or other restrictions in order to counterbalance the increased risk of liability of those benefitting from the technology. Such caps are often further justified as contributing to making the risk insurable, as

69 BA Koch/H Koziol, ‘Austria’, in Unification of Tort Law: Strict Liability (fn 66) 14. 70 See Article 1064 of the Croatian Civil Obligations Act (dangerous things and activities); § 2925 of the Czech Civil Code (extraordinarily dangerous operation); § 1056 of the Estonian Law of Obligations Act (major source of danger); § 6:535 of the Hungarian Civil Code (extraordinarily dangerous activity); Article 2050 of the Italian Civil Code (dangerous activity); Article 2347 of Latvian Civil Law (activity associated with increased risk for other persons); § 432 of the Slovakian Civil Code (extremely dangerous operation); Article 149 ff of the Slovenian Obligations Code (dangerous objects or activities). The French liability for things (Article 1242 of the Civil Code) is another peculiar solution not limited to any specific object or risk. 71 See also the variety of causes of action in the Czech Civil Code: Article 2924 (damage caused by an operating gainful activity unless all reasonable care exercised), Article 2925 (damage caused by a particularly hazardous operation, ‘if the possibility of serious damage cannot be reasonably excluded in advance even by exercising due care’), Article 2937 (damage caused by a thing, though with a reversal of the burden of proof that the defendant had properly supervised it). 72 See Article 1242 of the French Civil Code and Article 1384 of the Belgian Civil Code. 73 See supra B.II. 74 See §§ 89e, 91b paragraph 8 of the Austrian Gerichtsorganisationsgesetz (Court Organisation Act).


strict liability statutes often require adequate insurance cover for the liability risks.

A factor which any legislator considering the introduction of strict liability will have to take into account is the effect that such introduction may have on the advancement of the technology, as some may be more hesitant to actively promote technological research if the risk of liability is considered a deterrent. On the other hand, this alleged chilling effect of tort law is even stronger as long as the question of liability is entirely unresolved and therefore unpredictable, whereas the introduction of a specific statutory solution at least more or less clearly delimits the risks and contributes to making them insurable.

6. Product liability

For more than 30 years, the principle of strict producer liability for personal injury and damage to consumer property caused by defective products has been an important part of the European consumer protection system. At the same time, the harmonisation of strict liability rules has helped to achieve a level playing field for producers supplying their products to different countries. However, while all EU Member States have implemented the Product Liability Directive (PLD75), liability for defective products is not entirely harmonised. Apart from differences in implementing the directive,76 Member States also continue to preserve alternative paths to compensation in addition to the strict liability of producers for defective products under the PLD.

The PLD is based on the principle that the producer (broadly defined along the distribution channel) is liable for damage caused by the defect in a product they have put into circulation for economic purposes or in the course of their business.77 Interests protected by the European product liability regime are limited to life and health and consumer property. The PLD was drawn up on the basis of the technological neutrality principle. According to the latest evaluation of the directive’s performance, its regime continues to serve as an effective tool and contributes to enhancing consumer

75 Council Directive of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (85/374/EEC). 76 Apart from variations allowed by the directive itself (Article 15 f PLD), there is, for example, no accord on whether the threshold of 500 ECU in Article 9 lit b PLD is a minimum loss (allowing recovery for the entire harm as long as it exceeds this amount) or a deductible (granting only compensation for any loss above the minimum). 77 Articles 4 and 7 PLD.


protection, innovation, and product safety.78 Nonetheless, some key concepts underpinning the EU regime, as adopted in 1985, are today an inadequate match for the potential risks of emerging digital technologies.79 The progressive sophistication of the market and the pervasive penetration of emerging digital technologies reveal that some key concepts require clarification. This is because the key aspects of the PLD’s liability regime have been designed with traditional products and business models in mind – material objects placed on the market by a one-time action of the producer, after which the producer does not maintain control over the product. Emerging digital technologies put the existing product liability regime to the test in several respects concerning the notions of product, defect and producer.

The scope of the product liability regime rests on the concept of product. For the purposes of the directive, products are defined as movable objects, even when incorporated into another movable or immovable object, and include electricity. So far, the distinction between products and services has not encountered insurmountable difficulties. However, emerging digital technologies, especially AI systems, challenge that clear distinction and raise open questions. In AI systems, products and services permanently interact and a sharp separation between them is unfeasible. It is also questionable whether software is covered by the legal concept of product or product component. It is particularly discussed whether the answer should be different for embedded and non-embedded software, including over-the-air software updates or other data feeds. In any case, where such updates or other data feeds are provided from outside the EEA, the victim may not have anybody to turn to within the EEA, as there will typically not be an intermediary importer domiciled within the EEA in the case of direct downloads.

The second key element of the product liability regime is the notion of defect. Defectiveness is assessed on the basis of the safety expectations of an average consumer,80 taking into account all relevant circumstances. The interconnectivity of products and systems makes it hard to identify defectiveness. Sophisticated AI autonomous systems with self-learning capabilities also raise the question of whether unpredictable deviations in the decision-making path can be treated as defects. Even if they constitute a defect, the state-of-the-art defence may apply.

78 Commission Staff Working Document, Evaluation of Council Directive 85/374/EEC, SWD (2018) 157. 79 This was also acknowledged by the (Fifth) Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee on the Application of the Council Directive on the approximation of the laws, regulations, and administrative provisions of the Member States concerning liability for defective products (85/374/EEC), COM(2018) 246 final, 8 f. 80 ‘Safety which a person is entitled to expect’ (Article 6 paragraph 1 PLD).  


Additionally, the complexity and the opacity of emerging digital technologies make it harder for the victim to discover and prove the defect and to prove causation. As the PLD focuses on the moment when the product was put into circulation as the key turning point for the producer’s liability, this cuts off claims for anything the producer may subsequently add via some update or upgrade. In addition, the PLD does not provide for any duties to monitor the products after putting them into circulation.81

Highly sophisticated AI systems may not be finished products that are put on the market in a traditional way. The producer may retain some degree of control over the product’s further development in the form of additions or updates after circulation. At the same time, the producer’s control may be limited and non-exclusive if the product’s operation requires data provided by third parties or collected from the environment, and depends on self-learning processes and personalising settings chosen by the user. This dilutes the traditional role of a producer, as a multitude of actors contribute to the design, functioning and use of the AI product/system.

This is related to another limitation of liability – most Member States adopted the so-called development risk defence, which allows the producer to avoid liability if the state of scientific and technical knowledge at the time when he put the product into circulation was not such as to enable the existence of the defect to be discovered (Article 7 lit e PLD). The defence may become much more important in practice with regard to sophisticated AI-based products.

It has been mentioned that the PLD regime protects life and health as well as consumer property. With regard to the latter, it is not clear whether it covers damage to data, as data may not be an ‘item of property’ within the meaning of Article 9 lit b PLD.

7. Contributory conduct

While balancing liability in light of the victim’s own conduct contributing to their harm does not raise new problems in the era of emerging digital technologies, one should keep in mind that all challenges listed above with respect to the tortfeasor apply correspondingly to the victim. This is particularly true if the victim was involved in or somehow benefited from the operation of some smart system or other

81 On the manifold difficulties with the PLD today, see P Machnikowski, ‘Conclusions’, in P Machnikowski (ed), European Product Liability: An Analysis of the State of the Art in the Era of New Technologies (2016) 669 (691 ff).  


interconnected digitalised device, e.g. by installing (or failing to install) updates, by modifying default system settings, or by adding their own digital content. Apart from collisions of autonomous vehicles, further obvious examples include the home owner who fails to properly install and combine multiple components of a smart home system despite adequate instructions. In the former case, two similar risks meet, whereas in the latter the risks of an emerging digital technology have to be weighed against failure to abide by the expected standard of care.

8. Prescription

While there is a certain trend throughout Europe to reform the laws regarding prescription of tort claims,82 it is unproblematic to apply these rules to scenarios involving emerging digital technologies. However, one should be aware that, particularly in jurisdictions where the prescription period is comparatively short,83 the complexities of these technologies, which may delay the fact-finding process, may run counter to the interests of the victim by cutting off their claim prematurely, before the technology could be identified as the source of their harm.

9. Procedural challenges

In addition to the problems of substantive tort law already indicated, the application of liability frameworks in practice is also affected by challenges in the field of procedural law. Considering the tendency in the case law of some Member States to alleviate the burden of proving causation in certain complex matters (such as medical malpractice),84 one could easily envisage that courts might be similarly supportive of victims of emerging digital technologies who have a hard time proving that the technology in question was the actual cause of their harm. However, this is again likely to differ from case to case and most certainly from Member State to Member State. As far as purely procedural issues are concerned, there may equally be problems, as well-established procedural law concepts like

82 See, e.g., BA Koch, ‘15 Years of Tort Law in Europe – 15 Years of European Tort Law?’, in E Karner/B Steininger (eds), European Tort Law 2015 (2016) 704 (719 f). 83 For example, only one year, as in Spain (Article 1968 of the Civil Code), as opposed to, e.g., three to six years elsewhere. 84 See, e.g., BA Koch, ‘Medical Liability in Europe: Comparative Analysis’, in BA Koch (ed), Medical Liability in Europe (2011) 611 (632 ff).  


prima facie evidence may be difficult to apply to situations involving emerging technological developments. The ensuing differences in the outcome of cases which result from differences in the procedural laws of the Member States may be alleviated at least in part by harmonising the rules on the burden of proof.

10. Insurance

An obligatory insurance scheme for certain categories of AI/robots has been proposed as a possible solution to the problem of allocating liability for damage caused by such systems (sometimes combined with compensation funds for damage not covered by mandatory insurance policies).85 However, an obligatory insurance scheme cannot be considered the only answer to the problem of how to allocate liability and cannot completely replace clear and fair liability rules. Insurance companies form a part of the whole social ecosystem and need liability rules to protect their own interests in relation to other entities (redress rights). Moreover, in order to keep emerging digital technologies as safe as possible and, therefore, trustworthy, a duty of care should be affected by insurance as little as possible. Yet, at the same time, cases of very high or catastrophic risks need to be insured in order to secure compensation for potentially serious damage. Hence, the question is whether first-party or third-party insurance, or a combination of both, should be required or at least recommended, and in which cases.86

Currently, EU law requires obligatory liability (third-party) insurance, e.g. for the use of motor vehicles,87 air carriers and aircraft operators,88 or carriers of passengers by sea.89 Laws of the Member States require obligatory liability insurance in various other cases, mostly coupled with strict liability schemes, or for practising certain professions.

85 See points 57 et seq. of the EP Resolution cited in fn 4 above. 86 The whole insurance system is a combination of public and private obligatory or optional insurance that takes the form of first-party or third-party insurance. 87 Directive 2009/103/EC of the European Parliament and of the Council of 16 September 2009 relating to insurance against civil liability in respect of the use of motor vehicles, and the enforcement of the obligation to insure against such liability. 88 Regulation (EC) No 785/2004 of the European Parliament and of the Council of 21 April 2004 on insurance requirements for air carriers and aircraft operators. 89 Regulation (EC) No 392/2009 of the European Parliament and of the Council of 23 April 2009 on the liability of carriers of passengers by sea in the event of accidents.


New optional insurance policies (e.g. cyber-insurance) are offered to those interested in covering both first- and third-party risks. Overall, the insurance market is quite heterogeneous and can adapt to the requirements of all involved parties. However, this heterogeneity, combined with a multiplicity of actors involved in an insurance claim, can lead to high administrative costs on the side of both insurance companies and potential defendants, the lengthy processing of insurance claims, and unpredictability of the final result for the parties involved.

Insurers traditionally use historical claims data to assess risk frequency and severity. In the future, more complex systems, using highly granular risk profiles based on data analytics, including by analysing data logged or streamed in real time, will gain ground. In the light of this, the issue of access to data for insurance companies is very pertinent. The cost efficiency of the claims process is also an important consideration.90

C. Perspectives on liability for emerging digital technologies

The promise of benefits and remarkable opportunities for society enabled by a multitude of uses and applications of emerging digital technologies is incontestable. Despite these indisputable gains, the pervasive use of increasingly sophisticated systems and combinations of technologies, in multiple economic sectors and societal contexts, creates risks and can cause losses. The adequacy of current legal liability regimes in Europe to fully compensate damage caused by these technologies is, however, questionable.91 To that end, certain key concepts underpinning classical liability regimes need legal clarification. Furthermore, to deal with some situations, the formulation of specific rules, principles and concepts might also be necessary to accommodate legal liability regimes to new realities.

90 F Pütz et al, ‘Reasonable, Adequate and Efficient Allocation of Liability Costs for Automated Vehicles: A Case Study of the German Liability and Insurance Framework’, European Journal of Risk Regulation (2018) 9.3: 548-563. 91 Supra B.III.


1. Challenges of emerging digital technologies for liability law ([1]–[2])

[1] Digitalisation brings fundamental changes to our environments, some of which have an impact on liability law. This affects, in particular, the (a) complexity, (b) opacity, (c) openness, (d) autonomy, (e) predictability, (f) data-drivenness, and (g) vulnerability of emerging digital technologies.

[2] Each of these changes may be gradual in nature, but the dimension of gradual change, the range and frequency of situations affected, and the combined effect, results in disruption.

Digitalisation has changed and is still changing the world. The law of liability in European jurisdictions has evolved over the course of many centuries and has already survived many disruptive developments. It therefore does not come as a surprise that, in principle, the law of liability is also able to cope with emerging digital technologies. However, there are some fundamental changes, each of which may be only gradual in nature, but whose dimension and combined effect results in disruption.92

(a) Complexity: Modern-day hardware can be a composite of multiple parts whose interaction requires a high degree of technical sophistication. Combining it with an increasing percentage of digital components, including AI, makes such technology even more complex and shifts it far away from the archetypes of potentially harmful sources on which the existing rules of liability are based. Where, for example, an AV interacts with other AVs, a connected road infrastructure and various cloud services, it may be increasingly difficult to find out where a problem has its source and what ultimately caused an accident. The plurality of actors in digital ecosystems makes it increasingly difficult to find out who might be liable for the damage caused. Another dimension of this complexity is the internal complexity of the algorithms involved.

92 See also the Commission Staff Working Document (fn 8) 9 ff, 22 f.  




(b) Opacity: The more complex emerging digital technologies become, the less those taking advantage of their functions or being exposed to them can comprehend the processes that may have caused harm to themselves or to others. Algorithms often no longer come as more or less easily readable code, but as a black box that has evolved through self-learning and which we may be able to test as to its effects, but not so much to understand. It is therefore becoming increasingly difficult for victims to identify such technologies as even a possible source of harm, let alone why they have caused it. Once a victim has successfully claimed damages from a tortfeasor, the tortfeasor may face similar difficulties at the redress level.

(c) Openness: Emerging digital technologies are not completed once put into circulation, but by their nature depend upon subsequent input, in particular more or less frequent updates or upgrades. Often they need to interact with other systems or data sources in order to function properly. They therefore need to remain open by design, i.e. permit external input either via some hardware plug or through some wireless connection, and come as hybrid combinations of hardware, software, continuous software updates, and various continuous services. This shift from the classic notion of a product completed at a certain point in time to a merger of products and ongoing services has a considerable impact on, among other things, product liability.

(d) Autonomy: Emerging new technologies increasingly perform tasks with less, or entirely without, human control or supervision. They are themselves capable of altering the initial algorithms due to self-learning capabilities that process external data collected in the course of the operation. The choice of such data and the degree of impact it has on the outcome is constantly adjusted by the evolving algorithms themselves.

(e) Predictability: Many systems are designed to not only respond to pre-defined stimuli, but to identify and classify new ones and link them to a self-chosen corresponding reaction that has not been pre-programmed as such. The more external data systems are capable of processing, and the more they are equipped with increasingly sophisticated AI, the more difficult it is to foresee the precise impact they will have once in operation.

(f) Data-drivenness: Emerging digital technologies increasingly depend on external information that is not pre-installed, but generated either by built-in sensors or communicated from the outside, either by regular data sources or by ad hoc suppliers. Data necessary for their proper functioning may, however, be


flawed or missing altogether, be it due to communication errors or problems with the external data source, or due to flaws in the internal sensors or the built-in algorithms designed to analyse, verify and process such data.

(g) Vulnerability: Emerging digital technologies are typically subject to more or less frequent updates and operate in more or less constant interaction with outside information. The built-in features granting access to such input make these technologies particularly vulnerable to cybersecurity breaches. These may cause the system itself to malfunction and/or modify its features in a way more likely to cause harm.

2. Impact of these challenges and need for action ([3]–[4])

[3] While existing rules on liability offer solutions with regard to the risks created by emerging digital technologies, the outcomes may not always seem appropriate, given the failure to achieve:
(a) a fair and efficient allocation of loss, in particular because it could not be attributed to those:
– whose objectionable behaviour caused the damage; or
– who benefitted from the activity that caused the damage; or
– who were in control of the risk that materialised; or
– who were cheapest cost avoiders or cheapest takers of insurance.
(b) a coherent and appropriate response of the legal system to threats to the interests of individuals, in particular because victims of harm caused by the operation of emerging digital technologies receive less or no compensation compared to victims in a functionally equivalent situation involving human conduct and conventional technology;
(c) effective access to justice, in particular because litigation for victims becomes unduly burdensome or expensive.

[4] It is therefore necessary to consider adaptations and amendments to existing liability regimes, bearing in mind that, given the diversity of emerging digital technologies and the correspondingly diverse range of risks these may pose, it is impossible to come up with a single solution suitable for the entire spectrum of risks.


Existing liability regimes in all Member States already now provide answers to the question of whether the victim of any risk that materialises can seek compensation from another, and under what conditions.93 However, given the challenges described above, these answers may not always be satisfying when harm is caused by emerging digital technologies, for various reasons.

One reason why existing rules on liability may produce unsatisfactory results is that loss resulting from emerging digital technologies is not allocated to the party who is the most appropriate to bear that loss. As a general rule, loss normally falls on the victim themselves (casum sentit dominus) unless there is a convincing reason for shifting it to another party to whom the loss can be attributed. Reasons for attributing loss to another party vary depending on which type of liability is at stake. Under fault-based liability, the pivotal point is that the tortfeasor’s objectionable and avoidable behaviour caused the damage, which in turn translates both into a corrective justice argument and an argument about providing the right incentives to avoid harm. Under many regimes of strict liability, the pivotal points are benefit and control, i.e. that the liable person exposed others to the risks of an activity from which the liable person benefitted and which was under their control. This again translates into arguments both of corrective justice and of the right incentives. Economic analysis rephrased these elements by putting the stress on the cheapest cost avoider or the cheapest taker of insurance, with the cheapest cost avoider usually being precisely the person who could simply desist from objectionable behaviour, or who controls a risk and its extent.

Illustration 2. For traditional road vehicles, it used to be the individual owner (O) who was the most appropriate person to be liable where damage was caused by the vehicle’s operation. Regardless of whether or not the damage was caused by O’s intent or negligence, it was definitely O who benefitted from the operation in general, who had the highest degree of control of the risk by deciding when, where and how to use, maintain and repair the vehicle, and who was therefore also the cheapest cost avoider and taker of insurance. Where modern autonomous vehicles (AVs) are privately owned, it is still the individual owner who decides when to use the AV and puts the destination into the system, but all other decisions (route, speed, etc.) are taken by algorithms provided by the producer (P) of the AV or a third party acting on P’s behalf. P is also in charge of maintaining the vehicle. P may therefore be the much more appropriate person to be liable than O.

Existing rules on liability may also lead to inappropriate results for reasons related more to coherence and consistency, in particular taking into account the principle of functional equivalence, such as where compensation is denied in a

93 B.III.


situation involving emerging digital technologies when there would be compensation in a functionally equivalent situation involving human conduct and conventional technology.

Illustration 3. Hospital H uses an AI-based surgical robot. Despite the fact that H and its staff have discharged all possible duties of care, damage is caused to patient P by some malfunctioning of the robot that nobody could have foreseen, and which is unrelated to the condition in which the robot was shipped. If P were not indemnified for the ensuing harm, this would be inconsistent with the outcome in the functionally equivalent situation in which H has employed a human doctor and is liable for that doctor’s comparable misconduct under national rules of vicarious liability (see C.8).

The application of traditional liability rules may also lead to unsatisfactory results because, while the victim might theoretically receive compensation, litigation would be unduly burdensome and expensive, leaving them without effective access to justice. This may be the case if the liability requirements they would have to prove are either entirely unsuitable for the risk posed by emerging digital technologies or too difficult to establish. Leaving the victim uncompensated or undercompensated in such cases may be undesirable, as it may effectively deprive the victim of basic protection with regard to significant legally protected interests of theirs (such as life, health, bodily integrity and property, or other important rights). In many situations, a particular outcome is not satisfactory for two or more of the above reasons.

It is clear from the outset that no one-size-fits-all solution can (or should) be offered. Instead, it is necessary to consider a range of options, with the choice within that range to be determined by various factors. Various policy arguments have shown that strict liability of the operator of some emerging digital technology may be justified, given the competing interests of said operator and of the victim, as well as the victim’s alternatives to getting compensation ([9]–[12]). In the case of a product defect, the manufacturer of that product may be the appropriate addressee of claims arising out of such defects ([13]–[15]). However, adapting the notion of fault liability by specifying further duties of care ([16]–[17]), or by shifting the burden of proving fault ([22](b), [24](b), [27]), for example, may already resolve disruptive effects of emerging digital technologies in the field of tort law, if necessary and appropriate at all. Remaining gaps may often be filled by extending vicarious liability to the use of autonomous technology in lieu of human auxiliaries ([18]). If there are systemic practical difficulties in proving causation and other factors, it may be necessary to make some adjustments in this respect ([22], [24], [25]–[26], [29]–[30]). An insurance requirement may be necessary in some cases to ensure that victims will get compensation ([33]). Compensation funds can also play a complementary role ([34]).


3. Bases of liability ([5]–[7])

[5] Comparable risks should be addressed by similar liability regimes; existing differences among these should ideally be eliminated. This should also determine which losses are recoverable to what extent.

[6] Fault liability (whether or not fault is presumed), as well as strict liability for risks and for defective products, should continue to coexist. To the extent these overlap, thereby offering the victim more than one basis to seek compensation against more than one person, the rules on multiple tortfeasors ([31]) govern.

[7] In some digital ecosystems, contractual liability or other compensation regimes will apply alongside or instead of tortious liability. This must be taken into account when determining to what extent the latter needs to be amended.

In most cases, also with emerging digital technologies, more than one basis of liability may be invoked if the risks they carry with them materialise. These bases of liability may either all or in part be available to the immediate victim, or to the various parties involved. This raises the question of whether the first person who paid compensation to the victim can recover at least part of their compensation payment from another party.

Illustration 4. For example, if the operator of an autonomous vehicle (O) is held strictly liable for any losses caused by its operation, but the producer of the autonomous vehicle (P) is also liable because the accident was caused by a product defect, O may pass on some or all of that risk on the recourse level to P, if it is O, or O’s insurer, who has paid damages to the victim in the first place.94

Drawing the line between liability in tort and contractual liability is often difficult. Doing so becomes all the more important in jurisdictions that do not allow concurrent claims under both regimes, such as France.95 Jurisdictions that do allow concurrent claims tend to overcome deficiencies of tort law by shifting tort cases into the realm of contractual liability, for example by creating quasi-contractual obligations with the prime purpose of allowing the beneficiaries of such obligations to avail themselves of the benefits enjoyed by a contractual claimant.96 However,

94 See the choices made in the PLD in this respect. 95 See M Martín-Casals (ed), The Borderlines of Tort Law: Interactions With Contract Law (2019). 96 See, e.g., the concept of a contract with protective duties in relation to third parties, which was (and is still being) used in Austria to pursue direct claims of the victims of defective products against the manufacturer alongside the strict liability regime of the PLD.


there is always only a limited group of victims who benefit from such contract theories, and victims who are outside the scope of application may still face serious difficulties. To the extent that victims of emerging digital technologies already have claims under such contract theories, the liability gap created by the disruptive effects of these technologies may be narrow or even non-existent, at least with regard to the immediate victims of the risks of the technologies in question. However, those paying compensation to them under contract liability may still want to seek recourse against, for example, the manufacturer of the product they sold, which caused harm to their customers or users. The availability of a contractual claim of recourse against another party may also come into play in deciding whether or not the party in question may be the appropriate addressee of the victim’s tort claim.97

In certain damage scenarios, such as healthcare, there may be other systems in place to protect the immediate victims. This has to be taken into account when determining to what extent (and where exactly) emerging digital technologies pose challenges to existing liability regimes.

4. Legal personality ([8])

[8] For the purposes of liability, it is not necessary to give autonomous systems a legal personality.

Over the years, there have been many proposals for extending some kind of legal personality to emerging digital technologies, some even dating from the last century.98 In recent times, the EP report on ‘Civil Law Rules on Robotics’99 called on the Commission to create a legislative instrument to deal with liability caused by robots. It also asked the Commission to consider ‘a specific legal status for

97 This is one reason why the manufacturer of the final product is typically singled out as the primary person to address product liability claims to, because they may have a contractual claim against the producer of a component (or at least have assigned the risk of harm to third parties internally). 98 See, e.g., L Solum, ‘Legal Personhood for Artificial Intelligences’, 70 NC L Rev 1231 (1992). 99 Footnote 4 above.


robots’, ‘possibly applying electronic personality’, as one liability solution.100 Even in such a tentative form, this proposal proved highly controversial.101

Legal personality comes in many forms, even for natural persons, such as children, who may be treated differently from adults. The best-known class of other-than-natural persons, corporations, has long enjoyed only a limited set of rights and obligations that allows them to sue and be sued, enter into contracts, incur debt, own property, and be convicted of crimes. Giving robots or AI a legal personality would not require including all the rights natural persons, or even companies, have. Theoretically, a legal personality could consist solely of obligations. Such a solution, however, would not be practically useful, since civil liability is a property liability, requiring its bearer to have assets.

Still, the experts believe there is currently no need to give a legal personality to emerging digital technologies. Harm caused by even fully autonomous technologies is generally reducible to risks attributable to natural persons or existing categories of legal persons, and where this is not the case, new laws directed at individuals are a better response than creating a new category of legal person.102 Any sort of legal personality for emerging digital technologies may raise a number of ethical issues. More importantly, it would only make sense to go down that road if it helps legal systems to tackle the challenges of emerging digital technologies.103

Any additional personality should go hand-in-hand with funds assigned to such electronic persons, so that claims can be effectively brought against them. This would amount to putting a cap on liability and – as experience with corporations has shown – would lead to subsequent attempts to circumvent such restrictions by pursuing claims against natural or legal persons to whom electronic persons can be attributed, effectively ‘piercing the electronic veil’.104 In addition, in order to give a real dimension to liability, electronic agents would have to be able to acquire assets on their own. This would require the resolution of several legislative

100 Id. 101 See the Open Letter to the European Commission Artificial Intelligence and Robotics (2018). 102 R Abbott/A Sarch, ‘Punishing Artificial Intelligence: Legal Fiction or Science Fiction’, UC Davis Law Review [forthcoming 2019]. 103 U Pagallo, ‘Apples, oranges, robots: four misunderstandings in today’s debate on the legal status of AI systems’, in Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376 (2018) 2133. See also G Wagner, ‘Roboter als Haftungssubjekte? Konturen eines Haftungsrechts für autonome Systeme’, in F Faust/H-B Schäfer (eds), Zivilrechtliche und rechtsökonomische Probleme des Internets und der künstlichen Intelligenz (2019) 1. 104 BA Koch, ‘Product Liability 2.0 – Mere Update or New Version?’ in S Lohsse/R Schulze/D Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (2019) 99 (115).


problems related to their legal capacity and how they act when performing legal transactions.

Illustration 5. Imagine liability for a fully autonomous car were on the car instead of its operator. Victims of accidents would receive compensation only if insurance is taken out for the car and someone (who?) pays the premiums, or if someone (who?) provides the car with assets from which damages could be paid. If such assets did not suffice to fully compensate the victims of an accident, said victims would have a strong incentive to seek compensation from the person benefiting from the operation of the car instead. If the car’s assets were sufficient to pay the same level of compensation as under existing liability and insurance regimes, there would not be any cause for discussion, but in that case, giving the car a legal personality would be a mere formality and not really change the situation.

The experts wish to stress, however, that they only look at the liability side of things and do not take any kind of position on the future development of company law – whether an AI could act as a member of a board, for example.

5. Operator’s strict liability ([9]–[12])

[9] Strict liability is an appropriate response to the risks posed by emerging digital technologies, if, for example, they are operated in non-private environments and may typically cause significant harm.

[10] Strict liability should lie with the person who is in control of the risk connected with the operation of emerging digital technologies and who benefits from their operation (operator).

[11] If there are two or more operators, in particular (a) the person primarily deciding on and benefitting from the use of the relevant technology (frontend operator) and (b) the person continuously defining the features of the relevant technology and providing essential and ongoing backend support (backend operator), strict liability should lie with the one who has more control over the risks of the operation.

[12] Existing defences and statutory exceptions from strict liability may have to be reconsidered in the light of emerging digital technologies, in particular if these defences and exceptions are tailored primarily to traditional notions of control by humans.

Existing strict liability rules in the Member States may already apply to emerging digital technologies. The best example of this is liability regimes for motorised


vehicles that will most likely already apply to autonomous cars, or for aircraft (which may already include at least some drones). However, the situation in Europe still varies a lot. Some jurisdictions have more or less generous general clauses, or at least allow analogy to existing statutory regimes, whereas others do without the fault requirement in only very few, narrowly defined situations, but often expand the notion of fault. Strict liability typically only applies in cases of physical harm to persons or property, but not for pure economic loss. Even in the same jurisdiction, there can be considerable differences between the various strict liability regimes, as shown by the diverse range of defences available to the liable person, or by the legislator’s choice in favour of or against caps.

The mere fact that technology is new is not justification enough for introducing strict liability. Nevertheless, emerging digital technologies that may typically cause significant harm105 comparable to the risks already subject to strict liability should also be subject to strict liability. This is because victims should be treated alike if they are exposed to and ultimately harmed by similar dangers. For the time being, this applies primarily to emerging digital technologies which move in public spaces, such as vehicles, drones, or the like. Smart home appliances will typically not be proper candidates for strict liability. It is in particular objects of a certain minimum weight, moved at a certain minimum speed, that are candidates for additional bases of strict liability, such as AI-driven delivery or cleaning robots, at least if they are operated in areas where others may be exposed to risk. Strict liability may not be appropriate for merely stationary robots (e.g. surgical or industrial robots), even if AI-driven, which are exclusively employed in a confined environment, with a narrow range of people exposed to risk, who in addition are protected by a different – including contractual – regime (in the illustrations below, patients protected by contractual liability or factory staff covered by workmen’s compensation schemes).106

Illustration 6. The sensors controlling the path of an AI-driven robot transporting heavy component parts in Factory F malfunction, causing the robot to leave its intended path, exit the factory and run into passer-by P on the street. Even if existing rules of strict motor vehicle liability may not apply in this case, P should still be able to seek compensation from F without having to prove that F or one of its staff is at fault.

If the relevant risk threshold for an emerging digital technology is reached and it therefore seems appropriate to make the operation of this technology subject to

105 The significance being determined by the interplay of the potential frequency and the severity of possible harm. 106 See also Illustration 3 above.


a strict liability regime, said regime should share the same features as other no-fault liabilities for comparable risks. This also applies to the question of which losses are recoverable and to what extent, including whether caps should be introduced and whether non-pecuniary damage is recoverable. The introduction of strict liability should offer victims easier access to compensation, without excluding, of course, a parallel fault liability claim if its requirements are fulfilled.107 Furthermore, while strict liability will typically channel liability onto the liable person (for example, the operator of the technology), this person will retain the right to seek recourse from others contributing to the risk, such as the producer.

The experts have discussed extensively whether strict liability for emerging digital technologies should be on the owner/user/keeper of the technology rather than on its producer. It has been pointed out, in particular in the context of autonomous cars, that while the vast majority of accidents used to be caused by human error in the past, most accidents will be caused by the malfunctioning of technology in the future (though not necessarily of the autonomous car itself). This in turn could mean that it would not be appropriate to hold the owner/user/keeper strictly liable in the first place, because it is the producer who is the cheapest cost avoider and who is primarily in a position to control the risk of accidents. On the other hand, it is still the owner/user/keeper who decides when, where and for which purposes the technology is used, and who directly benefits from its use. Also, if strict liability for operating the technology (besides product liability) were on the producer, the cost of insurance would be passed on to the owners anyway through the price mechanism.

On balance, the NTF of the Expert Group does not consider the traditional concepts of owner/user/keeper helpful in the context of emerging digital technologies. Rather, they prefer the more neutral and flexible concept of ‘operator’, which refers to the person who is in control of the risk connected with the operation of emerging digital technologies and who benefits from such operation. ‘Control’ is a variable concept, though, ranging from merely activating the technology, thus exposing third parties to its potential risks, to determining the output or result (such as entering the destination of a vehicle or defining the next tasks of a robot), and may include further steps in between, which affect the details of the operation from start to stop. However, the more sophisticated and more autonomous a system, the less someone exercises actual ‘control’ over the details of the operation, and defining and influencing the algorithms, for example by continuous updates, may have a greater impact than just starting the system.

107 See Illustration 9 below.


With emerging digital technologies, there is often more than just one person who may, in a meaningful way, be considered as ‘operating’ the technology. The owner/user/keeper may operate the technology on the frontend, but there is often also a central backend provider who, on a continuous basis, defines the features of the technology and provides essential backend support services. This backend operator may have a high degree of control over the operational risks others are exposed to. From an economic point of view, the backend operator also benefits from the operation, because that operator profits from data generated by the operation, because that operator’s remuneration is directly calculated on the basis of the duration, continuous nature or intensity of the operation, or because a one-off payment this backend operator has received reflects the estimated overall duration, continuous nature and intensity of the operation.

Illustration 7. An AV may be privately owned by an individual who decides whether to use the AV for shopping or for going on a business trip, and how often, when and where. This individual is the frontend operator. The producer of the AV or another service provider likewise controls the AV on a continuous basis, e.g. by continuously providing cloud navigation services, continuously updating map data or the AV software as a result of supervised fleet machine learning, and deciding when the AV needs what kind of maintenance. This person is the backend operator. Of course, the frontend and backend operator may also be the same person, such as in a ‘mobility as a service’ scheme (MaaS), where an AV is operated by a fleet operator who is also the backend operator.

Where there is more than one operator, such as a frontend and a backend operator, the experts find that strict liability should be on the one who has more control over the risks posed by the operation. While both control and benefit are decisive for qualifying a person as operator, the benefit is often very difficult to quantify, so relying only on benefit as the decisive factor for deciding who, out of two operators, should be liable would lead to uncertainty. Very often, the frontend operator will have more control, but where emerging digital technologies become more backend-focused, there may be cases where so much continuous control over the technology remains with the backend operator that – despite the fact that the technology is sold to individual owners – it is more convincing to hold the backend operator liable as the person primarily in a position to control, reduce and insure the risks associated with the use of the technology.

Ideally, in order to avoid uncertainty, the legislator should define which operator is liable under which circumstances, as well as all other matters that need to be regulated (concerning insurance, for example). For instance, the legislator could decide that, for AVs with a level of automation of 4 or 5, liability lies with the provider who runs the system and enters the AV in the national registry. This


provider would therefore also take out insurance and could pass on the premiums through the fees paid for its services. Where several providers fulfil the function of backend operators, one of them would have to be designated as the responsible operator for every AV.

What has been said so far can, in most Member States, largely be implemented by way of a simple extension of existing schemes of strict liability. However, as these schemes stand today in many Member States, they include a range of defences, exceptions and exclusions that may not be appropriate for emerging digital technologies, because, for example, they reflect a focus on continuous control by humans.

Illustration 8. Several national traffic liability schemes focus on the existence of a driver or allow for a defence in case of an unavoidable event or similar notions. These concepts do not translate properly into risk scenarios involving emerging digital technologies, because the driver of an AV more closely resembles a passenger and because liability (or the exclusion of it) can no longer be linked to human control, which is typically missing entirely, at least with level 5 AVs.

6. Producer’s strict liability ([13]–[15])

[13] Strict liability of the producer should play a key role in indemnifying damage caused by defective products and their components, irrespective of whether they take a tangible or a digital form.

[14] The producer should be strictly liable for defects in emerging digital technologies even if said defects appear after the product was put into circulation, as long as the producer was still in control of updates to, or upgrades on, the technology. A development risk defence should not apply.

[15] If it is proven that an emerging digital technology has caused harm, the burden of proving defect should be reversed if there are disproportionate difficulties or costs pertaining to establishing the relevant level of safety or proving that this level of safety has not been met. This is without prejudice to the reversal of the burden of proof referred to in [22] and [24].

In the opinion of the NTF of the Expert Group, the principle of producer responsibility, adopted in relation to traditional products, should also apply to emerging digital technologies. The motives behind it, such as a fair distribution of the risks and benefits associated with commercial production, the spreading of the costs of individual harm to all buyers of a given type of product, and


prevention, are fully valid even if the product or one of its essential components is in digital form. It is in line with the principle of functional equivalence (see [3](b)) that damage caused by defective digital content should trigger the producer’s liability, because digital content fulfils many of the functions tangible movable items used to fulfil when the PLD was drafted and passed. This is all the more true for defective digital elements of other products, some of which come separately from the tangible item (for example, as a control app to be downloaded onto the user’s smartphone), or as over-the-air updates after the product has been put into circulation (security updates for example), or as digital services provided on a continuous basis during the time the product is being used (for example, navigation cloud services).

When the defect came into being as a result of the producer’s interference with the product already put into circulation (by way of a software update for example), or the producer’s failure to interfere, it should be regarded as a defect in the product for which the producer is liable. The point in time at which a product is placed on the market should not set a strict limit on the producer’s liability for defects where, after that point in time, the producer or a third party acting on behalf of the producer remains in charge of providing updates or digital services. The producer should therefore remain liable where the defect has its origin (i) in a defective digital component or digital ancillary part or in other digital content or services provided for the product with the producer’s assent after the product has been put into circulation; or (ii) in the absence of an update of digital content, or of the provision of a digital service which would have been required to maintain the expected level of safety within the time period for which the producer is obliged to provide such updates.

Only recently, the EU has confirmed in Directive (EU) 2019/771 on the sale of goods that a seller is also liable for such digital elements being in conformity with the contract, including for updates provided for as long a period as the consumer may reasonably expect, and Directive (EU) 2019/770 establishes a similar regime for digital content and digital services. The proposed features of a producer’s strict liability are very much in the same vein and follow very much the same logic, though on different grounds.

As indicated above, emerging digital technologies are characterised by limited predictability. This phenomenon will intensify with the dissemination of machine learning. The interconnectedness of devices, as well as threats to cybersecurity, also contributes to difficulties in predicting the product’s performance. A defect in digital content or in a product with digital elements may therefore result from the impact of the environment in which the product operates or from the product’s evolution, for which the manufacturer only created a general framework

368

C. Perspectives on liability for emerging digital technologies

but which they did not design in detail. In view of the need to share benefits and risks efficiently and fairly, the development risk defence, which allows the producer to avoid liability for unforeseeable defects, should not be available in cases where it was predictable that unforeseen developments might occur. Features of emerging digital technologies, such as opacity, openness, autonomy and limited predictability (see [1]), may often result in unreasonable difficulties or costs for the victim to establish both what safety an average user is entitled to expect, and the failure to achieve this level of safety. At the same time, it may be significantly easier for the producer to prove relevant facts. This asymmetry justifies the reversal of the burden of proof. The victim should also benefit from an alleviation of evidentiary burden with regard to the causal relationship between a defect and the damage (see [26]). Producers’ strict liability for defective products should be supplemented with fault-based liability for failure to discharge monitoring duties (see [17](b)).

7. Fault liability and duties of care ([16]–[17])

[16] Operators of emerging digital technologies should have to comply with an adapted range of duties of care, including with regard to (a) choosing the right system for the right task and skills; (b) monitoring the system; and (c) maintaining the system.
[17] Producers, whether or not they incidentally also act as operators within the meaning of [10], should have to: (a) design, describe and market products in a way effectively enabling operators to comply with the duties under [16]; and (b) adequately monitor the product after putting it into circulation.

For the use of more traditional technologies, it is already recognised that their operators have to discharge a range of duties of care. They relate to the choice of technology, in particular in light of the tasks to be performed and the operator’s own skills and abilities; the organisational framework provided, in particular with regard to proper monitoring; and maintenance, including any safety checks and repair. Failure to comply with such duties may trigger fault liability regardless of whether the operator may also be strictly liable for the risk created by the technology.


Illustration 9. Despite adverse weather conditions due to a heavy storm, which were entirely foreseeable, retailer (R) continues to employ drones to deliver goods to customers. One of the drones is hit by a strong wind, falls to the ground and severely injures a passer-by.108 R may not only be strictly liable for the risks inherent in operating drones, but also for its failure to interrupt the use of such drones during the storm.

In many national legal systems, courts have raised the relevant duty of care to a point where it is difficult to draw the line between fault liability and strict liability. With emerging digital technologies, such duties of care – despite all new opportunities and safety-enhancing technologies these systems may feature – are often magnified even more. Illustration 10. Airline A buys a plane from producer P. A new AI element of the auto pilot may, under very exceptional circumstances, cause the plane to crash if the software is not manually disabled by the pilot. Airline A has a duty of care to make itself familiar with the new feature, to monitor the plane and to make sure pilots receive appropriate training and exchange information about and experience of dealing with the new software. If A breaches this duty, A may be liable under fault liability (without prejudice to existing international legal instruments that may limit A’s liability).

The more advanced technologies become, the more difficult it is for operators to develop the right skills and discharge all duties. While the risk of insufficient skills should still be borne by the operators, it would be unfair to leave producers entirely out of the equation. Rather, producers have to design, describe and market products in a way effectively enabling operators to discharge their duties. Illustration 11. In Illustration 10, it is primarily P who has to alert its customer (A) to the particular features and risks of the software in question, and possibly to offer the necessary training courses, and to monitor the system once it is on the market.

Under many national jurisdictions, a general product monitoring duty on the part of producers has already been developed for the purposes of tort law. In the light of the characteristics of emerging digital technologies, in particular their openness and dependency on the general digital environment, including the emergence of new malware, such a monitoring duty would also be of paramount importance.

108 This Illustration is inspired by a hypothetical of the Commission Staff Working Document (fn 8) 12.


8. Vicarious liability for autonomous systems ([18]–[19])

[18] If harm is caused by autonomous technology used in a way functionally equivalent to the employment of human auxiliaries, the operator’s liability for making use of the technology should correspond to the otherwise existing vicarious liability regime of a principal for such auxiliaries.
[19] The benchmark for assessing performance by autonomous technology in the context of vicarious liability is primarily the one accepted for human auxiliaries. However, once autonomous technology outperforms human auxiliaries, this will be determined by the performance of comparable available technology which the operator could be expected to use, taking into account the operator’s duties of care ([16]).

One option proposed for addressing the risks of emerging digital technology is the potential expansion of the notion of vicarious liability, leaving the respective national regime of liability for others intact, but expanding it (either directly or by way of analogy) to functionally equivalent situations where use is made of autonomous technology instead of using a human auxiliary.109 This may complement strict liability within the meaning of [9]–[12], and fault liability based on the notion of enhanced duties of care within the meaning of [16].110

Illustration 12. A hospital uses an AI-driven surgical robot. Despite the fact that the hospital has complied with all possible duties of care, a patient is harmed because the robot malfunctions in a way nobody could have foreseen. The hospital should be liable, in any case, under the principle outlined in [18].

The scope and conditions for the application of vicarious liability vary from one country to another, as a result of the different ways national legal systems have developed and the resulting broader or narrower scope of application of strict liability they adopted. However, the development of emerging digital technologies, in particular systems with a high degree of decision-making autonomy, requires that the requirements of equivalence be respected (see [3](b)). Where the use of a human auxiliary would give rise to the liability of a principal, the use of a digital technology tool instead should not allow the principal to avoid liability. Rather, it should give rise to such liability to the same extent.

109 See B.III.4 above. 110 In many legal systems, some or all types of vicarious liability are in any case considered a subcategory of the former or the latter.


However, as the laws stand in many jurisdictions, the notion of vicarious liability at present requires the auxiliary to have misbehaved (though as assessed according to the standards applicable to the principal). In the case of a machine or technology, this triggers the question according to which benchmarks such ‘conduct’ should be assessed. The experts discussed this in some depth, but did not come to a final conclusion. However, the most convincing answer seemed to be that the benchmark for assessing performance by autonomous technology should primarily be the benchmark accepted for human auxiliaries, but once autonomous technology outperforms human auxiliaries in terms of preventing harm, the benchmark should be determined by the performance of comparable technology that is available on the market.111 As there is usually a broad range of technologies available, which may feature very different safety benchmarks, in choosing the appropriate point of comparison, the same principles should apply as with traditional technologies (such as X-ray machines or other equipment), i.e. reference should be made to the operator’s duty of care with regard to the choice of system (see [16](a)). Illustration 13. In the example of the surgical robot (Illustration 12), it is not difficult to establish relevant misconduct where, for example, the cut made by the robot is twice as long as one a human surgeon would have made. If the cut is longer than the best robots on the market would have made, but still shorter than that of a human surgeon, the question of whether the hospital should have bought a better robot must be answered according to the same principles as the question of whether a hospital should have bought a better X-ray machine or employed extra doctors.

9. Logging by design ([20]–[23])

[20] There should be a duty on producers to equip technology with means of recording information about the operation of the technology (logging by design) if such information is typically essential for establishing whether a risk of the technology materialised, and if logging is appropriate and proportionate, taking into account, in particular, the technical feasibility and the costs of logging, the availability of alternative means of gathering such information, the type and magnitude of the risks posed by the technology, and any adverse implications logging may have on the rights of others.

111 R Abbott, 86 Geo Wash L Rev 1 (2018).


[21] Logging must be done in accordance with otherwise applicable law, in particular data protection law and the rules concerning the protection of trade secrets.
[22] The absence of logged information or failure to give the victim reasonable access to the information should trigger a rebuttable presumption that the condition of liability to be proven by the missing information is fulfilled.
[23] If and to the extent that, as a result of the presumption under [22], the operator were obliged to compensate the damage, the operator should have a recourse claim against the producer who failed to equip the technology with logging facilities.

Emerging digital technologies not only give rise to unprecedented complexity and opacity. They also offer unprecedented possibilities of reliable and detailed documentation of events that may enable the identification inter alia of what has caused an accident. This can usually be done using log files, which is why it seems desirable to impose, under certain circumstances, a duty to provide for appropriate logging and to disclose the data to the victim in readable format. Any requirements must definitely be suitable for the goals to be achieved and proportionate, taking into account, in particular, the technical feasibility and costs of logging, the values at stake, the magnitude of the risk, and any adverse implications for the rights of others. Logging would have to be done in such a way that no interested party could manipulate the data and that the victim and/or the person who compensates the victim in the first place, for example an insurance provider, has access to it. Furthermore, it goes without saying that logging must be done in accordance with otherwise applicable law, notably on data protection and the protection of trade secrets.

Illustration 14. There would be a logging duty in the case of AVs. Traffic accidents occur rather frequently and often cause severe harm to the life and health of humans. Motor vehicles are very sophisticated and expensive anyway, so adding logging technology should not significantly increase the costs of production. There is a lot of data that can reasonably be logged and will serve to reconstruct events and causal chains that are both essential for allocating liability (for example by finding out which AV has caused the crash by not replying to a signal sent by the other AV) and could hardly be reconstructed otherwise.

Illustration 15. Logging would not be advisable, however, in the case of an AI-equipped doll for children. The risks associated with the doll are not of a kind where logging would be a suitable response. With regard to the risk of hidden merchandising, meaning that the doll manipulates the child’s mind by mentioning and repeating certain product brands, the negative implications of logging (which would have to include, to a certain extent, the recording of


conversations) for data protection would outweigh any possible benefit. With regard to the risk of a stranger hacking into the doll, the proper response is more cybersecurity to prevent this, not a duty to log.

Failure to comply with a logging and disclosure duty should lead to a rebuttable presumption that the information would, if logged and disclosed, have revealed that the relevant element of liability is fulfilled. Illustration 16. Take the example of a crash between A’s AV and B’s AV, injuring B. The traffic situation was one where, normally, the two AVs would exchange data and ‘negotiate’ which AV enters the lane first. When sued by B, A refuses to disclose the data logged in her AV’s recordings. It is therefore presumed that her AV sent a signal telling B’s AV to enter the lane first, but nevertheless went first itself.

If a product used by the operator failed to contain a logging option (for example, in violation of mandatory regulatory requirements or in contrast to other products of that kind) and the operator is, for this reason, exposed to liability, the operator should be able to pass on to the producer the loss resulting from her inability to comply with the duty to disclose logged data to the victim (which will typically result in the operator’s liability towards the victim). This can be achieved in various ways, including by allowing a separate claim, or by subrogation.

Illustration 17. Imagine that, in Illustration 16, it is not that A refused to disclose the data, but that A’s AV failed to log the kind of data in question. If A had to pay damages to B for this reason only, she should also be able to sue the producer.

10. Safety rules ([24])

[24] Where the damage is of a kind that safety rules were meant to avoid, failure to comply with such safety rules, including rules on cybersecurity, should lead to a reversal of the burden of proving (a) causation, and/or (b) fault, and/or (c) the existence of a defect.

With enhanced complexity, openness and vulnerability, there comes a greater need to introduce new safety rules. Digital product safety differs from product safety in traditional terms in a number of ways, including by taking into account


any effect a product may have on the user’s digital environment. Even more importantly, cybersecurity has become essential.112 As to the consequences of compliance or non-compliance with such rules, the experts considered two different solutions. One solution was that failure to comply with the rules may lead to a reversal of the burden of proof concerning key elements of liability, including causation and fault. The other solution was that compliance with the rules leads to a presumption of the absence of causation or fault. The experts decided in favour of the first solution, because it is better suited to addressing the difficulties of victims when it comes to proving the elements of liability in settings that involve emerging digital technologies. It is in particular the pace at which these technologies are evolving, and the necessity of imposing a duty on providers to monitor the market and react more quickly to new threats than any rulemaker could, that made it seem inappropriate to have a presumption of the absence of causation or fault where a provider complied with the rules. Illustration 18. Imagine there is a new rule on cybersecurity of IoT household equipment, designed to prevent hacking and the resulting harm. The victim’s private Wi-Fi is hacked in a way typical of cybersecurity gaps in IoT equipment. Where the victim can show that a water kettle produced by P failed to comply with the standard of safety under adopted safety rules, the victim could sue P, and the onus would be on P to prove that the damage had been caused by a different device.

It should be stressed that this refers only to rules adopted by the lawmaker, such as those adopted under the ‘New Regulatory Approach’, and not to mere technical standards developing in practice. The reversal of the burden of proof discussed here is essential in the area of fault-based liability. In the case of producer liability, a similar principle is already applied in many jurisdictions in the context of national PLD implementations. It is assumed that failure to meet a safety standard means that the product does not provide the level of safety that the consumer is entitled to expect. Similar reasoning should apply to the liability of the producer of an emerging digital technology ([13]–[15]).

11. Burden of proving causation ([25]–[26])

[25] As a general rule, the victim should continue to be required to prove what caused her harm.

112 Cf the Commission Staff Working Document (fn 8) 20.


[26] Without prejudice to the reversal of the burden of proof proposed in [22] and [24](a), the burden of proving causation may be alleviated in light of the challenges of emerging digital technologies if a balancing of the following factors warrants doing so:
(a) the likelihood that the technology at least contributed to the harm;
(b) the likelihood that the harm was caused either by the technology or by some other cause within the same sphere;
(c) the risk of a known defect within the technology, even though its actual causal impact is not self-evident;
(d) the degree of ex-post traceability and intelligibility of processes within the technology that may have contributed to the cause (informational asymmetry);
(e) the degree of ex-post accessibility and comprehensibility of data collected and generated by the technology;
(f) the kind and degree of harm potentially and actually caused.

As is already the standard rule in all jurisdictions, whoever demands compensation from another should in general prove all necessary requirements for such a claim, including in particular the causal link between the harm to be indemnified on the one hand and the activities or risks within the sphere of the addressee of the claim that may trigger the latter’s liability on the other. This general principle is supported inter alia by concerns of fairness and results from the need to consider and balance the interests of both sides. However, given the practical implications of the complexity and opacity of emerging digital technologies in particular, victims may be in a weaker position to establish causation than in other tort cases, where the events leading to the harm can be more easily analysed in retrospect, even from the victim’s point of view. As is true in all jurisdictions, courts have already in the past found ways to alleviate the burden of proving causation if the claimant’s position is deemed weaker than in typical cases.113 This includes procedural options such as allowing

113 Cf the ruling in CJEU 21.6.2017 C-621/15 Sanofi Pasteur, ECLI:EU:C:2017:484, where the Court greenlighted a rather far-reaching presumption of causation in French court practice on vaccine damage, as long as it did not amount to a full-fledged reversal of the burden of proof, which would have infringed Article 4 of the PLD, which was at stake.


prima facie evidence,114 applying the theory of res ipsa loquitur,115 or lowering the standard of proof in certain categories of cases.116 Some jurisdictions are also prepared to even shift the burden of proving causation entirely if the basis for holding the defendant liable can be proven as particularly strong by the claimant (such as the defendant’s grave misconduct), but the causal link between such faulty behaviour and the claimant’s harm is merely suspected, but not proven, by the evidence available to the claimant.117 Yet another method of aiding the claimant to prove the cause of harm is by focusing on whoever is in control of key evidence but fails to produce it, for example, if the defendant is or should be able to submit internal evidence such as design blueprints, internal expertise, log files or other recordings, but does not produce such evidence in court, either strategically or because the evidence was lost or never generated. Promoting any specific measure would run the risk of interfering with national rules of procedure in particular. However, in order to offer guidance on the further development and approximation of laws, and in order to allow for a more coherent and comparable line of reasoning, the experts think that lowering the

114 Unlike in a fully-fledged reversal of the burden of proof, prima facie evidence is meant to resolve uncertainties rather than bridge non liquet situations. The claimant still has to prove (in compliance with ordinary evidentiary standards) some links in the alleged chain of causation, but is spared proving all of them if experience has shown that the missing link is typically given in other similar cases. The defendant can rebut this by proving (again adhering to traditional standards) that there is a (mere) genuine possibility of a turn of events deviating from the one expected according to said experience, so that the missing link may indeed have not been given in the present case. 115 Res ipsa loquitur is the inference of negligence from the very nature of a harmful event, where the known circumstances are such that no other explanation for the accident seems possible than negligence within the sphere of the defendant, who had been in full control of the incident that may have caused the harm, such as a hospital where the patient has some surgical instrument in her body after an operation. Cf the English case of Byrne v Boadle, (1863) 2 H & C 722, 159 Eng Rep 299, where a barrel of flour fell out of a warehouse onto a pedestrian passing by, who was not required to prove the negligence of the flourmonger, as barrels do not fall out of such premises in the absence of fault within the latter’s sphere. The dealer could in theory have rebutted this by proving some external cause, though. 116 The latter can often be seen in medical malpractice cases. See BA Koch, ‘Medical Liability in Europe: Comparative Analysis’, in BA Koch (ed), Medical Liability in Europe (2011) 611 (nos 46 ff). 117 Again, this is the practice in medical malpractice in countries like Germany, see § 630h paragraph 5 BGB, according to which it is presumed that a treatment error was the cause of the deterioration in the patient’s condition if such an error was grave and in principle prone to causing such harm. See also the Dutch omkeringsregel; cf A Keirse, ‘Going Dutch: How to Address Cases of Causal Uncertainty’, in I Gilead/M Green/BA Koch (eds), Proportional Liability: Analytical and Comparative Perspectives (2013) 227 (232).  


bar for the claimant to prove causation may be advisable for victims of emerging digital technologies if the following factors are at play.
– First, the technology itself may be known to have certain potentially harmful features, which could be taken into account even though it is not (yet) proven that such risks have indeed materialised. If the claimant can prove that there was a defect in a product incorporating emerging digital technologies, thereby creating an extraordinary risk in addition to the ones commonly associated with flawless products, but – again – the harm caused cannot be (fully) traced to said defect, this might still be considered in the overall assessment of how to implement the burden of proving causation.
– If there are multiple possible causes and it remains unclear what exactly triggered the harm (or which combination of potential causes at which percentage of probability), but the likelihood of all possible causes combined that are attributable to one party (e.g. the operator) exceeds a certain threshold (e.g. 50 % or more), this may also contribute to placing the burden of producing evidence rebutting such first-hand impressions onto that party.

Illustration 19. A small delivery robot operated by retailer R injures a pedestrian on the street. It remains unclear which of the following possible causes triggered the accident: the robot may have been defective from the start; R may have failed to install a necessary update that would have prevented the accident; R’s employee E may have overloaded the robot; hacker H may have intentionally manipulated the robot; some teenagers may have jumped onto the robot for fun; a roof tile may have fallen off a nearby building, and so on.118 If the likelihood of all possible causes that are attributable to R significantly exceeds the likelihood of all other possible causes, the onus should be on R to prove that none of the causes within its own sphere triggered the accident.
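
To make this balancing more concrete, a purely hypothetical allocation of likelihoods in Illustration 19 might run as follows: if the failure to install the necessary update and the overloading of the robot by R’s employee were each assessed at a likelihood of 30 %, the causes clearly attributable to R would account for a combined 60 %, whereas the hacker, the teenagers, the falling roof tile and the remaining possible causes would together account for no more than 40 %. The combined likelihood within R’s sphere would then exceed the 50 % threshold mentioned above, and the onus would be on R to prove that none of the causes within its own sphere triggered the accident; if the proportions were reversed, the general rule of [25] would remain unaffected.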



Turning to further aspects that relate to the analysis of the causal events and to who is (or should be) predominantly in control of the expertise and evidence contributing to such analysis, the informational asymmetry typically found between those developing and producing emerging digital technologies on the one hand and third-party victims on the other could be considered as another argument in the overall assessment of who should bear the burden of proving causation and to what extent. This concerns the technology itself, but also potential evidence generated by such technology on the occasion of the harmful event. The latter relates not only to who can retrieve such data, but also to who can read and interpret it (particularly if it is encrypted or only intelligible with specific expert knowledge). One specific aspect in this

118 Cf the hypothetical used by the Commission Staff Working Document (fn 8) 12.


context is whether an item that was involved in the harmful event did (or according to industry standards should) have some logging device installed, which could have collected information capable of shedding light on what actually happened.119 Finally, the type and extent of harm, which is already commonly used as one weighty argument in the overall balance of interests in tort cases, may also contribute to deciding to what extent it should still be the victim who proves the cause of her damage.120

12. Burden of proving fault ([27])

[27] If it is proven that an emerging digital technology caused harm, and liability therefor is conditional upon a person’s intent or negligence, the burden of proving fault should be reversed if disproportionate difficulties and costs of establishing the relevant standard of care and of proving its violation justify it. This is without prejudice to the reversal of the burden of proof proposed in [22] and [24](b).

When the damage results from an activity in which emerging digital technologies play a role, the victim may face significant difficulties in proving facts that substantiate her damages claim based on negligence or fault. This justifies rethinking the traditional approach to proving these conditions of liability. Adopting any rule concerning the distribution of the burden of proving fault requires explaining fault in the first place. There is a variety of meanings attached to this word in various legal systems, ranging from equating fault with wrongfulness of conduct to understanding fault as purely individual and subjective blameworthiness.121 Thus fault-based liability requires:
a) always a breach of a certain duty of care (standard of conduct);
b) in some (probably most) jurisdictions, an intent to breach this duty of care or negligence in so doing;
c) in some (probably the minority of) jurisdictions, a negative ethical assessment of the tortfeasor’s conduct as subjectively reprehensible.

119 See also [22]. 120 As expressed by Article 2:102 paragraph 1 PETL, ‘[t]he scope of protection of an interest depends on its nature; the higher its value, the precision of its definition and its obviousness, the more extensive is its protection.’ 121 See P Widmer, ‘Comparative Report on Fault as a Basis of Liability and Criterion of Imputation (Attribution)’, in P Widmer (ed), Unification of Tort Law: Fault (2005) 331 ff.


The standard of conduct may be set by the statute or otherwise normatively prescribed in the form of regulatory measures or standards and norms enacted by competent authorities. However, it may also be established ex post by the court, on the basis of general criteria such as reasonableness, diligence, etc. Emerging digital technologies, in particular the presence of AI, change the structure of fault-based liability. The two most prominent examples of applying fault-based liability to AI-related damage are the liability of the producer for damage caused by the product he has produced, should have monitored, etc. (liability outside the scope of a strict liability regime such as the one envisaged under [13]–[15] above) and liability of the user (operator) for damage caused by him while using an AI-driven tool. In the case of the producer’s liability (outside strict product liability), the direct cause of damage is a product, but the damaging features of the product are the effect of the producer’s negligence in designing, manufacturing, marketing, monitoring, etc., the product. Thus, proving fault requires proving that the product was not of a required quality and that the producer intentionally or negligently breached an applicable standard of conduct with regard to this product. The advance of emerging digital technologies increases evidentiary difficulties in relation to:
– the quality requirements for the product and details of its actual operation that has led to the damage;
– breach of a duty of care on the part of the producer with regard to the product (including the applicable standard of conduct);
– facts that allow the court to establish that breach of the duty of care was intentional or negligent.

As far as the user’s liability is concerned, the overall structure of liability for actions performed using tools is the following:

an actor + the use of a tool → an act [wrongfulness/fault] → causation → damage [protected interest] suffered by the victim

The challenge of the fault analysis in the traditional model is the assessment of the actor’s behaviour with regard to: (i) his decision to act; (ii) his decision to use a tool at all; (iii) his choice of tool; (iv) his way of using it or controlling or monitoring its operation. Thus the actor is at fault if: (i) his decision on the action itself is wrong and there is intent or negligence in making this decision, or (ii) his decision about


using a tool in the action instead of performing it himself is wrong and there is intent or negligence in making this decision, or (iii) his choice of tool is faulty (he chooses the kind of tool that is unsuitable to the task or the right tool he has chosen subsequently malfunctions) and there was intent or negligence in making this choice, or (iv) he uses his tool or controls/monitors its operation incorrectly and there is intent or negligence in this behaviour. Under the general rule of liability, the burden of proving both breach of a duty of care and intent or negligence lies with the victim. In the traditional model, the proper functioning of the tool and the expected outcome of its operation are known and easy to establish and details of the tool’s actual performance are usually not too difficult to examine. Because of their fast development and their features, described above (opacity, openness, autonomy and limited predictability), emerging digital technologies used as tools add further layers of complexity to the fault-based liability model, challenging the operation of fault-based liability rules on two levels:
a) a structural level: the autonomy and self-learning capacity of the technology may be seen as breaking the causal link between the actor’s conduct and the damage – this is the problem of attribution of the operation and its outcome to a person, which should be solved by legally ascribing all the emerging digital technology’s actions and their effects to the operator of the technology (cf [18]).
b) a practical, fact-finding level: facts on which liability is dependent may be hard to discover and communicate to the court. The difficulty may be:
– finding out and explaining to another person how a given set of input data resulted in the outcome of the AI-operated process and that this amounted to a deficiency in the system;
– showing the tortfeasor breached a standard (level) of care in deciding to use this particular emerging digital technology in this concrete situation, or in operating/monitoring it;
– establishing that the breach of this standard was intentional or negligent.

In theory, the claimant has to prove that the defendant breached an applicable standard (level) of care and did so intentionally or negligently. In practice, however, if the standard of care has not been normatively prescribed (by a statute or otherwise), the claimant’s burden extends to proving (or persuading) what level of care should apply to the defendant’s behaviour. The lack of a clear standard therefore puts the party with the burden of proving the existence of the standard, or its breach, at a disadvantage.


The question is thus whether all these evidentiary difficulties should remain with the victim or whether all or some of them, in all or in specific circumstances, should affect the defendant. Items of proof for which the burden normally lies on the claimant, but which could be allocated to the defendant, are:
– breach of a duty of care by the defendant (the producer, with regard to designing, manufacturing, monitoring, etc., and the user with regard to the choice of technology and operating/monitoring it),
– intention or negligence of the defendant,
– substandard qualities of the technology,
– incorrect functioning of the technology.

In various legal systems, various factors are recognised as justifying modification of the burden of proof in favour of the claimant, in particular:
a) high likelihood of fault,
b) the parties’ practical ability to prove fault,
c) violation of a statutory obligation by the defendant,
d) particular dangerousness of the defendant’s activity that resulted in damage,
e) nature and scope of the damage.

There are also various legal techniques for doing this, from the statutory reversal of the burden of proof to all sorts of procedural tools such as prima facie evidence, presumptions in fact, adverse inference and so on. Features of emerging digital technologies such as opacity, openness, autonomy and limited predictability may often result in unreasonable difficulties or costs for the plaintiff to prove facts necessary for the establishment of fault. At the same time, the proof of relevant facts may be much easier for the defendant (producer or operator of the technology). This asymmetry justifies the reversal of the burden of proof. While, as mentioned above, in many cases courts may achieve similar results with various procedural arrangements, the introduction of a clear rule will ensure the desired convergence and predictability in the application of the law.

13. Causes within the victim’s own sphere ([28])

[28] If a cause of harm is attributable to the victim, the reasons for holding another person liable should apply correspondingly when determining if and to what extent the victim’s claim for compensation may be reduced.


While jurisdictions throughout Europe already acknowledge that conduct or some other risk within the victim’s own sphere may reduce or even exclude her claim for compensation vis-à-vis another, it seems important to state that whatever the NTF of the Expert Group proposes to enhance the rules on liability for emerging digital technologies should apply accordingly if such technologies are being used within the victim’s own sphere. This is in line with the so-called ‘mirror image’ rule of contributory conduct.122 Therefore, if two AVs collide, for example, the above-mentioned criteria for identifying the liable operator ([10]–[11]) should apply correspondingly when determining what effect the contribution of the victim’s own vehicle to her loss has on the liability of the other AV’s operator.

14. Commercial and technological units ([29]–[30])

[29] Where two or more persons cooperate on a contractual or similar basis in the provision of different elements of a commercial and technological unit, and where the victim can demonstrate that at least one element has caused the damage in a way triggering liability but not which element, all potential tortfeasors should be jointly and severally liable vis-à-vis the victim.
[30] In determining what counts as a commercial and technological unit within the meaning of [29], regard is to be had to (a) any joint or coordinated marketing of the different elements; (b) the degree of their technical interdependency and interoperation; and (c) the degree of specificity or exclusivity of their combination.

Among the many challenges for victims of emerging digital technologies is the challenge of showing what part of a complex digital ecosystem has caused the damage. This may be particularly hard where different elements have been provided by different parties, creating a significant risk for the victim of suing the wrong party and ending up with no compensation and high litigation costs. It is therefore justified to have special rules for situations where two or more parties cooperate on a contractual or similar basis in the provision of different elements of one and the same digital ecosystem, forming a commercial and technological unit.

122 Cf comment 5 on Article VI-5:102 DCFR (the mirror principle) and M Martín-Casals/U Magnus, ‘Comparative Conclusions’, in M Martín-Casals/U Magnus (eds), Unification of Tort Law: Contributory Negligence (2004) 259 (263 ff), highlighting that this mirror is quite ‘blurred’ (at 264).  


In these situations, all potential tortfeasors should be jointly and severally liable towards the victim where the victim can demonstrate that at least one element has caused the damage in a way triggering liability, but not which element. Illustration 20. A smart alarm system produced by manufacturer A was added to a smart home environment produced by B and set up and installed by C. This smart home hub runs on an ecosystem developed by provider D. A burglary occurs, but the police are not duly alerted by the alarm system, so significant damage is caused.123 A, B and D are linked by sophisticated contractual arrangements concerning the interoperation of the relevant components each of them supplies and any related marketing. If it can be shown that the malfunctioning was not caused by C (or an external cause), but if it remains unclear what the situation is between A, B and D, the home owner should be able to sue A, B and D jointly. Any one of them is free to prove in proceedings that it was not the commercial and technological unit that caused the malfunctioning, but if not, the home owner can hold them jointly and severally liable.

The rationale behind this is, on the one hand, that there might be serious undercompensation of victims in an emerging digital technologies scenario as compared with the functionally equivalent situation of the past when alarm systems used to be manufactured by one clearly identifiable producer (and any responsibility on the part of the suppliers of components would have come on top of that) without any significant interaction with the other components of an ecosystem. This may even create false incentives, as providers might be tempted to artificially split up the ecosystems they provide into independent components, thereby obscuring causal links and diluting responsibility. In any case, it should not be the victim who ultimately bears the risk of a particular internal structure on the provider’s side in a situation where there could just as well have been one provider. It is also more efficient to hold all potential injurers liable in such cases, as the different providers are in the best position to control risks of interaction and interoperability and to agree upfront on the distribution of the costs of accidents. It may be difficult, in borderline cases, to define what still qualifies as a commercial and technological unit. Factors to be taken into account will be, primarily, any joint or coordinated marketing of the elements, but also the degree of technical interdependency and interoperation between the elements and the degree of specificity or exclusivity of their combination.

Illustration 21. Imagine there was, in Illustration 20, also network provider E who could have caused the problem because of a temporary interruption of the internet connection. However, smart home equipment normally just needs network connectivity, but not network connectivity from a particular provider, and enhanced cooperation between A, B and D on the one hand and E on the other cannot be expected by the consumer. Things might be different in the rather exceptional case that this was in fact offered as a package, with E marketing her services on the strength of their being particularly reliable as a basis for this type of smart home ecosystem.

123 Based on the example used in the Commission Staff Working Document (fn 8) 15 f.

Commercial and technological units may also become relevant at the stage of redress between multiple tortfeasors, whether or not the notion of commercial and technological units had already been relied on by the victim (see [31]).

15. Redress between multiple tortfeasors ([31])

[31] Where more than one person is liable for the same damage, liability to the victim is usually solidary (joint). Redress claims between tortfeasors should only be for identified shares (several), unless some of them form a commercial and/or technological unit ([29]–[30]), in which case the members of this unit should be jointly and severally liable for their cumulative share also to the tortfeasor seeking redress.

One of the most pressing problems for victims in modern digital ecosystems is that, due to enhanced complexity and opacity, they often cannot find out and prove which of several elements has actually caused an accident (the classic alternative causation scenario).

Illustration 22. A patient’s artery is cut by an AI-driven surgical robot either due to a failure of the surgeon operating the robot, or due to the wrong execution of the surgeon’s movements by the robot. If so, neither of the two potential causes satisfies the conditio sine qua non test (‘but for’ test), because if either one of them is hypothetically disregarded, the damage may still have been caused by the remaining respective other event(s). The consequence would be that neither of these suspected reasons why the victim was harmed could trigger liability, so the victim could – at least in some legal systems – end up without a claim for compensation, despite the known certainty that one of the two or more events was indeed the cause of damage.

Legal systems in the Member States react very differently to such scenarios, and each solution has its own drawbacks.124 Where a person caused damage to the victim and the same damage is also attributable to another person, the liability of

124 The PETL have opted for the solution that each of multiple potential tortfeasors should only be held liable for a share of the total loss that corresponds to the probability that it might have been them, which – in cases where this share cannot be determined – typically means per capita: Article 3:103 paragraph 1 PETL provides: ‘In case of multiple activities, where each of them alone would have been sufficient to cause the damage, but it remains uncertain which one in fact


multiple tortfeasors is normally joint liability,125 i.e. the victim may request payment of the full sum or part of the sum from any of the multiple tortfeasors, at the victim’s discretion, but the total sum requested may not exceed the full sum due. There may exceptionally be situations where there is a reasonable basis for attributing only part of the damage to each of the tortfeasors, in which case liability may also be several.126 At the redress stage, liability of other tortfeasors towards the tortfeasor who has paid damages to the victim is normally several, i.e. other tortfeasors are liable only for their individual share of responsibility for the damage.127 There is no reason to deviate from these principles in the context of emerging digital technologies, and this is why [31] suggests several liability at the redress stage as a general rule. However, the complexity and opacity of emerging digital technology settings that already make it difficult for a victim to get relief in the first place also make it difficult for the paying tortfeasor to identify shares and seek redress from the other tortfeasors. Nevertheless, despite complexity and opacity, it is often possible to identify two or more tortfeasors who form a commercial and/or technological unit (see [29]–[30]). This should be relevant at the redress stage too, i.e. members of that unit should be liable jointly to indemnify another tortfeasor who is not a member of the unit and has paid damages to the victim exceeding his share.

Illustration 23. The producer of hardware has a contract with a software provider and another one with the provider of several cloud services, all of which have caused the damage, and all of which collaborate on a contractual basis. Where another tortfeasor has paid compensation to the victim and seeks redress, the three parties may be seen as a commercial unit, and the paying tortfeasor should be able to request payment of the whole cumulative share from any of the three parties.

As has been explained in the context of [29]–[30], this is also in the interests of efficiency, as parties are incentivised to make contractual arrangements for tort claims in advance.

caused it, each activity is regarded as a cause to the extent corresponding to the likelihood that it may have caused the victim’s damage.’ (emphasis added). This proportional (or several) liability leads to an overall fairer outcome when looking at all parties involved, but the victim is at least worse off insofar as she will have to collect compensation from all potential injurers and bear the risk of each injurer’s insolvency. See the comparative in-depth analysis of this way of dealing with causal uncertainty in I Gilead/MD Green/BA Koch (eds), Proportional Liability: Analytical and Comparative Perspectives (2013). 125 Cf Article 9:101 paragraph 1 PETL. 126 Cf Article 9:101 paragraph 3 PETL. 127 Cf Article 9:102 paragraph 4 PETL.


16. Damage to data ([32])

[32] Damage caused to data may lead to liability where
(a) liability arises from contract; or
(b) liability arises from interference with a property right in the medium on which the data was stored or with another interest protected as a property right under the applicable law; or
(c) the damage was caused by conduct infringing criminal law or other legally binding rules whose purpose is to avoid such damage; or
(d) there was an intention to cause harm.

In terms of damage caused, the emergence of digital technologies has brought about some gradual shifts, but only little disruptive change. There is one exception, which is strictly speaking likewise a gradual change, but whose dimension is such that it may be considered disruptive: the significance of damage to data, such as by the deletion, deterioration, contamination, encryption, alteration or suppression of data. With much of our lives and our ‘property’ becoming digital, it is no longer appropriate to limit liability to the tangible world. However, neither is it appropriate to simply equate data with tangible property for the purposes of liability. Most legal systems do not have much of an issue when it comes to contractual liability, in particular where there was negligence of the contracting partner.

Illustration 24. A stores all her files in cloud space provided by cloud space provider C on the basis of a contract. C failed to properly secure the cloud space, which is why an unknown hacker deletes all of A’s photos. C will normally be liable to A on a contractual basis. Liability would in any case be for the economic loss, e.g. any costs A has to incur for restoring the files. Whether or not A would receive compensation for the non-economic loss associated with the loss of family memories would depend on the national legal system in question.

Things are less obvious for liability in tort, at least in a number of jurisdictions. For a long time, some jurisdictions have been solving the issue by considering damage to data as damage to the physical medium on which the data was stored. This should still be possible. Illustration 25. Imagine A had stored all her files on her personal computer’s hard disk drive at home. Neighbour B negligently damages the computer, making the files illegible. Irrespective of the qualification of damage to data, this was in any case unlawful damage to A’s tangible property (the hard disk drive), and already for this reason B would be liable.


However, this approach does not lead to satisfactory results where the owner of the medium is not identical with the person who has a protected legal interest in the data. The most difficult question is what amounts to a protected legal interest that is sufficiently akin to property. The NTF of the Expert Group discussed in some depth whether there should also be liability in tort where the relevant data was protected by intellectual property law or a similar regime, such as database protection or trade secret protection. However, at the end of the day it does not seem to make sense to focus on IP protection, because the reasons the legislator introduces IP rights for intellectual achievement have little to do with the reasons a particular copy on a particular medium should be protected. Illustration 26. A has all her files stored in cloud space provided by C. Without any negligence on C’s part, B negligently damages C’s servers and all of A’s files are deleted. It is not clear why it should make a difference to B’s liability whether (a) the files contained text or photos to which A held the copyright, (b) the files contained text or photos to which third parties held the copyright, or (c) the files contained machine data of great economic value, to which nobody held any copyright or other IP right.

Depending on the applicable legal system, there may, however, be other legal interests that are protected with third-party effect (not only against a contracting party or other particular party), such as possession. Data being sufficiently akin to property is just one of the justifications for recognising tort liability where data has been damaged. Alternatively, there should be liability where the damage has been caused by conduct amounting to a criminal act, in particular an activity that is unlawful under international law such as the Budapest Convention on Cybercrime,128 or where it has infringed other conduct-related rules such as product safety legislation whose purpose is to avoid such damage. Illustration 27. If B in Illustration 26 hacks the cloud space and deletes A’s files, this normally qualifies as criminal conduct and B should be liable.

This purpose should ideally be expressed by the language of such legislation. One example where this has been made very clear is the General Data Protection Regulation (GDPR): Article 82 explicitly states that there is liability where damage has been caused by infringing the requirements of the GDPR.

128 Convention on Cybercrime, Council of Europe Treaty No. 185, 23 November 2001 (‘Budapest Convention’).


In defining such conduct-related rules the law should give due consideration, in particular, to the ubiquity of data and its significance as an asset. While it would theoretically be possible to introduce, for example, a standard stating very broadly that it is generally prohibited to access, modify etc. any data controlled by another person and to attach liability if this standard is breached, this might result in excessive liability risks because all of us are, in one way or another, constantly accessing and modifying data controlled by others. Last but not least, most jurisdictions would agree that damage to data should lead to liability where the tortfeasor was acting with an intention to cause harm.

17. Insurance ([33])

[33] The more frequent or severe potential harm resulting from emerging digital technology, and the less likely the operator is able to indemnify victims individually, the more suitable mandatory liability insurance for such risks may be.

Statutory strict liability regimes in particular129 often come with a requirement that the person to whom the risk is attributable must take out insurance cover against her risk of liability. This is typically explained with a need to protect future victims against the risk of the liable person’s insolvency.130 However, from an economic analysis point of view, the insurance requirement rather fosters the internalisation of the costs of the activities that the liable person (permissibly) pursues.131 Either way, compulsory liability insurance should not be introduced without a careful analysis of whether it is really needed, rather than automatically linked

129 Mandatory liability insurance is by no means exclusively linked to strict liability; see the extensive list of statutory insurance requirements for both strict and fault liability in A Fenyves et al (eds), Compulsory Liability Insurance from a European Perspective (2016) 445 ff. 130 D Rubin, ‘Conclusions’, in A Fenyves (fn 129) 431. Cf also the Commission Staff Working Document (fn 8) 21: ‘In order to facilitate the victim’s compensation and protecting the victim from the risk of insolvency of the liable person, it could be discussed, among other solutions, whether various actors in the value chain should be required to take out insurance coverage as it is the case today for cars.’ 131 M Faure, ‘Economic Criteria for Compulsory Insurance’, The Geneva Papers on Risk and Insurance 31 (2006) 149, who also highlights at 158 ‘that lawyers often view especially third-party insurance as an instrument of victim protection, whereas economists would stress the fact that insurance is an instrument to remove risk from the risk-averse injurer or to cure the risk of underdeterrence’.  

Final Report of the New Technologies Formation

17. Insurance

389

to a certain activity. After all, the tortfeasor may be able to compensate victims of her activities out of her own funds if the overall losses to be expected can be covered even without insurance. Also, the market may simply not offer insurance cover for a certain risk, particularly if it is difficult to calculate due to missing experience, which is quite likely with new technologies (and may therefore also be a problem with emerging digital technologies). Requiring insurance in the latter situation may effectively prevent the deployment of the technology if this requires proof of insurance despite the fact that no-one on the market is willing to underwrite such yet unknown risks. This may in part be remedied by capping liability for certain risks at a predetermined (though regularly adjusted) amount, as is often the case with statutory strict liability regimes. One could also imagine a less specific requirement to provide cover (so not necessarily by taking out insurance, but also other financial securities).132 Nevertheless, as experience in at least some fields (mostly motorised traffic) has shown, mandatory liability insurance can work well and is indeed appropriate under certain conditions. From an insurance perspective, certain sectors are the most suited to compulsory insurance schemes, including transportation, industries with a high potential for personal injury and/or environmental harm, hazardous activities and certain professional sectors.133 Therefore, it may indeed be advisable to make liability insurance cover compulsory for certain emerging digital technologies. This is particularly true for highly significant risks (which may either lead to substantial harm134 and/or cause frequent losses), where it seems unlikely that potential injurers will be capable of compensating all victims themselves (either out of their own funds, with the help of alternative financial securities, or through voluntary self-insurance). If mandatory liability insurance is introduced, the insurer should have a recourse claim against the tortfeasor. In risk scenarios comparable to those of motorised traffic, a direct action of victims against the insurer may also be advisable.135

132 D Rubin (fn 130) 436; M Faure (fn 131) 162 f.
133 B Tettamanti/H Bär/J-C Werz, ‘Compulsory Liability Insurance in a Changing Legal Environment – An Insurance and Reinsurance Perspective’, in A Fenyves (fn 129) 343 (359).
134 However, one should keep in mind that the risk of extremely high or even catastrophic losses may not be (fully) insurable and require, for example, a public-private partnership, as experience in the US has shown with covering the risks of nuclear power plants ().
135 Cf Article 15:101 Principles of European Insurance Contract Law (PEICL).


18. Compensation funds ([34])

[34] Compensation funds may be used to protect tort victims who are entitled to compensation according to the applicable liability rules, but whose claims cannot be satisfied.

If the liability regimes described above (producer’s and operator’s strict liability and the wrongdoer’s fault-based liability) function properly, there is no need to establish new kinds of compensation funds, funded and operated by the state or other institutions and aiming to compensate victims for losses suffered as a result of operating emerging digital technologies. It is advisable, however, to ensure that, in the areas where compulsory liability insurance is introduced, a compensation fund is also in place to redress damage caused by an unidentified or uninsured technology.136 Article 10 of the Motor Insurance Directive may serve as a model for such a scheme.

As hacking is a serious threat to users of software-based technologies, and traditional tort law rules may often prove insufficient because of the victim’s inability to identify the tortfeasor, it may be advisable to introduce a non-fault compensation scheme equivalent to that applicable to victims of violent crimes,137 if and to the extent that a cybercrime constitutes an offence equivalent to the latter. Persons who have suffered serious personal injuries as a result of cybercrime could therefore be treated in the same way as victims of violent crime.

136 This is also supported by the EP Resolution on ‘Civil Law Rules on Robotics’ (fn 4), no 59 lit b. However, Parliament also suggests expanding the scope of a compensation fund in lit c, combining it with limited liability of those contributing to such a fund. In lit d, Parliament considers ‘to create a general fund for all smart autonomous robots’ or individual funds per category of robots. Such proposals are also put forward in academic writing; see, for example, K Abraham/R Rabin, ‘Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era’.
137 Under national implementations of Council Directive 2004/80/EC of 29 April 2004 relating to compensation to crime victims.

Annex: The New Technologies Formation of the Expert Group on Liability for New Technologies

Members
– Ryan ABBOTT (United States of America)
– Georg BORGES (Germany)
– Eugenia DACORONIA (Greece)
– Nathalie DEVILLIER (France)
– Marlena JANKOWSKA-AUGUSTYN (Poland)
– Ernst KARNER (Austria)
– Bernhard Alexander KOCH (Austria)
– Alžběta KRAUSOVA (Czech Republic)
– Piotr MACHNIKOWSKI (Poland)
– Maria Lillà MONTAGNANI (Italy)
– Marie MOTZFELDT (Denmark)
– Finbarr MURPHY (Ireland)
– Ugo PAGALLO (Italy)
– Teresa RODRIGUEZ DE LAS HERAS BALLELL (Spain)
– Gerald SPINDLER (Germany)
– Christiane WENDEHORST (Austria)

Institutional Observers from the Product Liability Formation
– Cooley (UK) LLP (United Kingdom)
– Universidad Carlos III de Madrid (Spain)


Publications

Principles of European Tort Law Series

Volume 1: The Limits of Liability: Keeping the Floodgates Shut. Edited by Jaap Spier. Kluwer Law International, The Hague. Hardcover. ISBN 90-411-0169-1. 1996, 162 pp
Volume 2: The Limits of Expanding Liability: Eight Fundamental Cases in a Comparative Perspective. Edited by Jaap Spier. Kluwer Law International, The Hague. Hardcover. ISBN 90-411-0581-6. 1998, 244 pp
Volume 3: Unification of Tort Law: Wrongfulness. Edited by Helmut Koziol. Kluwer Law International, The Hague. Hardcover. ISBN 90-411-1019-4. 1998, 144 pp
Volume 4: Unification of Tort Law: Causation. Edited by Jaap Spier. Kluwer Law International, The Hague. Hardcover. ISBN 90-411-1325-8. 2000, 161 pp
Volume 5: Unification of Tort Law: Damages. Edited by Ulrich Magnus. Kluwer Law International, The Hague. Hardcover. ISBN 90-411-1481-5. 2001, 225 pp
Volume 6: Unification of Tort Law: Strict Liability. Edited by Bernhard A Koch and Helmut Koziol. Kluwer Law International, The Hague. Hardcover. ISBN 90-411-1705-9. 2002, 444 pp
Volume 7: Unification of Tort Law: Liability for Damage Caused by Others. Edited by Jaap Spier. Kluwer Law International, The Hague. Hardcover. ISBN 90-411-2185-4. 2003, 335 pp
Volume 8: Unification of Tort Law: Contributory Negligence. Edited by Ulrich Magnus and Miquel Martín-Casals. Kluwer Law International, The Hague. Hardcover. ISBN 90-411-2220-6. 2004, 300 pp


Volume 9: Unification of Tort Law: Multiple Tortfeasors. Edited by WV Horton Rogers. Kluwer Law International, The Hague. Hardcover. ISBN 90-411-2319-9. 2004, 313 pp
Volume 10: Unification of Tort Law: Fault. Edited by Pierre Widmer. Kluwer Law International, The Hague. Hardcover. ISBN 90-411-2098-X. 2005, 391 pp

Tort and Insurance Law Series

Volume 1: Cases on Medical Malpractice in a Comparative Perspective. Edited by Michael Faure and Helmut Koziol. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-6056-5. 2001, 331 pp
Volume 2: Damages for Non-Pecuniary Loss in a Comparative Perspective. Edited by WV Horton Rogers. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-6057-2. 2001, 318 pp
Volume 3: The Impact of Social Security Law on Tort Law. Edited by Ulrich Magnus. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-6083-1. 2003, 312 pp
Volume 4: Compensation for Personal Injury in a Comparative Perspective. Edited by Bernhard A Koch and Helmut Koziol. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-6082-4. 2003, 501 pp
Volume 5: Deterrence, Insurability and Compensation in Environmental Liability. Future Developments in the European Union. Edited by Michael Faure. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-6891-2. 2003, 408 pp
Volume 6: Der Ersatz frustrierter Aufwendungen. Vermögens- und Nichtvermögensschaden im österreichischen und deutschen Recht. By Thomas Schobel. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-6093-0. 2003, 342 pp


Volume 7: Liability for and Insurability of Biomedical Research with Human Subjects in a Comparative Perspective. Edited by Jos Dute, Michael G Faure and Helmut Koziol. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-5812-8. 2004, 445 pp
Volume 8: No-Fault Compensation in the Health Care Sector. Edited by Jos Dute, Michael G Faure and Helmut Koziol. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-5821-0. 2004, 492 pp
Volume 9: Pure Economic Loss. Edited by Willem H van Boom, Helmut Koziol and Christian A Witting. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-5790-9. 2004, 214 pp
Volume 10: Liber Amicorum Pierre Widmer. Edited by Helmut Koziol and Jaap Spier. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-5792-3. 2003, 376 pp
Volume 11: Terrorism, Tort Law and Insurance. A Comparative Survey. Edited by Bernhard A Koch. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-5795-4. 2004, 313 pp
Volume 12: Abschlussprüfer. Haftung und Versicherung. Edited by Helmut Koziol and Walter Doralt. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-5822-7. 2004, 180 pp
Volume 13: Persönlichkeitsschutz gegenüber Massenmedien/The Protection of Personality Rights against Invasions by Mass Media. Edited by Helmut Koziol and Alexander Warzilek. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-5841-8. 2005, 713 pp
Volume 14: Financial Compensation for Victims of Catastrophes. A Comparative Legal Approach. Edited by Michael Faure and Ton Hartlief. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-5849-4. 2006, 466 pp
Volume 15: Entwurf eines neuen österreichischen Schadenersatzrechts. Edited by Irmgard Griss, Georg Kathrein and Helmut Koziol. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-5865-4. 2006, 146 pp


Volume 16: Tort Law and Liability Insurance. Edited by Gerhard Wagner. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-5850-0. 2005, 361 pp
Volume 17: Children in Tort Law. Part I: Children as Tortfeasors. Edited by Miquel Martín-Casals. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-5848-7. 2006, 476 pp
Volume 18: Children in Tort Law. Part II: Children as Victims. Edited by Miquel Martín-Casals. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-5869-2. 2007, 320 pp
Volume 19: Tort and Regulatory Law. Edited by Willem H van Boom, Meinhard Lukas and Christa Kissling. Verlag Österreich, Vienna. Hardcover. ISBN 978-3-7046-5870-8. 2007, 477 pp
Volume 20: Shifts in Compensating Work-Related Injuries and Diseases. Edited by Saskia Klosse and Ton Hartlief. Verlag Österreich, Vienna. Hardcover. ISBN 978-3-7046-5913-2. 2007, 236 pp
Volume 21: Shifts in Compensation for Environmental Damage. Edited by Michael Faure and Albert Verheij. Verlag Österreich, Vienna. Hardcover. ISBN 978-3-7046-5911-8. 2007, 338 pp
Volume 22: Shifts in Compensation between Private and Public Systems. Edited by Willem H van Boom and Michael Faure. Verlag Österreich, Vienna. Hardcover. ISBN 978-3-7046-5912-5. 2007, 246 pp
Volume 23: Tort Law of the European Community. Edited by Helmut Koziol and Reiner Schulze. Verlag Österreich, Vienna. Hardcover. ISBN 978-3-7046-5935-4. 2008, 693 pp
Volume 24: Economic Loss Caused by Genetically Modified Organisms. Liability and Redress for the Adventitious Presence of GMOs in Non-GM Crops. Edited by Bernhard A Koch. Verlag Österreich, Vienna. Hardcover. ISBN 978-3-7046-5937-8. 2008, 747 pp


Volume 25: Punitive Damages. Common Law and Civil Law Perspectives. Edited by Helmut Koziol and Vanessa Wilcox. Verlag Österreich, Vienna. Hardcover. ISBN 978-3-7046-6117-3. 2009, 335 pp
Volume 26: Aggregation and Divisibility of Damage. Edited by Ken Oliphant. Verlag Österreich, Vienna. Hardcover. ISBN 978-3-7046-6116-6. 2009, 568 pp
Volume 27: Damage Caused by Genetically Modified Organisms. Comparative Survey of Redress Options for Harm to Persons, Property or the Environment. Edited by Bernhard A Koch. de Gruyter, Berlin/New York. Hardcover. ISBN 978-3-89949-811-0. eBook. ISBN 978-3-89949-812-7. 2010, 954 pp
Volume 28: Loss of Housekeeping Capacity. Edited by Ernst Karner and Ken Oliphant. de Gruyter, Berlin/Boston. Hardcover. ISBN 978-3-89949-813-4. eBook. ISBN 978-3-89949-814-1. 2012, 339 pp
Volume 29: Medical Liability in Europe. A Comparison of Selected Jurisdictions. Edited by Bernhard A Koch. de Gruyter, Berlin/Boston. Hardcover. ISBN 978-3-11-026010-6. eBook. ISBN 978-3-11-026016-8. 2011, 701 pp
Volume 30: Tort Law in the Jurisprudence of the European Court of Human Rights. Edited by Attila Fenyves, Ernst Karner, Helmut Koziol and Elisabeth Steiner. de Gruyter, Berlin/Boston. Hardcover. ISBN 978-3-11-025966-7. eBook. ISBN 978-3-11-026000-7. 2011, 906 pp
Volume 31: Employers’ Liability and Workers’ Compensation. Edited by Ken Oliphant and Gerhard Wagner. de Gruyter, Berlin/Boston. Hardcover. ISBN 978-3-11-026996-3. eBook. ISBN 978-3-11-027021-1. 2012, 619 pp
Volume 32: Medical Malpractice and Compensation in Global Perspective. Edited by Ken Oliphant and Richard W Wright. de Gruyter, Berlin/Boston. Hardcover. ISBN 978-3-11-026997-0. eBook. ISBN 978-3-11-027023-5. 2013, 573 pp


Volume 33: Proportional Liability: Analytical and Comparative Perspectives. Edited by Israel Gilead, Michael D Green and Bernhard A Koch. de Gruyter, Berlin/Boston. Hardcover. ISBN 978-3-11-028253-5. eBook. ISBN 978-3-11-028258-0. 2013, 376 pp
Volume 34: Mass Torts in Europe: Cases and Reflections. Edited by Willem H van Boom and Gerhard Wagner. de Gruyter, Berlin/Boston. Hardcover. ISBN 978-3-11-034945-0. eBook. ISBN 978-3-11-034946-7. 2014, 311 pp
Volume 35: Compulsory Liability Insurance from a European Perspective. Edited by Attila Fenyves, Christa Kissling, Stefan Perner and Daniel Rubin. de Gruyter, Berlin/Boston. Hardcover. ISBN 978-3-11-048469-4. eBook. ISBN 978-3-11-048617-9. 2016, 564 pp
Volume 36: Directors’ & Officers’ (D & O) Liability. Edited by Simon Deakin, Helmut Koziol and Olaf Riss. de Gruyter, Berlin/Boston. Hardcover. ISBN 978-3-11-048971-2. eBook. ISBN 978-3-11-049149-4. 2018, 1008 pp

European Tort Law Yearbook

European Tort Law 2001. Edited by Helmut Koziol and Barbara C Steininger. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-6085-5. 2002, 571 pp
European Tort Law 2002. Edited by Helmut Koziol and Barbara C Steininger. Verlag Österreich, Vienna. Softcover. ISBN 3-211-00486-6. 2003, 596 pp
European Tort Law 2003. Edited by Helmut Koziol and Barbara C Steininger. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-5825-8. 2004, 493 pp


European Tort Law 2004. Edited by Helmut Koziol and Barbara C Steininger. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-5847-0. 2005, 674 pp
European Tort Law 2005. Edited by Helmut Koziol and Barbara C Steininger. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-5871-5. 2006, 711 pp
European Tort Law 2006. Edited by Helmut Koziol and Barbara C Steininger. Verlag Österreich, Vienna. Hardcover. ISBN 978-3-7046-5908-8. 2008, 576 pp
European Tort Law 2007. Edited by Helmut Koziol and Barbara C Steininger. Verlag Österreich, Vienna. Hardcover. ISBN 978-3-7046-5938-5. 2008, 661 pp
European Tort Law 2008. Edited by Helmut Koziol and Barbara C Steininger. Verlag Österreich, Vienna. Hardcover. ISBN 978-3-7046-6120-3. 2009, 708 pp
European Tort Law 2009. Edited by Helmut Koziol and Barbara C Steininger. de Gruyter, Berlin/New York. Hardcover. ISBN 978-3-11-024606-3. eBook. ISBN 978-3-11-024607-0. 2010, 735 pp
European Tort Law 2010. Edited by Helmut Koziol and Barbara C Steininger. de Gruyter, Berlin/Boston. Hardcover. ISBN 978-3-11-023941-6. eBook. ISBN 978-3-11-023942-3. 2011, 706 pp
European Tort Law 2011. Edited by Ken Oliphant and Barbara C Steininger. de Gruyter, Berlin/Boston. Hardcover. ISSN 2190-7773. eBook. ISSN 2190-7781. 2012, 789 pp


European Tort Law 2012. Edited by Ken Oliphant and Barbara C Steininger. de Gruyter, Berlin/Boston. Hardcover. ISSN 2190-7773. eBook. ISSN 2190-7781. 2013, 816 pp
European Tort Law 2013. Edited by Ernst Karner and Barbara C Steininger. de Gruyter, Berlin/Boston. Hardcover. ISSN 2190-7773. eBook. ISSN 2190-7781. 2014, 824 pp
European Tort Law 2014. Edited by Ernst Karner and Barbara C Steininger. de Gruyter, Berlin/Boston. Hardcover. ISSN 2190-7773. eBook. ISSN 2190-7791. 2015, 758 pp
European Tort Law 2015. Edited by Ernst Karner and Barbara C Steininger. de Gruyter, Berlin/Boston. Hardcover. ISSN 2190-7773. eBook. ISSN 2190-7791. 2016, 764 pp
European Tort Law 2016. Edited by Ernst Karner and Barbara C Steininger. de Gruyter, Berlin/Boston. Hardcover. ISSN 2190-7773. eBook. ISSN 2190-7781. 2017, 747 pp
European Tort Law 2017. Edited by Ernst Karner and Barbara C Steininger. de Gruyter, Berlin/Boston. Hardcover. ISSN 2190-7773. eBook. ISSN 2190-7781. 2018, 762 pp
European Tort Law 2018. Edited by Ernst Karner and Barbara C Steininger. de Gruyter, Berlin/Boston. Hardcover. ISSN 2190-7773. eBook. ISSN 2190-7781. 2019, 803 pp


European Tort Law 2019. Edited by Ernst Karner and Barbara C Steininger. de Gruyter, Berlin/Boston. Hardcover. ISSN 2190-7773. eBook. ISSN 2190-7781. 2020, 797 pp
European Tort Law 2020. Edited by Ernst Karner and Barbara C Steininger. de Gruyter, Berlin/Boston. Hardcover. ISSN 2190-7773. eBook. ISSN 2190-7781. 2021, 778 pp
European Tort Law 2021. Edited by Ernst Karner and Barbara C Steininger. de Gruyter, Berlin/Boston. Hardcover. ISSN 2190-7773. eBook. ISSN 2190-7781. 2022, 777 pp

European Group on Tort Law

Principles of European Tort Law. Text and Commentary. Edited by the European Group on Tort Law. Verlag Österreich, Vienna. Softcover. ISBN 978-3-7046-5837-1. 2005, 282 pp
The Liability of Public Authorities in Comparative Perspective. Edited by Ken Oliphant. Intersentia, Cambridge/Antwerp/Portland. Softcover. ISBN 978-1-78068-238-9. 2016, 887 pp
The Borderlines of Tort Law: Interactions with Contract Law. Edited by Miquel Martín-Casals. Intersentia, Cambridge/Antwerp/Portland. Softcover. ISBN 978-1-78068-248-8. 2019, 846 pp
Prescription in Tort Law. Analytical and Comparative Perspectives. Edited by Israel Gilead and Bjarte Askeland. Intersentia, Cambridge/Antwerp/Portland. Softcover. ISBN 978-1-78068-963-0. 2020, 746 pp


Digest of European Tort Law

Volume 1: Essential Cases on Natural Causation. Edited by Bénédict Winiger, Helmut Koziol, Bernhard A Koch and Reinhard Zimmermann. Verlag Österreich, Vienna. Hardcover. ISBN 978-3-7046-5885-2. 2007, 632 pp
Volume 2: Essential Cases on Damage. Edited by Bénédict Winiger, Helmut Koziol, Bernhard A Koch and Reinhard Zimmermann. de Gruyter, Berlin/Boston. Hardcover. ISBN 978-3-11-024848-7. eBook. ISBN 978-3-11-024849-4. 2011, 1175 pp
Volume 3: Essential Cases on Misconduct. Edited by Bénédict Winiger, Ernst Karner and Ken Oliphant. de Gruyter, Berlin/Boston. Hardcover. ISBN 978-3-11-053434-4. eBook. ISBN 978-3-11-053567-9. 2018, 1286 pp

Sino-European Legal Studies

The Aims of Tort Law. Chinese and European Perspectives. Edited by Helmut Koziol. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-7097-0127-0. 2017, 230 pp
The Legal Protection of Personality Rights. Chinese and European Perspectives. Edited by Ken Oliphant, Zhang Pinghua and Chen Lei. Brill | Nijhoff, Leiden/Boston. Hardcover. ISBN 978-9-0042-7629-1. eBook. ISBN 978-9-0043-5171-4. 2018, 227 pp
Tortious and Contractual Liability. Chinese and European Perspectives. Edited by Ernst Karner. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-7097-0275-8. 2021, 263 pp


World Tort Law Society

Product Liability. Fundamental Questions in a Comparative Perspective. Edited by Helmut Koziol, Michael D Green, Mark Lunney, Ken Oliphant and Yang Lixin. de Gruyter, Berlin/Boston. Hardcover. ISBN 978-3-11-054600-2. eBook. ISBN 978-3-11-054581-4. 2017, 610 pp

Others

European Tort Law. Basic Texts. Edited by Ernst Karner, Ken Oliphant and Barbara C Steininger. Jan Sramek Verlag, Vienna. Softcover. ISBN 978-3-7097-0170-6. 2018, 2nd Edition, 466 pp
Grundfragen des Schadenersatzrechts. By Helmut Koziol. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-902638-28-1. 2010, 371 pp
Grundfragen des Schadenersatzrechts aus rechtsvergleichender Sicht. Edited by Helmut Koziol. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-7097-0048-8. 2014, 941 pp
Basic Questions of Tort Law from a Germanic Perspective. By Helmut Koziol. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-902638-85-4. 2012, 380 pp
Basic Questions of Tort Law from a Comparative Perspective. Edited by Helmut Koziol. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-7097-0040-2. 2015, 867 pp
Comparative Stimulations for Developing Tort Law. Edited by Helmut Koziol. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-7097-0060-0. 2015, 273 pp


Harmonisation and Fundamental Questions of European Tort Law. By Helmut Koziol. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-7097-0128-7. 2017, 168 pp
Austrian Private Law. An Overview in Comparison with German Law and with References to English and French Law. By Gabriele Koziol and Helmut Koziol, in cooperation with Andrew Bell and Samuel Fulli-Lemaire. Jan Sramek Verlag, Vienna. Softcover. ISBN 978-3-7097-0156-0. 2017, 119 pp
Grundzüge des japanischen Schadenersatzrechts. By Keizo Yamamoto. Jan Sramek Verlag, Vienna. Softcover. ISBN 978-3-7097-0102-7. 2018, 220 pp
Basic Features of Japanese Tort Law. By Keizo Yamamoto. Jan Sramek Verlag, Vienna. Softcover. ISBN 978-3-7097-0188-1. 2019, 202 pp
Tort Liability Law of China. By Lixin Yang. Jan Sramek Verlag, Vienna. Softcover. ISBN 978-3-7097-0088-4. 2018, 384 pp
Essays in Honour of Helmut Koziol. Edited by Ernst Karner, Ulrich Magnus, Jaap Spier and Pierre Widmer. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-7097-0228-4. 2020, 262 pp
Essays in Honour of Jaap Spier. Edited by Helmut Koziol and Ulrich Magnus. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-7097-0101-0. 2016, 325 pp
Politikerhaftung. The Liabilities of Politicians. Edited by Helmut Koziol. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-7097-0242-0. 2020, 487 pp
Die Haftung von Eisenbahnverkehrs- und Infrastrukturunternehmen im Rechtsvergleich. Edited by Helmut Koziol. Jan Sramek Verlag, Vienna. Softcover. ISBN 978-3-7097-0191-1. 2019, 318 pp


Schadenersatzrechtliche Verantwortlichkeit im internationalen Eisenbahnverkehr. By Gregor Christandl and Olaf Riss. Jan Sramek Verlag, Vienna. Softcover. ISBN 978-3-7097-0273-4. 2021, 161 pp
Die Vereinheitlichung der Haftung von Eisenbahnunternehmen in der Europäischen Union. By Helmut Koziol. Jan Sramek Verlag, Vienna. Softcover. ISBN 978-3-7097-0274-1. 2021, 229 pp
Medienpolitik und Recht. Media Governance, Wahrhaftigkeitspflicht und sachgerechte Haftung. Edited by Helmut Koziol, Josef Seethaler and Thomas Thiede. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-902638-36-6. 2010, 214 pp
Medienpolitik und Recht II. Presserat, WikiLeaks und Redaktionsgeheimnis. Edited by Helmut Koziol, Josef Seethaler and Thomas Thiede. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-902638-63-2. 2013, 216 pp
Tatsachenmitteilungen und Werturteile. Freiheit und Verantwortung. Edited by Helmut Koziol. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-7097-0171-3. 2018, 197 pp
Österreichisches Haftpflichtrecht Band I. Allgemeiner Teil. By Helmut Koziol. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-7097-0225-3. 2020, 4th Edition, 1002 pp
Österreichisches Haftpflichtrecht Band II. Haftung für eigenes und fremdes Fehlverhalten. By Helmut Koziol. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-7097-0173-7. 2018, 3rd Edition, 1074 pp
Österreichisches Haftpflichtrecht Band III. Gefährdungs-, Produkt- und Eingriffshaftung. By Helmut Koziol, Peter Apathy and Bernhard A Koch. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-7097-0022-8. 2014, 3rd Edition, 596 pp
Mangelfolgeschäden in Veräußerungsketten. Am Beispiel der Aus- und Einbaukosten. By Ernst Karner and Helmut Koziol. Jan Sramek Verlag, Vienna. Softcover. ISBN 978-3-902638-84-7. 2012, 96 pp


Zur Anwendbarkeit des UN-Kaufrechts bei Werk- und Dienstleistungen. Am Beispiel der Maschinen- und Industrieanlagenlieferungsverträge. By Ernst Karner and Helmut Koziol. Jan Sramek Verlag, Vienna. Softcover. ISBN 978-3-7097-0052-5. 2015, 91 pp
Obsoleszenzen im österreichischen Recht. Geltendes Recht, Schutzlücken und Reformbedarf. By Helmut Koziol. Jan Sramek Verlag, Vienna. Softcover. ISBN 978-3-7097-0108-9. 2016, 137 pp
Vertragsstrafe und Schadenspauschalierung. By Ernst Karner and Alexander Longin. Jan Sramek Verlag, Vienna. Softcover. ISBN 978-3-7097-0234-5. 2020, 240 pp
Haftungsfreizeichnungen zulasten Dritter. Im Zwischenbereich von Vertrag und Delikt. By Johannes Angyan. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-7097-0257-4. 2021, 351 pp
Recht auf Anpassung des Werkvertrags. By Alexander Longin. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-7097-0258-1. 2021, 396 pp
Mehrstufiger Warenverkehr. Spezifische Probleme bei Vertragsketten. By Helmut Koziol. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-7097-0284-0. 2021, 317 pp
Rechtsunkenntnis und Schadenszurechnung. By Helmut Koziol. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-7097-0309-0. 2022, 189 pp
Erlaubte Notstandshandlungen und zivilrechtliche Ausgleichsansprüche. By David Messner-Kreuzbauer. Jan Sramek Verlag, Vienna. Hardcover. ISBN 978-3-7097-0314-4. 2022, 448 pp


Journal of European Tort Law

The Journal of European Tort Law (JETL) is published three times a year. The General Editor is Professor Ken Oliphant, University of Bristol. Further information, including subscription details and instructions for authors, is available on the JETL website. Queries may be addressed to . Articles, comments and reviews should be submitted for consideration to . The Journal applies a policy of double blind peer review.

Eurotort

EUROTORT is the first comprehensive database of European cases on tort law. Access to the database is free (subject to prior registration) at .
